Exploration server on haptic devices

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information has therefore not been validated.

Direction Specific Biases in Human Visual and Vestibular Heading Perception

Internal identifier: 002239 (Pmc/Curation); previous: 002238; next: 002240


Authors: Benjamin T. Crane [United States]

Source:

RBID: PMC:3517556

Abstract

Heading direction is determined from visual and vestibular cues. Both sensory modalities have been shown to have better direction discrimination for headings near straight ahead. Previous studies of visual heading estimation have not used the full range of stimuli, and vestibular heading estimation has not previously been reported. The current experiments measure human heading estimation in the horizontal plane to vestibular, visual, and spoken stimuli. The vestibular and visual tasks involved 16 cm of platform or visual motion. The spoken stimulus was a voice command speaking a heading angle. All conditions demonstrated direction-dependent biases in perceived headings, such that biases increased for headings further from the fore-aft axis. The bias was larger with the visual stimulus than with the vestibular stimulus in all 10 subjects. For the visual and vestibular tasks, precision was best for headings near fore-aft. The spoken headings had the least bias, and the variation in precision was less dependent on direction. In a separate experiment in which headings were limited to ±45°, the biases were much smaller, demonstrating that the range of headings influences perception. There was a strong and highly significant correlation between the bias curves for visual and spoken stimuli in every subject. The correlations between visual-vestibular and vestibular-spoken biases were weaker but remained significant. The observed biases in both visual and vestibular heading perception qualitatively resembled predictions of a recent population vector decoder model (Gu et al., 2010) based on the known distribution of neuronal sensitivities.


URL:
DOI: 10.1371/journal.pone.0051383
PubMed: 23236490
PubMed Central: 3517556


The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Direction Specific Biases in Human Visual and Vestibular Heading Perception</title>
<author>
<name sortKey="Crane, Benjamin T" sort="Crane, Benjamin T" uniqKey="Crane B" first="Benjamin T." last="Crane">Benjamin T. Crane</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<addr-line>Department of Otolaryngology, University of Rochester, Rochester, New York, United States of America</addr-line>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Department of Otolaryngology, University of Rochester, Rochester, New York</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<addr-line>Department of Neurobiology and Anatomy, University of Rochester, Rochester, New York, United States of America</addr-line>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Department of Neurobiology and Anatomy, University of Rochester, Rochester, New York</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff3">
<addr-line>Department of Bioengineering, University of Rochester, Rochester, New York, United States of America</addr-line>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Department of Bioengineering, University of Rochester, Rochester, New York</wicri:regionArea>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">23236490</idno>
<idno type="pmc">3517556</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3517556</idno>
<idno type="RBID">PMC:3517556</idno>
<idno type="doi">10.1371/journal.pone.0051383</idno>
<date when="2012">2012</date>
<idno type="wicri:Area/Pmc/Corpus">002239</idno>
<idno type="wicri:Area/Pmc/Curation">002239</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Direction Specific Biases in Human Visual and Vestibular Heading Perception</title>
<author>
<name sortKey="Crane, Benjamin T" sort="Crane, Benjamin T" uniqKey="Crane B" first="Benjamin T." last="Crane">Benjamin T. Crane</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<addr-line>Department of Otolaryngology, University of Rochester, Rochester, New York, United States of America</addr-line>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Department of Otolaryngology, University of Rochester, Rochester, New York</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<addr-line>Department of Neurobiology and Anatomy, University of Rochester, Rochester, New York, United States of America</addr-line>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Department of Neurobiology and Anatomy, University of Rochester, Rochester, New York</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff3">
<addr-line>Department of Bioengineering, University of Rochester, Rochester, New York, United States of America</addr-line>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Department of Bioengineering, University of Rochester, Rochester, New York</wicri:regionArea>
</affiliation>
</author>
</analytic>
<series>
<title level="j">PLoS ONE</title>
<idno type="eISSN">1932-6203</idno>
<imprint>
<date when="2012">2012</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>Heading direction is determined from visual and vestibular cues. Both sensory modalities have been shown to have better direction discrimination for headings near straight ahead. Previous studies of visual heading estimation have not used the full range of stimuli, and vestibular heading estimation has not previously been reported. The current experiments measure human heading estimation in the horizontal plane to vestibular, visual, and spoken stimuli. The vestibular and visual tasks involved 16 cm of platform or visual motion. The spoken stimulus was a voice command speaking a heading angle. All conditions demonstrated direction-dependent biases in perceived headings, such that biases increased for headings further from the fore-aft axis. The bias was larger with the visual stimulus than with the vestibular stimulus in all 10 subjects. For the visual and vestibular tasks, precision was best for headings near fore-aft. The spoken headings had the least bias, and the variation in precision was less dependent on direction. In a separate experiment in which headings were limited to ±45°, the biases were much smaller, demonstrating that the range of headings influences perception. There was a strong and highly significant correlation between the bias curves for visual and spoken stimuli in every subject. The correlations between visual-vestibular and vestibular-spoken biases were weaker but remained significant. The observed biases in both visual and vestibular heading perception qualitatively resembled predictions of a recent population vector decoder model (Gu et al., 2010) based on the known distribution of neuronal sensitivities.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Warren, Wh" uniqKey="Warren W">WH Warren</name>
</author>
<author>
<name sortKey="Morris, Mw" uniqKey="Morris M">MW Morris</name>
</author>
<author>
<name sortKey="Kalish, M" uniqKey="Kalish M">M Kalish</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Crowell, Ja" uniqKey="Crowell J">JA Crowell</name>
</author>
<author>
<name sortKey="Banks, Ms" uniqKey="Banks M">MS Banks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Britten, Kh" uniqKey="Britten K">KH Britten</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Warren, Wh" uniqKey="Warren W">WH Warren</name>
</author>
<author>
<name sortKey="Hannon, Dj" uniqKey="Hannon D">DJ Hannon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Warren, Wh" uniqKey="Warren W">WH Warren</name>
</author>
<author>
<name sortKey="Kurtz, Kj" uniqKey="Kurtz K">KJ Kurtz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Crowell, Ja" uniqKey="Crowell J">JA Crowell</name>
</author>
<author>
<name sortKey="Banks, Ms" uniqKey="Banks M">MS Banks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gu, Y" uniqKey="Gu Y">Y Gu</name>
</author>
<author>
<name sortKey="Fetsch, Cr" uniqKey="Fetsch C">CR Fetsch</name>
</author>
<author>
<name sortKey="Adeyemo, B" uniqKey="Adeyemo B">B Adeyemo</name>
</author>
<author>
<name sortKey="Deangelis, Gc" uniqKey="Deangelis G">GC Deangelis</name>
</author>
<author>
<name sortKey="Angelaki, De" uniqKey="Angelaki D">DE Angelaki</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Li, L" uniqKey="Li L">L Li</name>
</author>
<author>
<name sortKey="Sweet, Bt" uniqKey="Sweet B">BT Sweet</name>
</author>
<author>
<name sortKey="Stone, Ls" uniqKey="Stone L">LS Stone</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="D Avossa, G" uniqKey="D Avossa G">G D'Avossa</name>
</author>
<author>
<name sortKey="Kersten, D" uniqKey="Kersten D">D Kersten</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Li, L" uniqKey="Li L">L Li</name>
</author>
<author>
<name sortKey="Peli, E" uniqKey="Peli E">E Peli</name>
</author>
<author>
<name sortKey="Warren, Wh" uniqKey="Warren W">WH Warren</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Telford, L" uniqKey="Telford L">L Telford</name>
</author>
<author>
<name sortKey="Howard, Ip" uniqKey="Howard I">IP Howard</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Johnston, Ir" uniqKey="Johnston I">IR Johnston</name>
</author>
<author>
<name sortKey="White, Gr" uniqKey="White G">GR White</name>
</author>
<author>
<name sortKey="Cumming, Rw" uniqKey="Cumming R">RW Cumming</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Llewellyn, Kr" uniqKey="Llewellyn K">KR Llewellyn</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Warren, R" uniqKey="Warren R">R Warren</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Koenderink, Jj" uniqKey="Koenderink J">JJ Koenderink</name>
</author>
<author>
<name sortKey="Van Doorn, Aj" uniqKey="Van Doorn A">AJ van Doorn</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ernst, Mo" uniqKey="Ernst M">MO Ernst</name>
</author>
<author>
<name sortKey="Banks, Ms" uniqKey="Banks M">MS Banks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Angelaki, De" uniqKey="Angelaki D">DE Angelaki</name>
</author>
<author>
<name sortKey="Gu, Y" uniqKey="Gu Y">Y Gu</name>
</author>
<author>
<name sortKey="Deangelis, Gc" uniqKey="Deangelis G">GC Deangelis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Girshick, Ar" uniqKey="Girshick A">AR Girshick</name>
</author>
<author>
<name sortKey="Landy, Ms" uniqKey="Landy M">MS Landy</name>
</author>
<author>
<name sortKey="Simoncelli, Ep" uniqKey="Simoncelli E">EP Simoncelli</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stocker, Aa" uniqKey="Stocker A">AA Stocker</name>
</author>
<author>
<name sortKey="Simoncelli, Ep" uniqKey="Simoncelli E">EP Simoncelli</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Macneilage, Pr" uniqKey="Macneilage P">PR MacNeilage</name>
</author>
<author>
<name sortKey="Banks, Ms" uniqKey="Banks M">MS Banks</name>
</author>
<author>
<name sortKey="Deangelis, Gc" uniqKey="Deangelis G">GC DeAngelis</name>
</author>
<author>
<name sortKey="Angelaki, De" uniqKey="Angelaki D">DE Angelaki</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fetsch, Cr" uniqKey="Fetsch C">CR Fetsch</name>
</author>
<author>
<name sortKey="Wang, S" uniqKey="Wang S">S Wang</name>
</author>
<author>
<name sortKey="Gu, Y" uniqKey="Gu Y">Y Gu</name>
</author>
<author>
<name sortKey="Deangelis, Gc" uniqKey="Deangelis G">GC Deangelis</name>
</author>
<author>
<name sortKey="Angelaki, De" uniqKey="Angelaki D">DE Angelaki</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gu, Y" uniqKey="Gu Y">Y Gu</name>
</author>
<author>
<name sortKey="Deangelis, Gc" uniqKey="Deangelis G">GC DeAngelis</name>
</author>
<author>
<name sortKey="Angelaki, De" uniqKey="Angelaki D">DE Angelaki</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ivanenko, Yp" uniqKey="Ivanenko Y">YP Ivanenko</name>
</author>
<author>
<name sortKey="Grasso, R" uniqKey="Grasso R">R Grasso</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Britten, Kh" uniqKey="Britten K">KH Britten</name>
</author>
<author>
<name sortKey="Van Wezel, Rj" uniqKey="Van Wezel R">RJ van Wezel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chowdhury, Sa" uniqKey="Chowdhury S">SA Chowdhury</name>
</author>
<author>
<name sortKey="Takahashi, K" uniqKey="Takahashi K">K Takahashi</name>
</author>
<author>
<name sortKey="Deangelis, Gc" uniqKey="Deangelis G">GC DeAngelis</name>
</author>
<author>
<name sortKey="Angelaki, De" uniqKey="Angelaki D">DE Angelaki</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Duffy, Cj" uniqKey="Duffy C">CJ Duffy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fetsch, Cr" uniqKey="Fetsch C">CR Fetsch</name>
</author>
<author>
<name sortKey="Pouget, A" uniqKey="Pouget A">A Pouget</name>
</author>
<author>
<name sortKey="Deangelis, Gc" uniqKey="Deangelis G">GC DeAngelis</name>
</author>
<author>
<name sortKey="Angelaki, De" uniqKey="Angelaki D">DE Angelaki</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gu, Y" uniqKey="Gu Y">Y Gu</name>
</author>
<author>
<name sortKey="Angelaki, De" uniqKey="Angelaki D">DE Angelaki</name>
</author>
<author>
<name sortKey="Deangelis, Gc" uniqKey="Deangelis G">GC Deangelis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Georgopoulos, Ap" uniqKey="Georgopoulos A">AP Georgopoulos</name>
</author>
<author>
<name sortKey="Schwartz, Ab" uniqKey="Schwartz A">AB Schwartz</name>
</author>
<author>
<name sortKey="Kettner, Re" uniqKey="Kettner R">RE Kettner</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Page, Wk" uniqKey="Page W">WK Page</name>
</author>
<author>
<name sortKey="Duffy, Cj" uniqKey="Duffy C">CJ Duffy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sanger, Td" uniqKey="Sanger T">TD Sanger</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Grabherr, L" uniqKey="Grabherr L">L Grabherr</name>
</author>
<author>
<name sortKey="Nicoucar, K" uniqKey="Nicoucar K">K Nicoucar</name>
</author>
<author>
<name sortKey="Mast, Fw" uniqKey="Mast F">FW Mast</name>
</author>
<author>
<name sortKey="Merfeld, Dm" uniqKey="Merfeld D">DM Merfeld</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fetsch, Cr" uniqKey="Fetsch C">CR Fetsch</name>
</author>
<author>
<name sortKey="Turner, Ah" uniqKey="Turner A">AH Turner</name>
</author>
<author>
<name sortKey="Deangelis, Gc" uniqKey="Deangelis G">GC Deangelis</name>
</author>
<author>
<name sortKey="Angelaki, De" uniqKey="Angelaki D">DE Angelaki</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Roditi, Re" uniqKey="Roditi R">RE Roditi</name>
</author>
<author>
<name sortKey="Crane, Bt" uniqKey="Crane B">BT Crane</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Benson, Aj" uniqKey="Benson A">AJ Benson</name>
</author>
<author>
<name sortKey="Hutt, Ec" uniqKey="Hutt E">EC Hutt</name>
</author>
<author>
<name sortKey="Brown, Sf" uniqKey="Brown S">SF Brown</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wichmann, Fa" uniqKey="Wichmann F">FA Wichmann</name>
</author>
<author>
<name sortKey="Hill, Nj" uniqKey="Hill N">NJ Hill</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Roggeveen, Lj" uniqKey="Roggeveen L">LJ Roggeveen</name>
</author>
<author>
<name sortKey="Nijhoff, P" uniqKey="Nijhoff P">P Nijhoff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Carriot, J" uniqKey="Carriot J">J Carriot</name>
</author>
<author>
<name sortKey="Bryan, A" uniqKey="Bryan A">A Bryan</name>
</author>
<author>
<name sortKey="Dizio, P" uniqKey="Dizio P">P DiZio</name>
</author>
<author>
<name sortKey="Lackner, Jr" uniqKey="Lackner J">JR Lackner</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Benson, Aj" uniqKey="Benson A">AJ Benson</name>
</author>
<author>
<name sortKey="Brown, Sf" uniqKey="Brown S">SF Brown</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Clark, B" uniqKey="Clark B">B Clark</name>
</author>
<author>
<name sortKey="Stewart, Jd" uniqKey="Stewart J">JD Stewart</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dockstader, Sl" uniqKey="Dockstader S">SL Dockstader</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Huang, J" uniqKey="Huang J">J Huang</name>
</author>
<author>
<name sortKey="Young, Lr" uniqKey="Young L">LR Young</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Royden, Cs" uniqKey="Royden C">CS Royden</name>
</author>
<author>
<name sortKey="Banks, Ms" uniqKey="Banks M">MS Banks</name>
</author>
<author>
<name sortKey="Crowell, Ja" uniqKey="Crowell J">JA Crowell</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bardy, Bg" uniqKey="Bardy B">BG Bardy</name>
</author>
<author>
<name sortKey="Warren, Wh" uniqKey="Warren W">WH Warren</name>
</author>
<author>
<name sortKey="Kay, Ba" uniqKey="Kay B">BA Kay</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gu, Y" uniqKey="Gu Y">Y Gu</name>
</author>
<author>
<name sortKey="Watkins, Pv" uniqKey="Watkins P">PV Watkins</name>
</author>
<author>
<name sortKey="Angelaki, De" uniqKey="Angelaki D">DE Angelaki</name>
</author>
<author>
<name sortKey="Deangelis, Gc" uniqKey="Deangelis G">GC DeAngelis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Duffy, Cj" uniqKey="Duffy C">CJ Duffy</name>
</author>
<author>
<name sortKey="Wurtz, Rh" uniqKey="Wurtz R">RH Wurtz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lappe, M" uniqKey="Lappe M">M Lappe</name>
</author>
<author>
<name sortKey="Bremmer, F" uniqKey="Bremmer F">F Bremmer</name>
</author>
<author>
<name sortKey="Pekel, M" uniqKey="Pekel M">M Pekel</name>
</author>
<author>
<name sortKey="Thiele, A" uniqKey="Thiele A">A Thiele</name>
</author>
<author>
<name sortKey="Hoffmann, Kp" uniqKey="Hoffmann K">KP Hoffmann</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fernandez, C" uniqKey="Fernandez C">C Fernandez</name>
</author>
<author>
<name sortKey="Goldberg, Jm" uniqKey="Goldberg J">JM Goldberg</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rosenhall, U" uniqKey="Rosenhall U">U Rosenhall</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">PLoS One</journal-id>
<journal-id journal-id-type="iso-abbrev">PLoS ONE</journal-id>
<journal-id journal-id-type="publisher-id">plos</journal-id>
<journal-id journal-id-type="pmc">plosone</journal-id>
<journal-title-group>
<journal-title>PLoS ONE</journal-title>
</journal-title-group>
<issn pub-type="epub">1932-6203</issn>
<publisher>
<publisher-name>Public Library of Science</publisher-name>
<publisher-loc>San Francisco, USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">23236490</article-id>
<article-id pub-id-type="pmc">3517556</article-id>
<article-id pub-id-type="publisher-id">PONE-D-12-29880</article-id>
<article-id pub-id-type="doi">10.1371/journal.pone.0051383</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Research Article</subject>
</subj-group>
<subj-group subj-group-type="Discipline-v2">
<subject>Biology</subject>
<subj-group>
<subject>Neuroscience</subject>
<subj-group>
<subject>Sensory Perception</subject>
<subj-group>
<subject>Psychophysics</subject>
</subj-group>
</subj-group>
<subj-group>
<subject>Sensory Systems</subject>
<subj-group>
<subject>Visual System</subject>
</subj-group>
</subj-group>
<subj-group>
<subject>Behavioral Neuroscience</subject>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v2">
<subject>Engineering</subject>
<subj-group>
<subject>Bioengineering</subject>
<subj-group>
<subject>Biomedical Engineering</subject>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v2">
<subject>Medicine</subject>
<subj-group>
<subject>Mental Health</subject>
<subj-group>
<subject>Psychology</subject>
<subj-group>
<subject>Psychophysics</subject>
</subj-group>
</subj-group>
</subj-group>
<subj-group>
<subject>Otorhinolaryngology</subject>
<subj-group>
<subject>Otology</subject>
<subj-group>
<subject>Vertigo</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v2">
<subject>Social and Behavioral Sciences</subject>
<subj-group>
<subject>Psychology</subject>
<subj-group>
<subject>Psychophysics</subject>
</subj-group>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Direction Specific Biases in Human Visual and Vestibular Heading Perception</article-title>
<alt-title alt-title-type="running-head">Biases in Human Heading Perception</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Crane</surname>
<given-names>Benjamin T.</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<xref ref-type="aff" rid="aff3">
<sup>3</sup>
</xref>
<xref ref-type="corresp" rid="cor1">
<sup>*</sup>
</xref>
</contrib>
</contrib-group>
<aff id="aff1">
<label>1</label>
<addr-line>Department of Otolaryngology, University of Rochester, Rochester, New York, United States of America</addr-line>
</aff>
<aff id="aff2">
<label>2</label>
<addr-line>Department of Neurobiology and Anatomy, University of Rochester, Rochester, New York, United States of America</addr-line>
</aff>
<aff id="aff3">
<label>3</label>
<addr-line>Department of Bioengineering, University of Rochester, Rochester, New York, United States of America</addr-line>
</aff>
<contrib-group>
<contrib contrib-type="editor">
<name>
<surname>van Beers</surname>
<given-names>Robert J.</given-names>
</name>
<role>Editor</role>
<xref ref-type="aff" rid="edit1"></xref>
</contrib>
</contrib-group>
<aff id="edit1">
<addr-line>VU University Amsterdam, The Netherlands</addr-line>
</aff>
<author-notes>
<corresp id="cor1">* E-mail:
<email>craneb@gmail.com</email>
</corresp>
<fn fn-type="conflict">
<p>
<bold>Competing Interests: </bold>
The author has declared that no competing interests exist.</p>
</fn>
<fn fn-type="con">
<p>Conceived and designed the experiments: BTC. Performed the experiments: BTC. Analyzed the data: BTC. Contributed reagents/materials/analysis tools: BTC. Wrote the paper: BTC.</p>
</fn>
</author-notes>
<pub-date pub-type="collection">
<year>2012</year>
</pub-date>
<pub-date pub-type="epub">
<day>7</day>
<month>12</month>
<year>2012</year>
</pub-date>
<volume>7</volume>
<issue>12</issue>
<elocation-id>e51383</elocation-id>
<history>
<date date-type="received">
<day>28</day>
<month>9</month>
<year>2012</year>
</date>
<date date-type="accepted">
<day>5</day>
<month>11</month>
<year>2012</year>
</date>
</history>
<permissions>
<copyright-year>2012</copyright-year>
<copyright-holder>Benjamin Thomas Crane</copyright-holder>
<license>
<license-p>This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.</license-p>
</license>
</permissions>
<abstract>
<p>Heading direction is determined from visual and vestibular cues. Both sensory modalities have been shown to have better direction discrimination for headings near straight ahead. Previous studies of visual heading estimation have not used the full range of stimuli, and vestibular heading estimation has not previously been reported. The current experiments measure human heading estimation in the horizontal plane to vestibular, visual, and spoken stimuli. The vestibular and visual tasks involved 16 cm of platform or visual motion. The spoken stimulus was a voice command speaking a heading angle. All conditions demonstrated direction-dependent biases in perceived headings, such that biases increased for headings further from the fore-aft axis. The bias was larger with the visual stimulus than with the vestibular stimulus in all 10 subjects. For the visual and vestibular tasks, precision was best for headings near fore-aft. The spoken headings had the least bias, and the variation in precision was less dependent on direction. In a separate experiment in which headings were limited to ±45°, the biases were much smaller, demonstrating that the range of headings influences perception. There was a strong and highly significant correlation between the bias curves for visual and spoken stimuli in every subject. The correlations between visual-vestibular and vestibular-spoken biases were weaker but remained significant. The observed biases in both visual and vestibular heading perception qualitatively resembled predictions of a recent population vector decoder model (Gu et al., 2010) based on the known distribution of neuronal sensitivities.</p>
</abstract>
<funding-group>
<funding-statement>The research was supported by NIDCD K23 DC011298 and a career scientist award from the Triological Society. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.</funding-statement>
</funding-group>
<counts>
<page-count count="15"></page-count>
</counts>
</article-meta>
</front>
<body>
<sec id="s1">
<title>Introduction</title>
<p>Visual heading perception is influenced by optic flow
<xref ref-type="bibr" rid="pone.0051383-Gibson1">[1]</xref>
<xref ref-type="bibr" rid="pone.0051383-Britten1">[4]</xref>
. It has classically been studied in one of two ways: either as discrimination, in which a heading is compared with a reference position using a forced-choice task (i.e., is the stimulus right or left of straight ahead or of a reference stimulus)
<xref ref-type="bibr" rid="pone.0051383-Warren2">[5]</xref>
<xref ref-type="bibr" rid="pone.0051383-Li1">[9]</xref>
or estimation in which the subject directly reports the perceived heading using a pointing device such as a cursor
<xref ref-type="bibr" rid="pone.0051383-DAvossa1">[10]</xref>
,
<xref ref-type="bibr" rid="pone.0051383-Li2">[11]</xref>
or physical pointer
<xref ref-type="bibr" rid="pone.0051383-Telford1">[12]</xref>
. Both of these methods have focused on headings near straight ahead.</p>
<p>In prior visual heading estimation tasks which studied pure translation the perception was usually within a few degrees of the actual heading with some studies reporting underestimation (the perceived heading is closer to the fore-aft axis than actual) which was usually small
<xref ref-type="bibr" rid="pone.0051383-DAvossa1">[10]</xref>
,
<xref ref-type="bibr" rid="pone.0051383-Li2">[11]</xref>
,
<xref ref-type="bibr" rid="pone.0051383-Johnston1">[13]</xref>
,
<xref ref-type="bibr" rid="pone.0051383-Llewellyn1">[14]</xref>
, and others reporting slight
<xref ref-type="bibr" rid="pone.0051383-Warren4">[15]</xref>
or large
<xref ref-type="bibr" rid="pone.0051383-Telford1">[12]</xref>
overestimation, in which the perceived heading is further from the fore-aft axis than the actual heading. For these estimation tasks, the horizontal range tested was limited: in one instance as large as 90°
<xref ref-type="bibr" rid="pone.0051383-Warren4">[15]</xref>
but usually much less at ±25°
<xref ref-type="bibr" rid="pone.0051383-DAvossa1">[10]</xref>
, ±20°
<xref ref-type="bibr" rid="pone.0051383-Telford1">[12]</xref>
, ±15°
<xref ref-type="bibr" rid="pone.0051383-Warren4">[15]</xref>
or less
<xref ref-type="bibr" rid="pone.0051383-Li2">[11]</xref>
,
<xref ref-type="bibr" rid="pone.0051383-Llewellyn1">[14]</xref>
. To the author's knowledge, a visual heading estimation task using the full range of headings in the horizontal plane has not previously been published.</p>
<p>The range of headings included has important implications for perception. When the visual focus of expansion (FOE) is within the field of view (FOV), its location gives the heading direction
<xref ref-type="bibr" rid="pone.0051383-Li1">[9]</xref>
. When the FOE is outside the FOV, heading can be determined by triangulating vectors derived from the motion of fiducial points, which is potentially less accurate
<xref ref-type="bibr" rid="pone.0051383-Koenderink1">[16]</xref>
although experiments that have examined this question have found accuracy to be similar whether the FOE is inside or outside the FOV
<xref ref-type="bibr" rid="pone.0051383-Li2">[11]</xref>
,
<xref ref-type="bibr" rid="pone.0051383-Warren4">[15]</xref>
. Although theoretically the size of the FOV influences heading accuracy
<xref ref-type="bibr" rid="pone.0051383-Koenderink1">[16]</xref>
, other factors may predominate, as headings estimated from a 112° FOV had accuracy similar to estimates made through a 5° or 10° aperture
<xref ref-type="bibr" rid="pone.0051383-Li2">[11]</xref>
.</p>
<p>Another, potentially more important, issue is the range of heading stimuli tested and the range of responses permitted. Prior to viewing each stimulus, subjects likely had an internal model of the range of possible stimuli and inferred that stimuli in this range would be most likely. Even if this range were not explicitly given to the subject, it might be inferred from the range of stimuli experienced in earlier trials or from the range of responses permitted. In previous studies of visual heading estimation, both the range of responses and the range of stimuli were limited
<xref ref-type="bibr" rid="pone.0051383-DAvossa1">[10]</xref>
<xref ref-type="bibr" rid="pone.0051383-Telford1">[12]</xref>
. Bayesian theory provides a quantitative basis for how this prior estimate of the stimulus distribution influences subsequent perception
<xref ref-type="bibr" rid="pone.0051383-Ernst1">[17]</xref>
–
<xref ref-type="bibr" rid="pone.0051383-Girshick1">[19]</xref>
. Thus, if subjects expect headings within a fixed range, they would be unlikely to perceive headings outside this range
<xref ref-type="bibr" rid="pone.0051383-Stocker1">[20]</xref>
. The current study avoids limiting the stimuli to a narrow range of angles by presenting stimuli uniformly distributed over the full 360° of the horizontal plane, such that the FOE is not present on the screen for 74% of stimuli. Responses are similarly unrestricted. With this protocol, subjects are less likely to limit their responses based on a prior expectation that the range of stimuli is limited.</p>
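The 74% figure follows from the screen geometry described below in Materials and Methods (a 115.6 cm wide screen viewed from 50 cm, i.e. a roughly 98° horizontal FOV) combined with the 5° heading grid. The snippet below is an illustrative sanity check, not part of the original analysis:

```python
import math

# Screen geometry from the Methods section: 115.6 cm wide, viewed from 50 cm.
half_fov = math.degrees(math.atan((115.6 / 2.0) / 50.0))  # ~49.2 deg half-angle

# Headings were presented at 5 degree increments over the full 360 degrees.
headings = range(0, 360, 5)  # 72 headings

def foe_on_screen(heading_deg, half_fov_deg):
    """The focus of expansion (FOE) falls on the screen only when the
    heading is within the half-FOV of straight ahead (0 degrees)."""
    wrapped = ((heading_deg + 180.0) % 360.0) - 180.0  # wrap to (-180, 180]
    return abs(wrapped) < half_fov_deg

n_offscreen = sum(not foe_on_screen(h, half_fov) for h in headings)
print(f"FOE off screen for {100.0 * n_offscreen / len(headings):.0f}% of headings")
# prints "FOE off screen for 74% of headings"
```

Of the 72 headings, only the 19 within ±45° of straight ahead place the FOE on the screen, so it is absent for 53/72 ≈ 74% of stimuli.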
<p>In most real world experiences visual and vestibular heading cues are available. Study of vestibular heading has been more limited than study of visual heading. Most of these have studied discrimination relative to a reference position in humans
<xref ref-type="bibr" rid="pone.0051383-Gu1">[8]</xref>
,
<xref ref-type="bibr" rid="pone.0051383-MacNeilage1">[21]</xref>
and non-human primates
<xref ref-type="bibr" rid="pone.0051383-Gu1">[8]</xref>
,
<xref ref-type="bibr" rid="pone.0051383-Fetsch1">[22]</xref>
,
<xref ref-type="bibr" rid="pone.0051383-Gu2">[23]</xref>
. Unlike estimation methods, discrimination does not reveal internal biases, since two stimuli are compared directly with each other. There have also been studies of human heading estimation in darkness
<xref ref-type="bibr" rid="pone.0051383-Ivanenko1">[24]</xref>
and with and without visual cues
<xref ref-type="bibr" rid="pone.0051383-Telford1">[12]</xref>
. These previous studies of vestibular heading perception in darkness suggested that perceived headings are slightly underestimated relative to the fore-aft axis. However, like the studies of visual heading estimation, these vestibular studies limited the stimuli to a small number of headings in a narrow horizontal range.</p>
<p>The prior work on heading estimation does not take into account potential biases in spatial cognition or haptic and motor influences that may be independent of sensory stimulation. It is possible that some of the bias in heading perception is due to an internal representation of space that is itself biased. Also, the method used to report the perceived heading has a haptic and motor component. In the current study, an effort to control for this was made by having subjects orient a pointer towards a verbally spoken angular heading.</p>
<p>Visual and vestibular heading estimation may not be independent as both are represented in medial superior temporal area (MSTd) of the cortex
<xref ref-type="bibr" rid="pone.0051383-Britten2">[25]</xref>
–
<xref ref-type="bibr" rid="pone.0051383-Duffy1">[27]</xref>
which is likely a key area in determining heading perception
<xref ref-type="bibr" rid="pone.0051383-Britten1">[4]</xref>
,
<xref ref-type="bibr" rid="pone.0051383-Gu1">[8]</xref>
,
<xref ref-type="bibr" rid="pone.0051383-Fetsch2">[28]</xref>
,
<xref ref-type="bibr" rid="pone.0051383-Gu3">[29]</xref>
. Recent modeling of this area using a population vector decoder (PVD) as well as a maximum likelihood (ML) estimator has been used to explain the increased precision in determining headings near straight ahead
<xref ref-type="bibr" rid="pone.0051383-Gu1">[8]</xref>
. The PVD model is relatively simple in that each neuron essentially votes for its preferred direction with a weight given by the magnitude of its response
<xref ref-type="bibr" rid="pone.0051383-Georgopoulos1">[30]</xref>
. Such a PVD model has previously been used to explain visual pursuit based on MSTd activity
<xref ref-type="bibr" rid="pone.0051383-Page1">[31]</xref>
. However, the PVD model has limitations in that estimates may be biased toward directions that are over-represented among the neurons’ preferred directions
<xref ref-type="bibr" rid="pone.0051383-Sanger1">[32]</xref>
. To correct for this the ML model applies probability theory to a population of neurons to find the maximum likelihood for a set of parameters
<xref ref-type="bibr" rid="pone.0051383-Sanger1">[32]</xref>
. Although the ML method is useful in the analysis of experimental data, it is not intended as a biologically plausible neuronal computation algorithm
<xref ref-type="bibr" rid="pone.0051383-Sanger1">[32]</xref>
. The PVD model is relatively simple and thus more biologically plausible. In comparing the PVD and ML models to heading estimation in MSTd, the PVD model predicted that both visual and vestibular headings would be overestimated by large amounts at eccentric directions, while the ML model predicted headings without a direction specific bias
<xref ref-type="bibr" rid="pone.0051383-Gu1">[8]</xref>
. The data used by Gu et al. to develop these models tested heading discrimination but not heading estimation, and thus could not differentiate between the predictions of these competing models. The current study looks for the biases predicted by the PVD model by measuring visual and vestibular heading estimates over the full horizontal range. It was found that perceived heading relative to straight ahead is overestimated with both visual and vestibular stimuli, similar to what the PVD model predicts.</p>
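To make the PVD computation concrete, the sketch below decodes a heading from a model population in which each neuron contributes a vector along its preferred direction, scaled by its response. The tuning width, neuron count, and the lateral over-representation of preferred directions are illustrative assumptions, not parameters taken from Gu et al.; the sketch shows only qualitatively how such an anisotropy biases eccentric headings away from the fore-aft axis.

```python
import numpy as np

def pvd_estimate(heading_deg, kappa=2.0):
    """Decode a heading with a population vector decoder (PVD).

    Each model neuron contributes a unit vector along its preferred
    direction, scaled by its response. Preferred directions are
    over-represented near +/-90 deg (a hypothetical stand-in for the
    anisotropy reported in MSTd). Returns degrees in (-180, 180].
    """
    prefs = np.deg2rad(np.arange(0.0, 360.0, 1.0))   # preferred directions
    density = 2.0 - np.cos(2.0 * prefs)              # peaks at +/-90 deg
    # Von Mises-like tuning curve centered on each preferred direction.
    rates = np.exp(kappa * np.cos(prefs - np.deg2rad(heading_deg)))
    x = np.sum(density * rates * np.cos(prefs))      # population vector, x
    y = np.sum(density * rates * np.sin(prefs))      # population vector, y
    return float(np.degrees(np.arctan2(y, x)))

# Straight ahead is unbiased by symmetry, while an eccentric 45 deg heading
# is decoded as more eccentric because lateral preferences are over-weighted.
print(pvd_estimate(0.0), pvd_estimate(45.0))
```

With these illustrative parameters the 45° heading decodes to a value between 45° and 90°, i.e. the decoded bias grows away from the fore-aft axis, qualitatively matching the PVD prediction shown in Fig. 1.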
</sec>
<sec sec-type="materials|methods" id="s2">
<title>Materials and Methods</title>
<sec id="s2a">
<title>Ethics Statement</title>
<p>The research was conducted according to the principles expressed in the Declaration of Helsinki. Written informed consent was obtained from all participants. The protocol and written consent form were approved by the University of Rochester Research Science Review Board (RSRB).</p>
</sec>
<sec id="s2b">
<title>Equipment</title>
<p>Motion stimuli were delivered using a 6-degree-of-freedom motion platform (Moog, East Aurora, NY, model 6DOF2000E) similar to that used in other laboratories for human motion perception studies
<xref ref-type="bibr" rid="pone.0051383-MacNeilage1">[21]</xref>
,
<xref ref-type="bibr" rid="pone.0051383-Grabherr1">[33]</xref>
,
<xref ref-type="bibr" rid="pone.0051383-Fetsch3">[34]</xref>
and previously described in the current laboratory
<xref ref-type="bibr" rid="pone.0051383-Roditi1">[35]</xref>
. Subjects were seated in a padded racing seat (Corbeau, Sandy, UT, model FX-1) mounted on the platform. A four-point racing style harness held the body in place. The head was held in place using an American football helmet (Riddell, Elyria, OH) with the facemask removed to improve visibility. Helmets were available in 6 sizes to allow each subject to be fit appropriately. The helmet had an inflatable liner to ensure a snug fit. Once the subject was seated, the helmet was firmly pushed back against hard rubber pads and a strap was used to hold the helmet against the pad and prevent decoupling. A second rigid point of attachment on the side of the helmet further prevented any decoupling. The head was held in position so that the body midline and external auditory canals were directly over the center of the platform.</p>
<p>During both visual and vestibular stimuli, an audible white noise was reproduced from two platform-mounted speakers on either side of the subject as previously described
<xref ref-type="bibr" rid="pone.0051383-Roditi1">[35]</xref>
. The intensity of the masking noise varied with time as a half-sine wave so that the peak masking noise occurred at the same time the peak velocity was reached. This created a masking noise similar to the noise made by the platform. Although no masking noise was needed for the visual condition, it was still used for consistency. For clarity, masking noise was not used in trials in which the heading was spoken.</p>
<p>Responses were collected using a two-button control box with a dial in the middle that could be freely rotated in the horizontal plane without any discontinuity points. The box with the dial was mounted 20 cm anterior to the subject just above waist level below the viewing screen. The dial was not visible during the experiment and orientation was by feel. The dial was connected to a 14 bit rotary encoder (Contelec, model VertX1332, Biel Switzerland) which was calibrated to a <0.1° angular resolution.</p>
<p>The two buttons at either end had the same function. After an audible tone indicated that the next stimulus was ready, a button could be pressed to deliver the stimulus. After the stimulus was delivered, a series of two tones indicated that the perceived heading direction should be selected. After the heading was selected, one of the buttons was pressed again to signal that the selection was complete. The dial remained in the position the subject left it for the next stimulus presentation. Although the heading direction and residual dial position from the previous trial may have influenced the response to the subsequent trial, any effect likely evened out in the aggregate data, as the stimuli were given in a random order that was different for each block of trials. There were no explicit orientation markers on the dial (such as a divot at the zero position), although the dial was mounted in a rectangular box, so the edges of the box might have served as reference positions.</p>
</sec>
<sec id="s2c">
<title>Stimulus</title>
<p>The visual and vestibular stimuli consisted of a 2s (0.5 Hz) sine wave in acceleration. The stimulus can be described in the acceleration (
<italic>a</italic>
(
<italic>t</italic>
)), velocity (
<italic>v</italic>
(
<italic>t</italic>
)), or position (
<italic>d</italic>
(
<italic>t</italic>
)) domains given the frequency in Hz (
<italic>F</italic>
) and total displacement (
<italic>D</italic>
) (
<xref ref-type="disp-formula" rid="pone.0051383.e001">equations 1</xref>
–
<xref ref-type="disp-formula" rid="pone.0051383.e003">3</xref>
). These motion profiles were chosen because they contain no discontinuities in acceleration, velocity, or position, and they have previously been used for threshold determination
<xref ref-type="bibr" rid="pone.0051383-Grabherr1">[33]</xref>
,
<xref ref-type="bibr" rid="pone.0051383-Roditi1">[35]</xref>
,
<xref ref-type="bibr" rid="pone.0051383-Benson1">[36]</xref>
.
<disp-formula id="pone.0051383.e001">
<graphic xlink:href="pone.0051383.e001"></graphic>
<label>(1)</label>
</disp-formula>
<disp-formula id="pone.0051383.e002">
<graphic xlink:href="pone.0051383.e002"></graphic>
<label>(2)</label>
</disp-formula>
<disp-formula id="pone.0051383.e003">
<graphic xlink:href="pone.0051383.e003"></graphic>
<label>(3)</label>
</disp-formula>
</p>
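The graphics for equations 1–3 are not reproduced here, but the quoted parameters (a 16 cm displacement at 0.5 Hz giving a 16 cm/s peak velocity and roughly 25 cm/s² peak acceleration) are consistent with the standard single-cycle sinusoidal-acceleration profile used in the cited threshold studies. The sketch below assumes that form:

```python
import math

def motion_profile(t, D=16.0, F=0.5):
    """Single cycle of sinusoidal acceleration (an assumed form for the
    paper's equations 1-3): duration 1/F seconds, total displacement D.

    Returns (acceleration, velocity, position) at time t, in cm units.
    """
    w = 2.0 * math.pi * F
    A = 2.0 * math.pi * F * F * D          # peak acceleration implied by D, F
    a = A * math.sin(w * t)                # a(t): sinusoidal acceleration
    v = (A / w) * (1.0 - math.cos(w * t))  # v(t): starts and ends at zero
    d = (A / w) * (t - math.sin(w * t) / w)  # d(t): reaches D at t = 1/F
    return a, v, d

# Check against the values quoted in the Methods for the 16 cm stimulus:
print(round(motion_profile(0.5)[0], 1))  # peak acceleration at t = T/4: 25.1
print(round(motion_profile(1.0)[1], 1))  # peak velocity at t = T/2: 16.0
print(round(motion_profile(2.0)[2], 1))  # total displacement at t = T: 16.0
```

Under this assumption the profile has no discontinuities in acceleration, velocity, or position, matching the description in the text.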
<p>Visual stimuli were presented on a color LCD screen measuring 115.6 by 64.8 cm with a resolution of 1920×1080 pixels (Samsung model LN52B75OU1FXZA). The subject was seated 50 cm from the screen, which filled a 98° horizontal field of view. A fixation point consisted of a 2×2 cm midline cross at eye level. The visual stimulus consisted of a star field which simulated movement of the observer through a random-dot cloud. Each star consisted of a triangle 0.5 cm in height and width at the plane of the screen, adjusted appropriately for distance. The star density was 0.01 per cubic cm. The depth of the field was 130 cm. Visual coherence was fixed at 100%. Disparity was provided using red-green anaglyph glasses made with Kodak (Rochester, NY) Wratten filters #29 (dark red) and #61 (deep green). The colors were adjusted such that the intensities of the two were similar when viewed through the respective filters, and the rejection ratio was better than tenfold.</p>
<p>Five stimulus types were tested. Motion in darkness with nothing visible and no fixation point (NF) had a displacement of 16 cm with a peak velocity of 16 cm/s and a peak acceleration of 25 cm/s/s. A similar trial type was done with a small fixation point visible on a video screen (FP). A set of trials was done in complete darkness with the movement designed to be sub-threshold (ST), with a displacement of 1 cm, peak velocity of 1 cm/s, and peak acceleration of 1.6 cm/s/s. A visual (V) stimulus displayed the pattern of optic flow expected for this movement through a star field. The final test type was a spoken (S) stimulus in which a computer generated voice spoke the desired heading relative to straight ahead (e.g., “45 degrees right” or “135 degrees left”); this was done in darkness with no platform motion. Each block of trials consisted of stimulus presentations of a single type, to keep trial blocks at a reasonable length and maintain alertness. The order of stimulus blocks was varied between subjects.</p>
<p>Each block consisted of 72 stimulus presentations: the stimuli were headings at 5° increments such that the full 360° was equally represented. The headings were delivered in random order throughout the trial block with each heading delivered once. Test types NF, FP, V, and S were each repeated twice for each subject. After examining the data it was felt that consistent results were obtained after two repetitions and that additional repetitions were not needed. Test type ST was only done once. To maintain subject alertness, testing was broken up into at least 2 sessions on different days. Subjects were not required to complete a certain number of trial blocks in each session, but 4–5 trial blocks were typical.</p>
</sec>
<sec id="s2d">
<title>Experimental Procedure</title>
<p>Subjects were instructed that each stimulus would move or simulate motion along a vector in the horizontal plane, or in the case of the spoken (S) test a spoken heading would be heard. Prior to testing, subjects were shown how to orient the dial. Occasionally, subjects were seen to make systematic errors early in a session, such as identifying the direction the star field was moving rather than their direction through the star field in the visual condition. These types of errors were rare and identified in the first few trials. When this occurred the subject was given further instruction and the trial block was restarted. Prior to the spoken condition subjects were given a brief orientation to the cardinal axes (i.e., 0° is forward, 90° is right, etc.). It was made clear to subjects that the spoken angles were relative to straight ahead, which was defined as 0°. In visual and vestibular conditions an audible beep marked the end of the stimulus, alerting the subject to orient the dial towards the perceived direction. Subjects were encouraged to guess if uncertain. The experiment was practiced a few times in the light to ensure comprehension of the task prior to data collection in darkness. Two subjects (#3 and #9) were familiar with the design of the experiment; the other subjects had participated in previous experiments in the lab using the motion platform but were otherwise naïve to the design and purpose of the experiment.</p>
<p>Prior to stimulus delivery the subject heard a 500 Hz, 0.125 s single tone to signal that the next stimulus was ready and the start button could be pressed. The stimulus was delivered immediately after the subject pressed the start button. After the stimulus was delivered, two 0.125 s tones were played in rapid succession to indicate the stimulus had been delivered and the perceived direction could be entered. If no response was entered, a “timeout sound” (a low frequency buzz) was played. The timeout occurred at 3 s for motion stimuli and at 10 s for spoken stimuli, a task which tended to take subjects longer. After either a response or a timeout, the platform returned to the center starting position using a motion profile similar to the stimulus but taking 2.5 s.</p>
<p>The experiment was repeated using stimuli that were limited to 5° increments in the range of ±45°. These experiments were done after the experiments that included a 360° range of headings, in a subset of six subjects (1, 2, 3, 7, 8, and 9). Conditions tested included visual (V), vestibular with no fixation (NF), and spoken (S). To make this control experiment as similar as possible to prior experiments in which the responses were limited by the screen size, the responses were initially mechanically limited to a ±45° range, the size of the screen in a prior study
<xref ref-type="bibr" rid="pone.0051383-DAvossa1">[10]</xref>
. Other than the mechanical limit on responses no explicit feedback was given. This was also felt to be most consistent with previous studies
<xref ref-type="bibr" rid="pone.0051383-DAvossa1">[10]</xref>
,
<xref ref-type="bibr" rid="pone.0051383-Li2">[11]</xref>
in which the range of headings that could be reported was also limited. The conditions were then repeated without limits on the responses, but with only stimuli in the ±45° range tested. As with the other experiments, the headings were delivered in random order. In the V and NF conditions each heading was presented three times, and in the spoken condition each heading was presented twice.</p>
</sec>
<sec id="s2e">
<title>Subjects</title>
<p>A total of 10 subjects (3 female, 1 left handed) participated in the experiment. Ages ranged from 21 to 66 (37±16, mean±standard deviation). All 9 blocks of trials using a 360° range of stimuli were usually completed in two sessions lasting no more than 90 minutes each, with breaks between blocks of trials. In the subset of six subjects in whom additional experiments used a ±45° range of stimuli, this testing was done in a single session on a separate day. The order of the blocks was randomized within each session, except that trials of type NF or FP were completed first. This was done so that if the subject did not understand the instructions it would be obvious early in the session.</p>
<p>Subjects were screened prior to participation for normal peripheral vestibular function and hearing as previously described
<xref ref-type="bibr" rid="pone.0051383-Roditi2">[37]</xref>
.</p>
</sec>
<sec id="s2f">
<title>Analysis</title>
<p>The dial setting was compared with the actual heading for each trial to calculate an error (
<xref ref-type="fig" rid="pone-0051383-g001">Figs. 1</xref>
and
<xref ref-type="fig" rid="pone-0051383-g002">2</xref>
). This direct method allowed the error to be calculated across subjects. Due to the large number of headings (72) and the limited number of repetitions of each heading, simply taking the average at each heading was susceptible to noise when applied to the data of an individual subject. The averaging method also does not provide a reliable measure of the precision (reproducibility) of responses, since there were small numbers of trials at each heading. These issues were addressed using a psychometric technique: each of the 72 possible stimulus headings was used as a reference heading. The responses to all the headings within ±90° of this heading were examined to determine whether the response heading was right or left of the reference heading. A cumulative distribution function could then be fit to these responses (
<xref ref-type="fig" rid="pone-0051383-g003">Fig. 3</xref>
) using the technique previously described
<xref ref-type="bibr" rid="pone.0051383-Wichmann1">[38]</xref>
. Each fit was reiterated 100 times using resampled responses to permit determination of confidence intervals
<xref ref-type="bibr" rid="pone.0051383-Wichmann1">[38]</xref>
. A lapse rate of 0 to 0.05 was fit to the responses. Using this method, for each reference heading the mean of the psychometric function or point of subjective equality (PSE) represented the heading at which subjects were equally likely to perceive a heading left or right of the reference heading. The width of the psychometric function (sigma) represented a measure of the precision or reproducibility of the responses.</p>
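The left/right classification and cumulative-Gaussian fit can be sketched as follows. This is a simplified stand-in for the Wichmann and Hill procedure actually used: a plain maximum-likelihood grid search with no lapse rate and no bootstrap, applied here to synthetic responses from a hypothetical unbiased observer with 10° of response noise.

```python
import math
import random

def wrap(angle):
    """Wrap an angular difference into (-180, 180] degrees."""
    return (angle + 180.0) % 360.0 - 180.0

def norm_cdf(x, mu, sigma):
    """Cumulative Gaussian evaluated at x."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def fit_pse_sigma(stims, responses, reference):
    """Fit a cumulative Gaussian to 'responded right of reference' data.

    Only stimuli within +/-90 deg of the reference are used, as in the
    paper. Fitting is a coarse maximum-likelihood grid search (a simple
    stand-in for the Wichmann & Hill method; lapse rate omitted).
    Returns (PSE relative to the reference, sigma), both in degrees.
    """
    xs, ys = [], []
    for s, r in zip(stims, responses):
        ds = wrap(s - reference)
        if abs(ds) <= 90.0:
            xs.append(ds)                     # stimulus relative to reference
            ys.append(1.0 if wrap(r - reference) > 0.0 else 0.0)
    best, best_ll = (0.0, 10.0), -math.inf
    for mu in range(-30, 31):                 # candidate PSEs, 1 deg steps
        for sigma in range(1, 41):            # candidate sigmas, 1 deg steps
            ll = 0.0
            for x, y in zip(xs, ys):
                p = min(max(norm_cdf(x, mu, sigma), 1e-6), 1.0 - 1e-6)
                ll += y * math.log(p) + (1.0 - y) * math.log(1.0 - p)
            if ll > best_ll:
                best_ll, best = ll, (float(mu), float(sigma))
    return best

# Synthetic data: 72 headings at 5 deg increments, 5 repetitions each,
# from an unbiased observer whose responses have 10 deg of Gaussian noise.
random.seed(1)
stims = [h for h in range(0, 360, 5) for _ in range(5)]
responses = [s + random.gauss(0.0, 10.0) for s in stims]
pse, sigma = fit_pse_sigma(stims, responses, reference=45.0)
print(pse, sigma)  # PSE near 0 (no bias), sigma near 10
```

Because the synthetic observer is unbiased, the recovered PSE sits near the reference and the recovered sigma approximates the injected 10° of response noise; a biased observer would instead shift the PSE, which is how the direction specific biases in Fig. 4 are quantified.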
<fig id="pone-0051383-g001" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0051383.g001</object-id>
<label>Figure 1</label>
<caption>
<title>Error in perceived heading as a function of stimulus heading.</title>
<p>Ideal performance would be represented by a horizontal line at zero. Each panel represents a stimulus type. Combined data is shown for all subjects. Individual responses are shown as gray circles. The median of the individual responses is shown as a dark solid line. Thin lines represent the 25
<sup>th</sup>
and 75
<sup>th</sup>
percentiles. The angle zero represents straight ahead. Panel A: 16 cm displacement in darkness, no fixation point was present. Dashed line represents the theoretical performance previously predicted by a population vector decoder (PVD) model
<xref ref-type="bibr" rid="pone.0051383-Gu1">[8]</xref>
. 2% of data points are outside the range shown. Panel B: 16 cm displacement with a fixation point visible at eye level; 2% of data points are outside the range shown. Panel C: 1 cm displacement in darkness, no fixation point was visible. Note that the range of errors shown is ±150 degrees; 12% of responses were outside this range. Panel D: Visual motion through a star field with binocular disparity; the visual motion simulated a 16 cm displacement but no platform motion occurred. 5% of data points are outside the range shown. Panel E: Spoken commands were given in darkness and the subject oriented the dial to the requested heading. 4% of data points are outside the plotted range.</p>
</caption>
<graphic xlink:href="pone.0051383.g001"></graphic>
</fig>
<fig id="pone-0051383-g002" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0051383.g002</object-id>
<label>Figure 2</label>
<caption>
<title>Effect of limiting headings to ±45°.</title>
<p>In a subset of 6 subjects (#1, 2, 3, 7, 8, and 9) the task was repeated with the possible headings limited to ±45°. This task was initially done with the responses mechanically limited to ±45° (squares) and subsequently in the same session with the full range of responses available (triangles). Each data point represents the median response with error bars representing 25
<sup>th</sup>
to 75
<sup>th</sup>
percentiles. The responses with a full range of possible headings and responses are shown as circles; these are the same responses plotted in Fig. 1 but for the subset of subjects who also completed the limited heading task. Squares marked with ‘X’ indicate that the perceived headings during the ±45° response limited condition were significantly different from the perceived headings when the full range of headings was delivered. Triangles marked with dots indicate a significant difference (p<0.01) between the condition where the responses were limited to ±45° and the condition where only the headings were limited to ±45°; this was uncommon and limited to the visual stimulus for headings of ±40° and ±45°.</p>
</caption>
<graphic xlink:href="pone.0051383.g002"></graphic>
</fig>
<fig id="pone-0051383-g003" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0051383.g003</object-id>
<label>Figure 3</label>
<caption>
<title>The psychometric method of determining perceived heading for two sample headings.</title>
<p>The plot contains data from all 10 subjects, each of whom completed 2 blocks of trials with the visual stimulus. Thus, each data point represents 20 stimulus presentations. For each plot a reference heading was chosen and stimulus headings within ±90° of the reference heading were considered. The ordinate represents the fraction of responses to the right of the reference heading. With ideal performance, the fraction would be 0 for all headings to the left of the reference heading and 1 for all headings to the right. The solid line represents the cumulative distribution function which best approximates the data. Panel A: Reference heading is 45° right. The cumulative distribution function predicts that the point of subjective equality (PSE), at which the stimulus is equally likely to be perceived left or right of 45°, would be 28.4°. Panel B: The reference heading is 90° right. In this example a perceived heading of 90° is most likely with a stimulus of 91.1°. Although the accuracy of the perceived heading is better, the precision (sigma) is worse when compared with 45° in panel A.</p>
</caption>
<graphic xlink:href="pone.0051383.g003"></graphic>
</fig>
<p>Statistical significance of other types of responses was determined using ANOVA in Prism (GraphPad, La Jolla, CA) with the threshold for significance set at p<0.01. Correlations between continuous variables were analyzed using a two-tailed Spearman’s rank order correlation coefficient or, when more than two continuous variables were compared, partial correlation coefficients.</p>
</sec>
</sec>
<sec id="s3">
<title>Results</title>
<p>The data combined across subjects are reported first so the general trends can be described before exploring individual variation. The perceived heading had a direction dependent bias for all test conditions except ST (
<xref ref-type="fig" rid="pone-0051383-g001">Fig. 1</xref>
). There was a tendency to overestimate the lateral aspect of movement: Headings to the right of the fore-aft axis were estimated further to the right and those to the left were estimated further to the left. The effect was most pronounced with conditions NF, FP, and V in which 16 cm displacement was used (
<xref ref-type="fig" rid="pone-0051383-g001">Fig. 1A,B, and D</xref>
). Thus, for headings between straight ahead (0°) and pure rightward movement (90°) the heading was estimated further to the right than the actual heading (more positive). For headings with a forward and leftward component (0 to −90°) the heading was estimated as further to the left than actual (more negative). For movements with a significant backward component this trend was similar such that headings were perceived as more lateral than they actually were. The spoken (S) stimulus also demonstrated a direction specific bias in perceived heading although it was less than seen for the NF, FP, and V conditions (
<xref ref-type="fig" rid="pone-0051383-g001">Fig. 1E</xref>
). The trend in perceived heading errors was similar to that predicted from a PVD model of MSTd in monkeys
<xref ref-type="bibr" rid="pone.0051383-Gu1">[8]</xref>
with the model predictions shown as a dashed line in
<xref ref-type="fig" rid="pone-0051383-g001">Fig. 1A and B</xref>
for vestibular motion and 1D for visual motion. The ML model predicts no bias, so this model would be represented as a flat line at zero.</p>
<p>The sub-threshold (ST) condition (
<xref ref-type="fig" rid="pone-0051383-g001">Fig. 1C</xref>
) produced responses that were essentially all noise. These data were collected to investigate the possibility that subjects might guess certain headings (i.e., the cardinal directions) more frequently, but this was not the case. Because the responses in this condition were essentially all noise, no further analysis of ST is presented.</p>
<p>Care was taken to see if the presence of a visible fixation point (FP) during platform motion had any effect when compared with motion in darkness with no fixation (NF). These two conditions produced qualitatively similar results when examined across subjects (
<xref ref-type="fig" rid="pone-0051383-g001">Fig. 1A and 1B</xref>
). In the combined data the standard deviation of the direction specific bias was not different between the two conditions (paired T-test, p = 0.12). There was also a tight correlation between the direction specific heading errors in the two conditions (R = 0.93, slope 0.95, p<<0.001). Because the two conditions produced virtually identical results, further analysis of the FP and NF conditions will be reported for the combined data set, referred to as the ‘vestibular’ condition.</p>
<p>Analysis was performed to see if heading estimation changed with subsequent exposure to the task. This was done by combining the data across all 10 subjects for the first trial block and comparing it with the data for the final trial block. For the visual headings, the bias on the first block of trials was highly correlated with the bias found on the second (R = 0.98, slope 0.91), and a paired T-test demonstrated no difference between the biases (p = 0.35) or sigma (p = 0.13). For the vestibular conditions the FP and NF trials were combined so that the first trial block of vestibular heading perception could be compared with the fourth trial block. Here the absolute amount of bias was slightly greater on the first attempt at 7.2° vs. the fourth attempt at 5.4°, but this difference was not significant (paired T-test, p = 0.07). The precision (sigma) was similar between the first and fourth attempts at 17.4° vs. 17.6° (p = 0.84). For the S condition the bias in the first block was tightly correlated with the bias in the second (R = 0.93, slope 0.98), with no significant difference in bias between the two blocks (T-test, p = 0.95) or sigma (p = 0.32). Thus previous exposure to the task did not have an appreciable influence on subsequent performance.</p>
<p>Heading perception was also measured in a separate block of trials over the limited range of ±45° in the NF, V, and S conditions (
<xref ref-type="fig" rid="pone-0051383-g002">Fig. 2</xref>
). Limiting the range of stimuli had the effect of decreasing direction specific biases regardless of the range of responses permitted. The decrease in bias was most pronounced and significant at the more eccentric headings where the bias was larger (
<xref ref-type="fig" rid="pone-0051383-g002">Fig. 2</xref>
). Compared with heading perception when the full range of headings was used, the direction specific bias was significantly smaller at eccentric headings for the NF and V conditions.</p>
<p>A psychometric technique was applied to determine the precision of responses, allowing them to be more directly compared with prior experiments on heading discrimination. This technique also decreased the noise in the heading estimates by using a 180° range of headings to determine the bias for each heading direction, but it could only be used in conditions where the full range of headings was tested. Applying the psychometric technique described in Methods to the combined responses demonstrated that the classification of responses based on relative direction was closely approximated by a cumulative distribution function (
<xref ref-type="fig" rid="pone-0051383-g003">Fig. 3</xref>
). Results of these psychometric fits (
<xref ref-type="fig" rid="pone-0051383-g004">Fig. 4</xref>
) demonstrated that the mean error between perceived and stimulus headings was similar to that obtained by simply averaging the responses (
<xref ref-type="fig" rid="pone-0051383-g001">Fig. 1</xref>
) but with less noise and also allowed determination of sigma as a measure of precision. The amount of direction specific error in heading perception could be quantified by taking the difference between the reference and the PSE across the range of headings (
<xref ref-type="fig" rid="pone-0051383-g004">Fig. 4A–C</xref>
).</p>
<fig id="pone-0051383-g004" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0051383.g004</object-id>
<label>Figure 4</label>
<caption>
<title>Direction specific bias (panels A–C) and precision (sigma, panels D–F) for perceived heading across subjects.</title>
<p>The results are qualitatively similar to those in
<xref ref-type="fig" rid="pone-0051383-g001">Fig. 1</xref>
, but calculated using the method in
<xref ref-type="fig" rid="pone-0051383-g003">Fig. 3</xref>
. The previously predicted performance based on a PVD model
<xref ref-type="bibr" rid="pone.0051383-Gu1">[8]</xref>
is shown as a red dashed line for the vestibular and visual conditions. The thick line represents the mean value, and the two thin lines represent the 95% CI based on 100 fits with resampled data in each iteration. Panel A: Vestibular motion. Panel B: Visual motion (optic flow). The data points calculated in
<xref ref-type="fig" rid="pone-0051383-g003">Fig. 3A&B</xref>
are marked with arrows. Panel C: The subject orients a dial based on a spoken heading. Panel D: Precision for the vestibular condition. Panel E: Precision for the visual stimulus, points calculated in
<xref ref-type="fig" rid="pone-0051383-g003">Fig. 3A and B</xref>
are marked. Panel F: Precision for the spoken condition.</p>
</caption>
<graphic xlink:href="pone.0051383.g004"></graphic>
</fig>
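The 95% confidence intervals in the figure are described as coming from 100 fits with resampled data in each iteration. A generic percentile-bootstrap sketch of that idea is shown below; the estimator, parameter defaults, and function name are placeholders for illustration, not the paper's exact procedure.

```python
import random

def bootstrap_ci(values, estimator, n_boot=100, alpha=0.05, seed=0):
    """Percentile bootstrap: re-estimate on resampled data n_boot times
    and take the central 1 - alpha span of the resulting estimates."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_boot):
        # Resample with replacement to the original sample size.
        resample = [rng.choice(values) for _ in values]
        estimates.append(estimator(resample))
    estimates.sort()
    lo = estimates[int(alpha / 2 * n_boot)]
    hi = estimates[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```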
<p>The precision of perceived headings was determined using the width (sigma) of the psychometric function that best fit the responses (
<xref ref-type="fig" rid="pone-0051383-g003">Fig. 3</xref>
). This precision was best for the spoken condition, followed by the visual and vestibular conditions (
<xref ref-type="fig" rid="pone-0051383-g004">Fig. 4D–F</xref>
). For each condition the precision was best for headings close to straight ahead (0°) and straight backward (180°). For the vestibular and visual conditions the precision was considerably worse for more lateral headings (i.e., near ±90°), consistent with previous results using a discrimination task
<xref ref-type="bibr" rid="pone.0051383-Gu1">[8]</xref>
.</p>
<p>To further evaluate the observed heading errors relative to a model, a linear regression was performed between the observed headings and the model predictions. There was a strong positive correlation between the model prediction for heading and the perceived heading for both vestibular and visual conditions (R
<sup>2</sup>
>0.98 with slope near unity for both). When just the heading error was considered (
<xref ref-type="fig" rid="pone-0051383-g005">Fig. 5A and B</xref>
) the correlation was less strong with R
<sup>2</sup>
 = 0.60 (
<xref ref-type="fig" rid="pone-0051383-g005">Fig. 5A</xref>
, p = 0.0001) for vestibular headings and R
<sup>2</sup>
 = 0.70 (
<xref ref-type="fig" rid="pone-0051383-g005">Fig. 5B</xref>
, p<0.0001) for the visual condition. The slope of the linear regression was 0.34 for the vestibular condition and 0.55 for the visual condition, indicating the observed biases were generally smaller than those predicted by the model.</p>
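The regression of observed heading errors on model-predicted errors can be reproduced with ordinary least squares. The stdlib-only sketch below (the function name is illustrative) returns the slope, intercept, and R² used in the comparisons above; a slope below unity, as reported here (0.34 vestibular, 0.55 visual), indicates observed biases smaller than predicted.

```python
def linregress(xs, ys):
    """Ordinary least squares fit y = slope * x + intercept.
    Returns (slope, intercept, r_squared)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    # R^2 equals the squared Pearson correlation for simple regression.
    r_squared = (sxy * sxy) / (sxx * syy)
    return slope, intercept, r_squared
```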
<fig id="pone-0051383-g005" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0051383.g005</object-id>
<label>Figure 5</label>
<caption>
<title>Correlations between the PVD model predictions
<xref ref-type="bibr" rid="pone.0051383-Gu3">[29]</xref>
and experimental observations of heading perception.</title>
<p>The perceived headings were calculated using the method in
<xref ref-type="fig" rid="pone-0051383-g004">Fig. 4</xref>
. The best-fit linear regression is shown as a solid line on each panel. The gray dashed line represents unity. When the heading errors are compared there was a correlation for the vestibular (panel A) and visual (panel B) conditions.</p>
</caption>
<graphic xlink:href="pone.0051383.g005"></graphic>
</fig>
<p>Correlations between the direction specific biases were found in the visual, vestibular, and spoken conditions. These correlations using the data pooled across all ten subjects are shown in
<xref ref-type="fig" rid="pone-0051383-g006">Fig. 6</xref>
. The strongest correlation was between the visual and spoken condition (
<xref ref-type="fig" rid="pone-0051383-g006">Fig. 6A, R</xref>
 = 0.91, p<<0.001), with the direction specific bias for the spoken condition being about half that for visual headings. This difference in magnitude was evident from the slope of the linear regression (m = 0.49) and the standard deviations of the bias (13.6 for visual vs. 7.1 for spoken). Thus the direction specific bias curves for the visual and spoken conditions had a similar shape, although the bias was smaller in the spoken condition. Although the correlations between the visual-vestibular and vestibular-spoken conditions remained significant (
<xref ref-type="fig" rid="pone-0051383-g006">Fig. 6B and 6C</xref>
) they were weaker. These weaker correlations were largely due to the vestibular condition having a direction specific bias that was large for headings with a forward component but much smaller for headings with a backward component.</p>
<fig id="pone-0051383-g006" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0051383.g006</object-id>
<label>Figure 6</label>
<caption>
<title>Correlations in direction specific bias between subjects for 3 stimulus conditions.</title>
<p>Data were generated from responses combined across subjects. There is one data point for each of the 72 stimulus headings. The same error could occur at multiple headings, so any given error in one sensory modality could correspond to multiple errors in another. Slope (m), correlation coefficient (R), and p-value are given for each condition. A solid line represents the best-fit linear regression. The gray dashed line represents unity. Panel A: Visual-spoken. Panel B: Visual-vestibular. Panel C: Vestibular-spoken.</p>
</caption>
<graphic xlink:href="pone.0051383.g006"></graphic>
</fig>
<p>Further analysis focused on individual subjects to determine to what degree these trends were present at the individual level. The psychometric fitting technique was robust enough to apply to individual subjects (
<xref ref-type="fig" rid="pone-0051383-g007">Fig. 7</xref>
) permitting individual direction specific bias curves to be determined (
<xref ref-type="fig" rid="pone-0051383-g008">Fig. 8</xref>
). The trend of overestimating the lateral heading component, with biases in the direction of heading, was often similar to that seen in the combined data, but with significant variation in the magnitude of the bias between subjects (e.g.
<xref ref-type="fig" rid="pone-0051383-g008">Fig. 8A</xref>
vs. 8D). The visual and spoken heading curves often had a similar shape for each subject. For instance, in subject #10 there was a trend towards a larger bias in heading perception for visual headings with a backward and left component (−90 to −180°,
<xref ref-type="fig" rid="pone-0051383-g008">Fig. 8E</xref>
). In such cases there was often a hint of a similar direction specific bias in the spoken condition (
<xref ref-type="fig" rid="pone-0051383-g008">Fig. 8F</xref>
), with fewer consistent similarities between the visual and vestibular conditions (
<xref ref-type="fig" rid="pone-0051383-g008">Fig. 8A and B</xref>
vs. 8D and E).</p>
<fig id="pone-0051383-g007" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0051383.g007</object-id>
<label>Figure 7</label>
<caption>
<title>The psychometric method of determining perceived heading.</title>
<p>This example is for the visual stimulus in subject #4. Each point represents 2 stimulus presentations. The figure is analogous to
<xref ref-type="fig" rid="pone-0051383-g003">Fig. 3</xref>
except that data from a single subject are shown. Panel A: The reference heading is 45° right. Panel B: The reference heading is 90° right.</p>
</caption>
<graphic xlink:href="pone.0051383.g007"></graphic>
</fig>
<fig id="pone-0051383-g008" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0051383.g008</object-id>
<label>Figure 8</label>
<caption>
<title>Direction specific bias for two subjects (#9 and #10) for three stimulus conditions.</title>
<p>These plots are analogous to
<xref ref-type="fig" rid="pone-0051383-g004">Fig. 4A–C</xref>
, but are for specific subjects rather than the whole population. The thick line represents the mean value, and the two thin lines represent the 95% CI based on 100 fits with resampling of the data with each iteration. Panels A and D: Vestibular motion. Panels B and E: Visual motion (optic flow). Panels C and F: The subject orients a dial based on a spoken heading.</p>
</caption>
<graphic xlink:href="pone.0051383.g008"></graphic>
</fig>
<p>The amount of variation in the direction specific bias was quantified by taking the standard deviation (SD) of the bias across stimulus headings. Values for example subjects are shown (
<xref ref-type="fig" rid="pone-0051383-g008">Fig. 8</xref>
). For every subject the direction specific bias was greater in the visual condition than the vestibular condition (
<xref ref-type="fig" rid="pone-0051383-g009">Fig. 9</xref>
). The visual condition also had a greater direction specific bias than the spoken condition in all but one subject. The magnitude of the variation in direction specific bias in one condition was not correlated with that found in the other stimulus conditions (Pearson correlation coefficient, p>0.1 for all); thus subjects with larger biases in the visual condition did not tend to have larger biases in the spoken or vestibular conditions, and vice versa. Individual variation in biases among subjects was examined by looking at biases for each subject at ±45° and 0° (
<xref ref-type="fig" rid="pone-0051383-g010">Fig. 10</xref>
). For the vestibular and visual conditions 8/10 subjects had positive biases with a 45° stimulus and negative biases at −45°. With the spoken condition the biases were more variable between subjects but were usually smaller.</p>
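Summarizing a bias curve by its standard deviation across stimulus headings, as done above, can be expressed as follows. This is an illustrative sketch; the population rather than sample SD is an assumption here, as the paper does not specify which was used.

```python
import math

def bias_sd(biases):
    """Standard deviation of the direction specific bias across the
    tested headings: a single number summarizing how strongly the
    bias varies with heading direction (0 for a flat bias curve)."""
    n = len(biases)
    mean = sum(biases) / n
    return math.sqrt(sum((b - mean) ** 2 for b in biases) / n)
```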
<fig id="pone-0051383-g009" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0051383.g009</object-id>
<label>Figure 9</label>
<caption>
<title>Standard deviation of direction specific biases by subject for the visual, vestibular, and spoken heading estimation conditions.</title>
</caption>
<graphic xlink:href="pone.0051383.g009"></graphic>
</fig>
<fig id="pone-0051383-g010" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0051383.g010</object-id>
<label>Figure 10</label>
<caption>
<title>Biases for each subject at the reference headings of −45° (upward pointing filled triangle), 0° (gray circle), and 45° (downward pointing open triangle).</title>
<p>These biases were calculated using the psychometric method shown in
<xref ref-type="fig" rid="pone-0051383-g007">Fig. 7</xref>
. Error bars represent 95% confidence intervals. Panel A: Vestibular headings. Panel B: Visual headings. Panel C: Spoken headings.</p>
</caption>
<graphic xlink:href="pone.0051383.g010"></graphic>
</fig>
<p>Even though the magnitudes of the biases were not correlated, potential correlations in the shape of the bias curves were explored by determining the slope and correlation coefficient between each pair of the 3 test conditions (
<xref ref-type="fig" rid="pone-0051383-g011">Fig. 11</xref>
). Similar to that seen in the combined data, there was a highly significant correlation between the direction specific biases in the visual and spoken condition in every subject (
<xref ref-type="fig" rid="pone-0051383-g012">Fig. 12</xref>
). The slope of the correlation between visual and spoken conditions was consistent with the spoken condition having a smaller bias in almost all subjects (
<xref ref-type="fig" rid="pone-0051383-g012">Fig. 12A</xref>
, mean m = 0.42±0.25, range 0.21 to 1.00). The correlation coefficient was highly significant in every individual (
<xref ref-type="fig" rid="pone-0051383-g012">Fig. 12B</xref>
, mean R = 0.65, range 0.40 to 0.81); for these correlations with 72 data points, p<0.001 for |R|>0.38.</p>
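The significance criterion quoted above (p<0.001 for |R|>0.38 with 72 data points) follows from the standard t-test for a Pearson correlation, where t = r·sqrt((n−2)/(1−r²)) with n−2 degrees of freedom. A stdlib-only sketch (function names are illustrative):

```python
import math

def pearson_r(xs, ys):
    # Pearson product-moment correlation between two paired samples.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def t_from_r(r, n):
    # t statistic for testing r != 0 with n pairs (n - 2 degrees of freedom).
    return r * math.sqrt((n - 2) / (1.0 - r * r))
```

For r = 0.38 and n = 72 this gives t ≈ 3.44 with 70 degrees of freedom, which corresponds to a two-tailed p of about 0.001, consistent with the threshold quoted.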
<fig id="pone-0051383-g011" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0051383.g011</object-id>
<label>Figure 11</label>
<caption>
<title>Correlations for heading dependent bias paired across heading estimation tasks from different sensory modalities.</title>
<p>Data shown are for subject #9 (filled circles) and #10 (open circles). These are the same subjects and data shown in
<xref ref-type="fig" rid="pone-0051383-g008">Figure 8</xref>
, but re-plotted to demonstrate the correlations. For this number of data points p<0.001 for R≥0.38; thus all the correlations shown are highly significant except that for subject #10 in panel C (p = 0.03). Panel A: Visual-spoken. Panel B: Visual-vestibular. Panel C: Vestibular-spoken.</p>
</caption>
<graphic xlink:href="pone.0051383.g011"></graphic>
</fig>
<fig id="pone-0051383-g012" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0051383.g012</object-id>
<label>Figure 12</label>
<caption>
<title>Correlations in heading bias between the 3 test conditions by subject.</title>
<p>Panel A: Slope of the correlation. Panel B: Partial correlation coefficient (R). For R>0.38 (line in panel B) the correlation is highly significant (p<0.001), and R>0.30 remains significant (p<0.01).</p>
</caption>
<graphic xlink:href="pone.0051383.g012"></graphic>
</fig>
<p>The correlations between the visual-vestibular and vestibular-spoken direction dependent heading biases were weaker and less consistent than those seen for the visual-spoken comparison. For the visual-vestibular comparison 9/10 subjects had a positive correlation, but the slope was more variable (
<xref ref-type="fig" rid="pone-0051383-g012">Fig. 12A</xref>
, m = 0.49±0.73, range −1.33 to 1.41) and the correlation coefficient was lower in all but one subject when compared with visual-spoken (
<xref ref-type="fig" rid="pone-0051383-g012">Fig. 12B</xref>
, mean R = 0.34±0.28). Subject #6 was an outlier with a large inverse correlation between visual and vestibular bias curves. The atypical results in this subject may have been related to difficulty with the task, as demonstrated by the large variability in his responses. The vestibular-spoken comparison had a positive correlation in only 7/10 subjects (m = 0.29±0.49, range −0.34 to 0.95).</p>
<p>There was no correlation between subject age and either accuracy or precision for any of the stimulus types (Pearson correlation, p>0.05 for each). No effects of gender or handedness were seen, but the study included only 3 females and 1 left-handed individual.</p>
</sec>
<sec id="s4">
<title>Discussion</title>
<p>The current data establish that humans overestimate the lateral component of heading with both visual and vestibular stimuli, and to a lesser extent with spoken headings. These findings are unexpected given previous reports of visual heading estimation. Several prior studies have demonstrated that visual heading is underestimated
<xref ref-type="bibr" rid="pone.0051383-DAvossa1">[10]</xref>
,
<xref ref-type="bibr" rid="pone.0051383-Johnston1">[13]</xref>
,
<xref ref-type="bibr" rid="pone.0051383-Llewellyn1">[14]</xref>
, or underestimated in most subjects
<xref ref-type="bibr" rid="pone.0051383-Li2">[11]</xref>
. These prior studies may not have demonstrated the large heading errors because the range of headings was limited to at most ±25°. The participants of these studies may have known in advance that the potential headings were limited to this range, or determined this after early stimulus presentations, and this knowledge may have influenced their responses accordingly. When a cursor is used to measure heading perception the range of responses is also limited by the size of the display screen
<xref ref-type="bibr" rid="pone.0051383-DAvossa1">[10]</xref>
,
<xref ref-type="bibr" rid="pone.0051383-Li2">[11]</xref>
,
<xref ref-type="bibr" rid="pone.0051383-Johnston1">[13]</xref>
.</p>
<p>When the headings were limited to ±45° in the current paper (
<xref ref-type="fig" rid="pone-0051383-g002">Fig. 2</xref>
) the biases were found to be much diminished, in line with previous papers that also tested a limited range
<xref ref-type="bibr" rid="pone.0051383-DAvossa1">[10]</xref>
,
<xref ref-type="bibr" rid="pone.0051383-Li2">[11]</xref>
,
<xref ref-type="bibr" rid="pone.0051383-Johnston1">[13]</xref>
,
<xref ref-type="bibr" rid="pone.0051383-Warren4">[15]</xref>
. For visual and vestibular headings at the extreme of the range a small underestimate was observed, consistent with the underestimates reported in other studies
<xref ref-type="bibr" rid="pone.0051383-DAvossa1">[10]</xref>
. This might be explained by subjects ‘leaving room’ for more extreme stimuli by not choosing the maximal excursion. This strongly implies that limits on the range of headings tested can make the direction specific biases smaller and in some situations even reverse them. Limiting the stimuli produced similar results whether or not the responses were also limited (
<xref ref-type="fig" rid="pone-0051383-g002">Fig. 2</xref>
).</p>
<p>One previous study reported large overestimates in perceived headings with a visual stimulus. A heading of 20° (the largest used in that study) was overestimated by about 25°, with headings closer to straight ahead underestimated by a smaller amount
<xref ref-type="bibr" rid="pone.0051383-Telford1">[12]</xref>
. It is unclear why Telford and Howard were able to see large heading biases within a limited range of headings when others did not. These authors suggested their finding could be due to their stimulus, which included vertical bars rather than the typical dots. Although not mentioned by these authors, it is also possible that the difference arose because their method of collecting responses did not limit the reported direction. In the same study the large biases were nearly eliminated when the subject was able to move their head freely, and in the vestibular heading conditions. The order in which the stimuli were tested was not specified, but it is possible that subjects ‘discovered’ during the course of the experiment that the range of possible headings was smaller than initially estimated, which could have influenced subsequent perception. The current study demonstrates that the lateral aspect of visual heading is often greatly overestimated. This finding is likely because neither the range of possible stimuli nor the range of responses was limited by the experimental design, so the subject’s prior probability distribution was less likely to be constrained.</p>
<p>The current study demonstrates that vestibular heading was also overestimated relative to the fore-aft axis. To our knowledge only one previously published study has found large overestimates in heading, which was discussed above with regard to visual heading
<xref ref-type="bibr" rid="pone.0051383-Telford1">[12]</xref>
. However other studies have demonstrated overestimates of about 5°
<xref ref-type="bibr" rid="pone.0051383-Warren4">[15]</xref>
and overestimates in 1 of 8 subjects
<xref ref-type="bibr" rid="pone.0051383-Li2">[11]</xref>
. Although Telford and Howard reported findings similar to the present paper for a visual heading task, they found an opposite result for their vestibular heading task: a slight tendency to underestimate heading angles in darkness. There are several potential reasons for these differences. First, as mentioned previously, the subjects likely had prior knowledge that the range of vestibular headings would be limited to ±20°, because the experiment was designed so that subjects were given eight practice trials with feedback prior to data collection, and they may have limited their responses accordingly. Second, the translation occurred along a fixed track with the subject rotated prior to translation, so there may have been other cues to their orientation. Third, the stimulus also included a much longer displacement (600 cm) and longer duration (11 s) than the stimuli used here, so it is possible that heading estimation depends on stimulus duration or displacement. Fourth, their stimulus did not deliver pressure to the head during the head-free condition, and these absent pressure cues may have altered the perception. Finally, the number of subjects in the Telford and Howard study was small (5 subjects), with few stimulus presentations per subject (18), so there was less power to average out noise in the results. It is not clear why this earlier study found vestibular headings to be slightly underestimated, but it may be a combination of factors, perhaps including overcorrection of perceptual errors after the practice trials with feedback.</p>
<p>In this study, the presence of a visual fixation point had no significant effect on heading perception. Although there is a rich literature demonstrating a visual fixation point decreases the threshold of rotation perception
<xref ref-type="bibr" rid="pone.0051383-Roggeveen1">[39]</xref>
–
<xref ref-type="bibr" rid="pone.0051383-Huang1">[44]</xref>
, a fixation point does not influence translation perception
<xref ref-type="bibr" rid="pone.0051383-Benson2">[41]</xref>
. This study is consistent with the report of Benson and Brown, but also demonstrates that a visual fixation point is not needed as a reference point for heading determination.</p>
<p>In the visual heading experiments a fixation point was used for all trials. This was because many of the prior experiments on visual heading also used a fixation point
<xref ref-type="bibr" rid="pone.0051383-Crowell2">[7]</xref>
,
<xref ref-type="bibr" rid="pone.0051383-Gu1">[8]</xref>
,
<xref ref-type="bibr" rid="pone.0051383-DAvossa1">[10]</xref>
. If a fixation point were not used, the initial eye position would not be controlled and could influence heading perception
<xref ref-type="bibr" rid="pone.0051383-Johnston1">[13]</xref>
. However, in studies where no fixation point was used, heading perception was thought to be similar to that in previous studies that did use a fixation point
<xref ref-type="bibr" rid="pone.0051383-Li1">[9]</xref>
.</p>
<p>Visual heading discrimination is excellent when differentiating headings near straight ahead
<xref ref-type="bibr" rid="pone.0051383-Warren2">[5]</xref>
,
<xref ref-type="bibr" rid="pone.0051383-Britten2">[25]</xref>
,
<xref ref-type="bibr" rid="pone.0051383-Royden1">[45]</xref>
but becomes much less precise for more lateral headings
<xref ref-type="bibr" rid="pone.0051383-Crowell1">[3]</xref>
,
<xref ref-type="bibr" rid="pone.0051383-Gu1">[8]</xref>
. In the current study the fall-off in visual heading sensitivity with eccentric headings was not as great as previously described
<xref ref-type="bibr" rid="pone.0051383-Crowell1">[3]</xref>
, possibly because the larger horizontal field of view (FOV) in the current experiment (98°) provided more optic flow information
<xref ref-type="bibr" rid="pone.0051383-Koenderink1">[16]</xref>
. The precision of heading estimation in the current study was only slightly worse than that previously described in a forced choice discrimination technique
<xref ref-type="bibr" rid="pone.0051383-Gu1">[8]</xref>
using a similar display size (90°). This difference may be due to variability in orienting the dial in the current experiments, which would not be an issue in a forced choice discrimination task.</p>
<p>The accuracy of heading perception is likely related to the size of the FOV: heading is theoretically nearly impossible to determine with a FOV <2–3°, and accuracy reaches a theoretical maximum with a FOV of about 100°
<xref ref-type="bibr" rid="pone.0051383-Koenderink1">[16]</xref>
. Although the FOV used in the current study (98°) was larger than that in some previous work on heading estimation, e.g. 45°
<xref ref-type="bibr" rid="pone.0051383-DAvossa1">[10]</xref>
or 60°
<xref ref-type="bibr" rid="pone.0051383-Telford1">[12]</xref>
, the FOV did not include the full visual field and covered only 27% of the full range of headings (360°). Thus for visual headings at 50° and further lateral the focus of expansion (FOE) was not visible. It has previously been suggested that triangulation error may cause overestimation of headings from optic flow when the FOE is outside the FOV, based on postural data
<xref ref-type="bibr" rid="pone.0051383-Bardy1">[46]</xref>
as well as heading perception
<xref ref-type="bibr" rid="pone.0051383-Li2">[11]</xref>
. The Li et al. study on heading determination found overestimation in only 1 of 8 human subjects and that subject had advanced retinitis pigmentosa and was consciously aware of the strategy
<xref ref-type="bibr" rid="pone.0051383-Li2">[11]</xref>
suggesting that this strategy is not generally applicable. A study that included large headings of up to 90°, many of which had a FOE outside the 53° FOV, also found overestimates but, as with the current data, this was true even for headings with the FOE within the FOV
<xref ref-type="bibr" rid="pone.0051383-Warren4">[15]</xref>
. Triangulation error predicts that the FOE will be estimated as too lateral only when the FOE is not visible because when it is visible it may be directly identified or estimated using velocity vectors calculated from fiducials on both sides of the FOE. In the current data the bias is seen well before the limits of the FOV (
<xref ref-type="fig" rid="pone-0051383-g002">Fig. 2B</xref>
and
<xref ref-type="fig" rid="pone-0051383-g004">4B</xref>
) so the bias cannot be explained by errors in triangulation. Furthermore, in the current data the overestimation bias reaches a maximum at about 45° (
<xref ref-type="fig" rid="pone-0051383-g004">Fig. 4B</xref>
) and the bias decreases with further lateral headings out to about 100°. A similar effect has also previously been described for headings in the range of 0 to 90°
<xref ref-type="bibr" rid="pone.0051383-Warren4">[15]</xref>
. These observations are not consistent with this bias being due to triangulation error, as that theory predicts the maximum bias near 90°. Thus the biases seen in visual heading estimation are not likely to be related to the size of the FOV or to triangulation of the FOE when it is outside the FOV.</p>
<p>How the heading is reported is a potentially important issue in interpreting heading estimates. In prior studies the perceived heading has been reported either with a cursor on a screen which allows the subject to make a direct mapping between the perceived heading and the visual display
<xref ref-type="bibr" rid="pone.0051383-DAvossa1">[10]</xref>
,
<xref ref-type="bibr" rid="pone.0051383-Li2">[11]</xref>
,
<xref ref-type="bibr" rid="pone.0051383-Johnston1">[13]</xref>
or by orienting an object in space which is potentially more dependent on haptic influences
<xref ref-type="bibr" rid="pone.0051383-Telford1">[12]</xref>
,
<xref ref-type="bibr" rid="pone.0051383-Warren4">[15]</xref>
. In the current study the second method was used because the full range of headings cannot be represented within the visual field, and this technique was felt to be more appropriate for reporting non-visual heading perception. With either of these techniques the measured headings are potentially influenced by haptic and motor systems, which could also influence the bias estimate. One way to eliminate this influence would be to study heading discrimination, i.e. using a forced-choice task to report the direction of a test stimulus relative to a reference stimulus or position (e.g. straight ahead). Although discrimination methods have been used extensively
<xref ref-type="bibr" rid="pone.0051383-Warren1">[2]</xref>
,
<xref ref-type="bibr" rid="pone.0051383-Crowell2">[7]</xref>
,
<xref ref-type="bibr" rid="pone.0051383-Gu1">[8]</xref>
,
<xref ref-type="bibr" rid="pone.0051383-Fetsch3">[34]</xref>
, they do not permit measurement of bias. In the current study several control experiments were conducted to measure the potential bias introduced by these haptic and motor influences. First, a subthreshold (ST) vestibular stimulus was used to see if subjects had a tendency to choose some headings over others (
<xref ref-type="fig" rid="pone-0051383-g001">Fig. 1C</xref>
), which was not the case. Second, the perceived direction of spoken headings was measured to provide a metric for spatial cognition, motor, and haptic biases independent of sensory manipulation. If the pattern of biases seen with visual and vestibular heading estimation (
<xref ref-type="fig" rid="pone-0051383-g004">Fig. 4A and B</xref>
) were similar to those measured using a spoken stimulus (
<xref ref-type="fig" rid="pone-0051383-g004">Fig. 4C</xref>
) it would imply that the observed biases are primarily due to motor or haptic biases, but this was not the case. Although there was some bias in the spoken condition, it was minimal for angles near straight ahead and much smaller than that seen for visual and vestibular stimuli at other angles. Third, when the spoken, vestibular, and visual conditions were repeated using a limited range of ±45° (
<xref ref-type="fig" rid="pone-0051383-g002">Fig. 2</xref>
) the limited range had a much greater influence on the visual and vestibular biases than on the biases in the perception of spoken headings. Thus it seems most likely that the major influence on the biases was related to heading estimation from the sensory stimuli rather than to motor or haptic factors.</p>
<p>An interesting result of this study is that for the spoken heading condition there was very little bias for headings near straight ahead. Over the range ±20° the average bias was <1° at every heading, even when the full range of headings was presented (
<xref ref-type="fig" rid="pone-0051383-g004">Fig. 4C</xref>
). However, there was significant direction specific bias for both the visual and vestibular conditions within this range: at 20° the mean visual bias was 7.3° and the mean vestibular bias was 4.3°. Thus, when the perceived headings in the spoken condition are correlated with the visual (
<xref ref-type="fig" rid="pone-0051383-g006">Fig. 6A</xref>
) or vestibular (
<xref ref-type="fig" rid="pone-0051383-g006">Fig. 6C</xref>
) there appears to be a non-linear relationship between the biases near 0°. This may be evidence for a prior expectation of straight-ahead movement, or it may be due to better tuning toward straight ahead, since this direction dominates our day-to-day experience.</p>
<p>There is evidence that area MSTd is key to both visual and vestibular heading estimation
<xref ref-type="bibr" rid="pone.0051383-Britten2">[25]</xref>
,
<xref ref-type="bibr" rid="pone.0051383-Duffy1">[27]</xref>
,
<xref ref-type="bibr" rid="pone.0051383-Page1">[31]</xref>
,
<xref ref-type="bibr" rid="pone.0051383-Gu4">[47]</xref>
. Within MSTd not all headings are represented equally
<xref ref-type="bibr" rid="pone.0051383-Gu4">[47]</xref>
,
<xref ref-type="bibr" rid="pone.0051383-Duffy2">[48]</xref>
, and recent modeling using a population vector decoder
<xref ref-type="bibr" rid="pone.0051383-Georgopoulos1">[30]</xref>
,
<xref ref-type="bibr" rid="pone.0051383-Sanger1">[32]</xref>
,
<xref ref-type="bibr" rid="pone.0051383-Lappe1">[49]</xref>
has demonstrated that this can explain why both visual and vestibular heading discrimination becomes worse with more eccentric trajectories
<xref ref-type="bibr" rid="pone.0051383-Gu1">[8]</xref>
. This model also predicted that both visual and vestibular heading estimates would have large biases due to overestimation of the lateral component of the heading vector. With regard to these predictions, Gu et al. concluded that, “it is unlikely that humans or monkeys exhibit behavioral biases in heading estimation as large as those predicted by the population vector decoder, but at present there is no data to verify or contradict this assertion.”
<xref ref-type="bibr" rid="pone.0051383-Gu1">[8]</xref>
However, the current paper tested their predicted biases and demonstrated a trend similar to that predicted by their PVD model (
<xref ref-type="fig" rid="pone-0051383-g001">Fig. 1A, B, D</xref>
,
<xref ref-type="fig" rid="pone-0051383-g004">Fig. 4A, B</xref>
, and
<xref ref-type="fig" rid="pone-0051383-g005">Fig. 5</xref>
) for both visual and vestibular heading estimation.</p>
<p>There were some interesting differences in the predictions of the PVD model
<xref ref-type="bibr" rid="pone.0051383-Gu3">[29]</xref>
and the observed headings. The predicted trend in vestibular (
<xref ref-type="fig" rid="pone-0051383-g004">Fig. 4A</xref>
) heading estimation was qualitatively similar to the observations, but the observed biases were smaller than predicted, as can be seen from the regression slope of 0.34 (
<xref ref-type="fig" rid="pone-0051383-g005">Fig. 5A</xref>
). It was also of interest that the model predicted a larger bias for headings with a forward component (headings of −90 to 90°,
<xref ref-type="fig" rid="pone-0051383-g004">Fig. 4A</xref>
) than for those with a backward component, but this fore-aft difference in bias was not evident in the data. For visual headings the model predicts the opposite trend, with direction specific biases larger for backward motion (
<xref ref-type="fig" rid="pone-0051383-g004">Fig. 4B</xref>
), which was also not observed here. These discrepancies may reflect species differences or individual variation, as between-subject variation was considerable even in the current human data and the Gu et al. model was based on a sample of individual neurons from two monkeys. It is also possible that processing outside of MSTd influenced perception in ways the model did not capture. However, the similar trends between the PVD model predictions and the observed visual and vestibular heading errors suggest that these errors may be explained by the mechanism the PVD model proposes.</p>
<p>The relevance of the PVD model presumably does not depend on the range of headings tested, yet direction specific biases seem to be minimized in studies that tested a limited range of headings
<xref ref-type="bibr" rid="pone.0051383-DAvossa1">[10]</xref>
,
<xref ref-type="bibr" rid="pone.0051383-Li2">[11]</xref>
,
<xref ref-type="bibr" rid="pone.0051383-Johnston1">[13]</xref>
,
<xref ref-type="bibr" rid="pone.0051383-Warren4">[15]</xref>
. In the current study some overestimates were seen in the middle of the tested range (see 20–25° in
<xref ref-type="fig" rid="pone-0051383-g002">Fig. 2</xref>
) and similar behavior was also seen by others when the raw responses were considered
<xref ref-type="bibr" rid="pone.0051383-DAvossa1">[10]</xref>
. These overestimates, occurring away from the limits of the stimulus range, suggest that the PVD model remains relevant. The large biases are likely masked by the subject’s estimate of the stimulus range when subjects know, or learn during the experiment, that the range of headings is limited.</p>
<p>Although our data are consistent with these biases arising from the known physiology of MSTd, there are other possibilities. The population decoder model attributes these biases to over-representation of units sensitive to lateral headings relative to those sensitive to fore-aft headings, but such over-representation could also occur in other areas. Lateral headings are also over-represented in the otolith end organs. In the monkey ¾ of otolith afferents respond to ipsilateral tilt
<xref ref-type="bibr" rid="pone.0051383-Fernandez1">[50]</xref>
and the orientation of the human utricle also suggests better sensitivity to lateral motion
<xref ref-type="bibr" rid="pone.0051383-Rosenhall1">[51]</xref>
,
<xref ref-type="bibr" rid="pone.0051383-Rosenhall2">[52]</xref>
.</p>
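The lateral over-representation mechanism described in this paragraph can be illustrated numerically. The following is a minimal sketch, not the fitted model of Gu et al.: it assumes idealized cosine tuning and an arbitrary Gaussian clustering of preferred directions near ±90°, and decodes heading as a firing-rate-weighted vector sum. Even this toy population overestimates the eccentricity of a 30° heading while remaining unbiased at straight ahead.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: preferred directions oversampled near the lateral
# axes (+/-90 deg), mirrored so the distribution is symmetric about fore-aft.
# The 90/30 deg parameters are illustrative assumptions, not measured values.
half = rng.normal(90.0, 30.0, 1000)
prefs = np.deg2rad(np.concatenate([half, -half]))

def pvd_estimate(heading_deg, prefs):
    """Population vector decode: firing-rate-weighted sum of preferred vectors."""
    h = np.deg2rad(heading_deg)
    rates = 1.0 + np.cos(h - prefs)          # cosine tuning with baseline 1
    x = np.sum(rates * np.cos(prefs))
    y = np.sum(rates * np.sin(prefs))
    return np.rad2deg(np.arctan2(y, x))

print(pvd_estimate(0.0, prefs))    # unbiased at straight ahead, by symmetry
print(pvd_estimate(30.0, prefs))   # pulled toward 90 deg: eccentricity overestimated
```

With uniformly distributed preferred directions the same decoder would be unbiased everywhere; the bias arises purely from the lateral clustering.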
<p>The possibility that the observed direction specific biases were caused by a skewed internal representation of space was considered. The responses to sub-threshold (ST) stimuli (
<xref ref-type="fig" rid="pone-0051383-g001">Fig. 1C</xref>
) did not demonstrate a preference for certain headings, making it less likely that the observed biases were related to positioning of the dial alone. The spoken heading task was conceived as an estimate of each subject’s internal spatial representation. The spoken biases leave open the possibility that the observed biases are, at least in part, due to a distortion in spatial representation. However, the direction dependent biases to spoken headings were usually smaller than those seen for visual and vestibular stimuli (
<xref ref-type="fig" rid="pone-0051383-g001">Figs. 1E</xref>
and
<xref ref-type="fig" rid="pone-0051383-g004">4C</xref>
). It is possible that the larger heading bias in the visual and vestibular conditions arises from a sensory bias combined with this internal bias.</p>
<p>Since the spoken, visual, and vestibular tasks had qualitatively similar direction specific biases, the larger direction dependent biases seen during visual and vestibular heading estimation relative to the spoken condition could derive from an increased gain factor, perhaps reflecting the more realistic stimulus in the visual and vestibular conditions. This possibility was investigated by measuring the correlation between the shapes of these bias curves (
<xref ref-type="fig" rid="pone-0051383-g011">Figs. 11</xref>
and
<xref ref-type="fig" rid="pone-0051383-g012">12</xref>
). There was a strong correlation between the shapes of the spoken and visual bias curves, although the amplitude of the biases was smaller in the spoken condition in 9 of 10 subjects. The similarity of the visual and spoken bias curve shapes within each subject, despite variation between subjects, suggests a commonality between the visual and spoken heading estimation tasks that is lacking with the vestibular task, which correlated more weakly with both the spoken and visual tasks in almost every subject (
<xref ref-type="fig" rid="pone-0051383-g012">Fig. 12B</xref>
). One possible explanation for this finding is that subjects may imagine a visual heading during the spoken condition, potentially evoking a common mechanism. Given that the known physiology of MSTd can explain the observed heading biases
<xref ref-type="bibr" rid="pone.0051383-Gu1">[8]</xref>
it is less likely that a separate mechanism causes a smaller but otherwise similar bias in the spoken condition. The observation that the direction specific biases took a different shape in the vestibular condition may reflect direction specificity in the vestibular end organs
<xref ref-type="bibr" rid="pone.0051383-Fernandez1">[50]</xref>
.</p>
<p>The current data demonstrate that human heading perception is biased, causing eccentric headings to be overestimated for both visual and vestibular stimuli. This may go unnoticed during ordinary behavior, when heading estimation includes multiple sensory stimuli and is accompanied by immediate feedback. The bias likely occurs as a result of sensitivity to changes of heading being best near straight ahead. There are obvious advantages to having heading discrimination best near straight ahead, since this is the heading most commonly experienced and detection of slight deviations from it (such as when driving or running down a narrow path) is important. This could explain why MSTd, as well as the otolith organs, has a disproportionate number of units sensitive to the lateral component of motion. A PVD is a relatively straightforward and computationally efficient way for the central nervous system to interpret these heading signals, but it has the disadvantage of producing biases when a disproportionate number of units are tuned to detect lateral motion. Although a ML model would avoid such biases in visual and vestibular heading estimation
<xref ref-type="bibr" rid="pone.0051383-Gu1">[8]</xref>
it would be computationally more complex, and the current observations suggest it is not used: for heading perception, the central nervous system appears to adopt the more computationally efficient strategy rather than the unbiased one. There are certainly evolutionary pressures for heading estimation to be accurate over a range of angles, for instance when aiming at prey or when choosing a new course. However, most such situations provide immediate feedback, which may make the biases seen in the current study less relevant for natural activities, where speed may be paramount.</p>
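The PVD–ML trade-off can be illustrated with a toy maximum-likelihood decoder over the same kind of lateral-biased, cosine-tuned population (again an illustrative sketch with assumed parameters, not the decoder of Gu et al.). A grid search over the Poisson log-likelihood recovers the true heading despite the over-representation of lateral preferences, but at the cost of evaluating a full likelihood surface rather than computing a single vector sum.

```python
import numpy as np

rng = np.random.default_rng(1)

# Same hypothetical lateral-biased population assumed for the PVD sketch.
half = rng.normal(90.0, 30.0, 1000)
prefs = np.deg2rad(np.concatenate([half, -half]))

def tuning(heading_rad, prefs):
    # Idealized cosine tuning with baseline 1 (an assumption for illustration).
    return 1.0 + np.cos(heading_rad - prefs)

def ml_estimate(rates, prefs, step=0.5):
    """Grid-search ML decode under Poisson noise: argmax of sum(r*log f - f)."""
    grid = np.deg2rad(np.arange(-180.0, 180.0, step))
    f = tuning(grid[:, None], prefs[None, :])   # candidate tuning curves
    f = np.maximum(f, 1e-12)                    # guard against log(0)
    loglik = (rates * np.log(f) - f).sum(axis=1)
    return np.rad2deg(grid[np.argmax(loglik)])

rates = tuning(np.deg2rad(30.0), prefs)   # noiseless responses to a 30 deg heading
print(ml_estimate(rates, prefs))          # recovers ~30 deg: no lateral bias
```

Because the ML decoder uses the full shape of each tuning curve rather than only the preferred directions, the skewed distribution of preferences does not bias it; the price is the per-candidate likelihood evaluation, consistent with the computational-efficiency argument above.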
</sec>
</body>
<back>
<ack>
<p>I would like to thank Greg DeAngelis for providing the modeling data previously published
<xref ref-type="bibr" rid="pone.0051383-Gu1">[8]</xref>
and reviewing a pre-publication version of this manuscript. I thank Justin Chan and Shawn Olmstead-O’Leahey for technical assistance.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="pone.0051383-Gibson1">
<label>1</label>
<mixed-citation publication-type="other">Gibson JJ (1950) The perception of the visual world. Boston: Houghton Mifflin. 235 p.</mixed-citation>
</ref>
<ref id="pone.0051383-Warren1">
<label>2</label>
<mixed-citation publication-type="journal">
<name>
<surname>Warren</surname>
<given-names>WH</given-names>
<suffix>Jr</suffix>
</name>
,
<name>
<surname>Morris</surname>
<given-names>MW</given-names>
</name>
,
<name>
<surname>Kalish</surname>
<given-names>M</given-names>
</name>
(
<year>1988</year>
)
<article-title>Perception of translational heading from optical flow</article-title>
.
<source>J Exp Psychol Hum Percept Perform</source>
<volume>14</volume>
:
<fpage>646</fpage>
<lpage>660</lpage>
<pub-id pub-id-type="pmid">2974874</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0051383-Crowell1">
<label>3</label>
<mixed-citation publication-type="journal">
<name>
<surname>Crowell</surname>
<given-names>JA</given-names>
</name>
,
<name>
<surname>Banks</surname>
<given-names>MS</given-names>
</name>
(
<year>1996</year>
)
<article-title>Ideal observer for heading judgments</article-title>
.
<source>Vision Res</source>
<volume>36</volume>
:
<fpage>471</fpage>
<lpage>490</lpage>
<pub-id pub-id-type="pmid">8746236</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0051383-Britten1">
<label>4</label>
<mixed-citation publication-type="journal">
<name>
<surname>Britten</surname>
<given-names>KH</given-names>
</name>
(
<year>2008</year>
)
<article-title>Mechanisms of self-motion perception</article-title>
.
<source>Annu Rev Neurosci</source>
<volume>31</volume>
:
<fpage>389</fpage>
<lpage>410</lpage>
<pub-id pub-id-type="pmid">18558861</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0051383-Warren2">
<label>5</label>
<mixed-citation publication-type="journal">
<name>
<surname>Warren</surname>
<given-names>WH</given-names>
<suffix>Jr</suffix>
</name>
,
<name>
<surname>Hannon</surname>
<given-names>DJ</given-names>
</name>
(
<year>1988</year>
)
<article-title>Direction of self-motion perceived from optical flow</article-title>
.
<source>Nature</source>
<volume>336</volume>
:
<fpage>162</fpage>
<lpage>163</lpage>
</mixed-citation>
</ref>
<ref id="pone.0051383-Warren3">
<label>6</label>
<mixed-citation publication-type="journal">
<name>
<surname>Warren</surname>
<given-names>WH</given-names>
</name>
,
<name>
<surname>Kurtz</surname>
<given-names>KJ</given-names>
</name>
(
<year>1992</year>
)
<article-title>The role of central and peripheral vision in perceiving the direction of self-motion</article-title>
.
<source>Percept Psychophys</source>
<volume>51</volume>
:
<fpage>443</fpage>
<lpage>454</lpage>
<pub-id pub-id-type="pmid">1594434</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0051383-Crowell2">
<label>7</label>
<mixed-citation publication-type="journal">
<name>
<surname>Crowell</surname>
<given-names>JA</given-names>
</name>
,
<name>
<surname>Banks</surname>
<given-names>MS</given-names>
</name>
(
<year>1993</year>
)
<article-title>Perceiving heading with different retinal regions and types of optic flow</article-title>
.
<source>Percept Psychophys</source>
<volume>53</volume>
:
<fpage>325</fpage>
<lpage>337</lpage>
<pub-id pub-id-type="pmid">8483696</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0051383-Gu1">
<label>8</label>
<mixed-citation publication-type="journal">
<name>
<surname>Gu</surname>
<given-names>Y</given-names>
</name>
,
<name>
<surname>Fetsch</surname>
<given-names>CR</given-names>
</name>
,
<name>
<surname>Adeyemo</surname>
<given-names>B</given-names>
</name>
,
<name>
<surname>Deangelis</surname>
<given-names>GC</given-names>
</name>
,
<name>
<surname>Angelaki</surname>
<given-names>DE</given-names>
</name>
(
<year>2010</year>
)
<article-title>Decoding of MSTd population activity accounts for variations in the precision of heading perception</article-title>
.
<source>Neuron</source>
<volume>66</volume>
:
<fpage>596</fpage>
<lpage>609</lpage>
<pub-id pub-id-type="pmid">20510863</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0051383-Li1">
<label>9</label>
<mixed-citation publication-type="journal">
<name>
<surname>Li</surname>
<given-names>L</given-names>
</name>
,
<name>
<surname>Sweet</surname>
<given-names>BT</given-names>
</name>
,
<name>
<surname>Stone</surname>
<given-names>LS</given-names>
</name>
(
<year>2006</year>
)
<article-title>Humans can perceive heading without visual path information</article-title>
.
<source>J Vis</source>
<volume>6</volume>
:
<fpage>874</fpage>
<lpage>881</lpage>
<pub-id pub-id-type="pmid">17083281</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0051383-DAvossa1">
<label>10</label>
<mixed-citation publication-type="journal">
<name>
<surname>D'Avossa</surname>
<given-names>G</given-names>
</name>
,
<name>
<surname>Kersten</surname>
<given-names>D</given-names>
</name>
(
<year>1996</year>
)
<article-title>Evidence in human subjects for independent coding of azimuth and elevation for direction of heading from optic flow</article-title>
.
<source>Vision Res</source>
<volume>36</volume>
:
<fpage>2915</fpage>
<lpage>2924</lpage>
<pub-id pub-id-type="pmid">8917793</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0051383-Li2">
<label>11</label>
<mixed-citation publication-type="journal">
<name>
<surname>Li</surname>
<given-names>L</given-names>
</name>
,
<name>
<surname>Peli</surname>
<given-names>E</given-names>
</name>
,
<name>
<surname>Warren</surname>
<given-names>WH</given-names>
</name>
(
<year>2002</year>
)
<article-title>Heading perception in patients with advanced retinitis pigmentosa</article-title>
.
<source>Optom Vis Sci</source>
<volume>79</volume>
:
<fpage>581</fpage>
<lpage>589</lpage>
<pub-id pub-id-type="pmid">12322928</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0051383-Telford1">
<label>12</label>
<mixed-citation publication-type="journal">
<name>
<surname>Telford</surname>
<given-names>L</given-names>
</name>
,
<name>
<surname>Howard</surname>
<given-names>IP</given-names>
</name>
(
<year>1996</year>
)
<article-title>Role of optical flow field asymmetry in the perception of heading during linear motion</article-title>
.
<source>Percept Psychophys</source>
<volume>58</volume>
:
<fpage>283</fpage>
<lpage>288</lpage>
<pub-id pub-id-type="pmid">8838170</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0051383-Johnston1">
<label>13</label>
<mixed-citation publication-type="journal">
<name>
<surname>Johnston</surname>
<given-names>IR</given-names>
</name>
,
<name>
<surname>White</surname>
<given-names>GR</given-names>
</name>
,
<name>
<surname>Cumming</surname>
<given-names>RW</given-names>
</name>
(
<year>1973</year>
)
<article-title>The role of optical expansion patterns in locomotor control</article-title>
.
<source>Am J Psychol</source>
<volume>86</volume>
:
<fpage>311</fpage>
<lpage>324</lpage>
<pub-id pub-id-type="pmid">4775379</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0051383-Llewellyn1">
<label>14</label>
<mixed-citation publication-type="journal">
<name>
<surname>Llewellyn</surname>
<given-names>KR</given-names>
</name>
(
<year>1971</year>
)
<article-title>Visual guidance of locomotion</article-title>
.
<source>J Exp Psychol</source>
<volume>91</volume>
:
<fpage>245</fpage>
<lpage>261</lpage>
<pub-id pub-id-type="pmid">5134668</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0051383-Warren4">
<label>15</label>
<mixed-citation publication-type="journal">
<name>
<surname>Warren</surname>
<given-names>R</given-names>
</name>
(
<year>1976</year>
)
<article-title>The perception of egomotion</article-title>
.
<source>J Exp Psychol Hum Percept Perform</source>
<volume>2</volume>
:
<fpage>448</fpage>
<lpage>456</lpage>
<pub-id pub-id-type="pmid">993748</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0051383-Koenderink1">
<label>16</label>
<mixed-citation publication-type="journal">
<name>
<surname>Koenderink</surname>
<given-names>JJ</given-names>
</name>
,
<name>
<surname>van Doorn</surname>
<given-names>AJ</given-names>
</name>
(
<year>1987</year>
)
<article-title>Facts on optic flow</article-title>
.
<source>Biol Cybern</source>
<volume>56</volume>
:
<fpage>247</fpage>
<lpage>254</lpage>
<pub-id pub-id-type="pmid">3607100</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0051383-Ernst1">
<label>17</label>
<mixed-citation publication-type="journal">
<name>
<surname>Ernst</surname>
<given-names>MO</given-names>
</name>
,
<name>
<surname>Banks</surname>
<given-names>MS</given-names>
</name>
(
<year>2002</year>
)
<article-title>Humans integrate visual and haptic information in a statistically optimal fashion</article-title>
.
<source>Nature</source>
<volume>415</volume>
:
<fpage>429</fpage>
<lpage>433</lpage>
<pub-id pub-id-type="pmid">11807554</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0051383-Angelaki1">
<label>18</label>
<mixed-citation publication-type="journal">
<name>
<surname>Angelaki</surname>
<given-names>DE</given-names>
</name>
,
<name>
<surname>Gu</surname>
<given-names>Y</given-names>
</name>
,
<name>
<surname>Deangelis</surname>
<given-names>GC</given-names>
</name>
(
<year>2009</year>
)
<article-title>Multisensory integration: psychophysics, neurophysiology, and computation</article-title>
.
<source>Curr Opin Neurobiol</source>
<volume>19</volume>
:
<fpage>1</fpage>
<lpage>7</lpage>
<pub-id pub-id-type="pmid">19545995</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0051383-Girshick1">
<label>19</label>
<mixed-citation publication-type="journal">
<name>
<surname>Girshick</surname>
<given-names>AR</given-names>
</name>
,
<name>
<surname>Landy</surname>
<given-names>MS</given-names>
</name>
,
<name>
<surname>Simoncelli</surname>
<given-names>EP</given-names>
</name>
(
<year>2011</year>
)
<article-title>Cardinal rules: visual orientation perception reflects knowledge of environmental statistics</article-title>
.
<source>Nat Neurosci</source>
<volume>14</volume>
:
<fpage>926</fpage>
<lpage>932</lpage>
<pub-id pub-id-type="pmid">21642976</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0051383-Stocker1">
<label>20</label>
<mixed-citation publication-type="journal">
<name>
<surname>Stocker</surname>
<given-names>AA</given-names>
</name>
,
<name>
<surname>Simoncelli</surname>
<given-names>EP</given-names>
</name>
(
<year>2006</year>
)
<article-title>Noise characteristics and prior expectations in human visual speed perception</article-title>
.
<source>Nat Neurosci</source>
<volume>9</volume>
:
<fpage>578</fpage>
<lpage>585</lpage>
<pub-id pub-id-type="pmid">16547513</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0051383-MacNeilage1">
<label>21</label>
<mixed-citation publication-type="journal">
<name>
<surname>MacNeilage</surname>
<given-names>PR</given-names>
</name>
,
<name>
<surname>Banks</surname>
<given-names>MS</given-names>
</name>
,
<name>
<surname>DeAngelis</surname>
<given-names>GC</given-names>
</name>
,
<name>
<surname>Angelaki</surname>
<given-names>DE</given-names>
</name>
(
<year>2010</year>
)
<article-title>Vestibular heading discrimination and sensitivity to linear acceleration in head and world coordinates</article-title>
.
<source>J Neurosci</source>
<volume>30</volume>
:
<fpage>9084</fpage>
<lpage>9094</lpage>
<pub-id pub-id-type="pmid">20610742</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0051383-Fetsch1">
<label>22</label>
<mixed-citation publication-type="journal">
<name>
<surname>Fetsch</surname>
<given-names>CR</given-names>
</name>
,
<name>
<surname>Wang</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Gu</surname>
<given-names>Y</given-names>
</name>
,
<name>
<surname>Deangelis</surname>
<given-names>GC</given-names>
</name>
,
<name>
<surname>Angelaki</surname>
<given-names>DE</given-names>
</name>
(
<year>2007</year>
)
<article-title>Spatial reference frames of visual, vestibular, and multimodal heading signals in the dorsal subdivision of the medial superior temporal area</article-title>
.
<source>J Neurosci</source>
<volume>27</volume>
:
<fpage>700</fpage>
<lpage>712</lpage>
<pub-id pub-id-type="pmid">17234602</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0051383-Gu2">
<label>23</label>
<mixed-citation publication-type="journal">
<name>
<surname>Gu</surname>
<given-names>Y</given-names>
</name>
,
<name>
<surname>DeAngelis</surname>
<given-names>GC</given-names>
</name>
,
<name>
<surname>Angelaki</surname>
<given-names>DE</given-names>
</name>
(
<year>2007</year>
)
<article-title>A functional link between area MSTd and heading perception based on vestibular signals</article-title>
.
<source>Nat Neurosci</source>
<volume>10</volume>
:
<fpage>1038</fpage>
<lpage>1047</lpage>
<pub-id pub-id-type="pmid">17618278</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0051383-Ivanenko1">
<label>24</label>
<mixed-citation publication-type="journal">
<name>
<surname>Ivanenko</surname>
<given-names>YP</given-names>
</name>
,
<name>
<surname>Grasso</surname>
<given-names>R</given-names>
</name>
(
<year>1997</year>
)
<article-title>Integration of somatosensory and vestibular inputs in perceiving the direction of passive whole-body motion</article-title>
.
<source>Brain Res Cogn Brain Res</source>
<volume>5</volume>
:
<fpage>323</fpage>
<lpage>327</lpage>
<pub-id pub-id-type="pmid">9197519</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0051383-Britten2">
<label>25</label>
<mixed-citation publication-type="journal">
<name>
<surname>Britten</surname>
<given-names>KH</given-names>
</name>
,
<name>
<surname>van Wezel</surname>
<given-names>RJ</given-names>
</name>
(
<year>1998</year>
)
<article-title>Electrical microstimulation of cortical area MST biases heading perception in monkeys</article-title>
.
<source>Nat Neurosci</source>
<volume>1</volume>
:
<fpage>59</fpage>
<lpage>63</lpage>
<pub-id pub-id-type="pmid">10195110</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0051383-Chowdhury1">
<label>26</label>
<mixed-citation publication-type="journal">
<name>
<surname>Chowdhury</surname>
<given-names>SA</given-names>
</name>
,
<name>
<surname>Takahashi</surname>
<given-names>K</given-names>
</name>
,
<name>
<surname>DeAngelis</surname>
<given-names>GC</given-names>
</name>
,
<name>
<surname>Angelaki</surname>
<given-names>DE</given-names>
</name>
(
<year>2009</year>
)
<article-title>Does the middle temporal area carry vestibular signals related to self-motion?</article-title>
<source>J Neurosci</source>
<volume>29</volume>
:
<fpage>12020</fpage>
<lpage>12030</lpage>
<pub-id pub-id-type="pmid">19776288</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0051383-Duffy1">
<label>27</label>
<mixed-citation publication-type="journal">
<name>
<surname>Duffy</surname>
<given-names>CJ</given-names>
</name>
(
<year>1998</year>
)
<article-title>MST neurons respond to optic flow and translational movement</article-title>
.
<source>J Neurophysiol</source>
<volume>80</volume>
:
<fpage>1816</fpage>
<lpage>1827</lpage>
<pub-id pub-id-type="pmid">9772241</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0051383-Fetsch2">
<label>28</label>
<mixed-citation publication-type="journal">
<name>
<surname>Fetsch</surname>
<given-names>CR</given-names>
</name>
,
<name>
<surname>Pouget</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>DeAngelis</surname>
<given-names>GC</given-names>
</name>
,
<name>
<surname>Angelaki</surname>
<given-names>DE</given-names>
</name>
(
<year>2012</year>
)
<article-title>Neural correlates of reliability-based cue weighting during multisensory integration</article-title>
.
<source>Nat Neurosci</source>
<volume>15</volume>
:
<fpage>146</fpage>
<lpage>154</lpage>
<pub-id pub-id-type="pmid">22101645</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0051383-Gu3">
<label>29</label>
<mixed-citation publication-type="journal">
<name>
<surname>Gu</surname>
<given-names>Y</given-names>
</name>
,
<name>
<surname>Angelaki</surname>
<given-names>DE</given-names>
</name>
,
<name>
<surname>Deangelis</surname>
<given-names>GC</given-names>
</name>
(
<year>2008</year>
)
<article-title>Neural correlates of multisensory cue integration in macaque MSTd</article-title>
.
<source>Nat Neurosci</source>
<volume>11</volume>
:
<fpage>1201</fpage>
<lpage>1210</lpage>
<pub-id pub-id-type="pmid">18776893</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0051383-Georgopoulos1">
<label>30</label>
<mixed-citation publication-type="journal">
<name>
<surname>Georgopoulos</surname>
<given-names>AP</given-names>
</name>
,
<name>
<surname>Schwartz</surname>
<given-names>AB</given-names>
</name>
,
<name>
<surname>Kettner</surname>
<given-names>RE</given-names>
</name>
(
<year>1986</year>
)
<article-title>Neuronal population coding of movement direction</article-title>
.
<source>Science</source>
<volume>233</volume>
:
<fpage>1416</fpage>
<lpage>1419</lpage>
<pub-id pub-id-type="pmid">3749885</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0051383-Page1">
<label>31</label>
<mixed-citation publication-type="journal">
<name>
<surname>Page</surname>
<given-names>WK</given-names>
</name>
,
<name>
<surname>Duffy</surname>
<given-names>CJ</given-names>
</name>
(
<year>2003</year>
)
<article-title>Heading representation in MST: sensory interactions and population encoding</article-title>
.
<source>J Neurophysiol</source>
<volume>89</volume>
:
<fpage>1994</fpage>
<lpage>2013</lpage>
<pub-id pub-id-type="pmid">12686576</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0051383-Sanger1">
<label>32</label>
<mixed-citation publication-type="journal">
<name>
<surname>Sanger</surname>
<given-names>TD</given-names>
</name>
(
<year>1996</year>
)
<article-title>Probability density estimation for the interpretation of neural population codes</article-title>
.
<source>J Neurophysiol</source>
<volume>76</volume>
:
<fpage>2790</fpage>
<lpage>2793</lpage>
<pub-id pub-id-type="pmid">8899646</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0051383-Grabherr1">
<label>33</label>
<mixed-citation publication-type="journal">
<name>
<surname>Grabherr</surname>
<given-names>L</given-names>
</name>
,
<name>
<surname>Nicoucar</surname>
<given-names>K</given-names>
</name>
,
<name>
<surname>Mast</surname>
<given-names>FW</given-names>
</name>
,
<name>
<surname>Merfeld</surname>
<given-names>DM</given-names>
</name>
(
<year>2008</year>
)
<article-title>Vestibular thresholds for yaw rotation about an earth-vertical axis as a function of frequency</article-title>
.
<source>Exp Brain Res</source>
<volume>186</volume>
:
<fpage>677</fpage>
<lpage>681</lpage>
<pub-id pub-id-type="pmid">18350283</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0051383-Fetsch3">
<label>34</label>
<mixed-citation publication-type="journal">
<name>
<surname>Fetsch</surname>
<given-names>CR</given-names>
</name>
,
<name>
<surname>Turner</surname>
<given-names>AH</given-names>
</name>
,
<name>
<surname>Deangelis</surname>
<given-names>GC</given-names>
</name>
,
<name>
<surname>Angelaki</surname>
<given-names>DE</given-names>
</name>
(
<year>2009</year>
)
<article-title>Dynamic re-weighting of visual and vestibular cues during self-motion perception</article-title>
.
<source>J Neurosci</source>
<volume>29</volume>
:
<fpage>15601</fpage>
<lpage>15612</lpage>
<pub-id pub-id-type="pmid">20007484</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0051383-Roditi1">
<label>35</label>
<mixed-citation publication-type="journal">
<name>
<surname>Roditi</surname>
<given-names>RE</given-names>
</name>
,
<name>
<surname>Crane</surname>
<given-names>BT</given-names>
</name>
(
<year>2012</year>
)
<article-title>Directional asymmetries and age effects in human self-motion perception</article-title>
.
<source>J Assoc Res Otolaryngol</source>
<volume>13</volume>
:
<fpage>381</fpage>
<lpage>401</lpage>
<pub-id pub-id-type="pmid">22402987</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0051383-Benson1">
<label>36</label>
<mixed-citation publication-type="journal">
<name>
<surname>Benson</surname>
<given-names>AJ</given-names>
</name>
,
<name>
<surname>Hutt</surname>
<given-names>EC</given-names>
</name>
,
<name>
<surname>Brown</surname>
<given-names>SF</given-names>
</name>
(
<year>1989</year>
)
<article-title>Thresholds for the perception of whole body angular movement about a vertical axis</article-title>
.
<source>Aviat Space Environ Med</source>
<volume>60</volume>
:
<fpage>205</fpage>
<lpage>213</lpage>
<pub-id pub-id-type="pmid">2712798</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0051383-Roditi2">
<label>37</label>
<mixed-citation publication-type="other">Roditi RE, Crane BT (2011) Asymmetries in human vestibular perception thresholds. Association for Research in Otolaryngology, 34th Annual Meeting. Baltimore, MD. 1006.</mixed-citation>
</ref>
<ref id="pone.0051383-Wichmann1">
<label>38</label>
<mixed-citation publication-type="journal">
<name>
<surname>Wichmann</surname>
<given-names>FA</given-names>
</name>
,
<name>
<surname>Hill</surname>
<given-names>NJ</given-names>
</name>
(
<year>2001</year>
)
<article-title>The psychometric function: I. Fitting, sampling, and goodness of fit</article-title>
.
<source>Percept Psychophys</source>
<volume>63</volume>
:
<fpage>1293</fpage>
<lpage>1313</lpage>
<pub-id pub-id-type="pmid">11800458</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0051383-Roggeveen1">
<label>39</label>
<mixed-citation publication-type="journal">
<name>
<surname>Roggeveen</surname>
<given-names>LJ</given-names>
</name>
,
<name>
<surname>Nijhoff</surname>
<given-names>P</given-names>
</name>
(
<year>1956</year>
)
<article-title>The normal and pathological threshold of the perception of angular accelerations for the optogyral illusion and the turning sensation</article-title>
.
<source>Acta Otolaryngol</source>
<volume>46</volume>
:
<fpage>533</fpage>
<lpage>541</lpage>
<pub-id pub-id-type="pmid">13394286</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0051383-Carriot1">
<label>40</label>
<mixed-citation publication-type="journal">
<name>
<surname>Carriot</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Bryan</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>DiZio</surname>
<given-names>P</given-names>
</name>
,
<name>
<surname>Lackner</surname>
<given-names>JR</given-names>
</name>
(
<year>2011</year>
)
<article-title>The oculogyral illusion: retinal and oculomotor factors</article-title>
.
<source>Exp Brain Res</source>
<volume>209</volume>
:
<fpage>415</fpage>
<lpage>423</lpage>
<pub-id pub-id-type="pmid">21298422</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0051383-Benson2">
<label>41</label>
<mixed-citation publication-type="journal">
<name>
<surname>Benson</surname>
<given-names>AJ</given-names>
</name>
,
<name>
<surname>Brown</surname>
<given-names>SF</given-names>
</name>
(
<year>1989</year>
)
<article-title>Visual display lowers detection threshold of angular, but not linear, whole-body motion stimuli</article-title>
.
<source>Aviat Space Environ Med</source>
<volume>60</volume>
:
<fpage>629</fpage>
<lpage>633</lpage>
<pub-id pub-id-type="pmid">2764843</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0051383-Clark1">
<label>42</label>
<mixed-citation publication-type="journal">
<name>
<surname>Clark</surname>
<given-names>B</given-names>
</name>
,
<name>
<surname>Stewart</surname>
<given-names>JD</given-names>
</name>
(
<year>1968</year>
)
<article-title>Comparison of sensitivity for the perception of bodily rotation and the oculogyral illusion</article-title>
.
<source>Percept Psychophys</source>
<volume>3</volume>
:
<fpage>253</fpage>
<lpage>256</lpage>
</mixed-citation>
</ref>
<ref id="pone.0051383-Dockstader1">
<label>43</label>
<mixed-citation publication-type="journal">
<name>
<surname>Dockstader</surname>
<given-names>SL</given-names>
</name>
(
<year>1971</year>
)
<article-title>Comparison of cupulometric and psychophysical thresholds for perception of rotation and the oculogyral illusion</article-title>
.
<source>Percept Psychophys</source>
<volume>9</volume>
:
<fpage>299</fpage>
<lpage>302</lpage>
</mixed-citation>
</ref>
<ref id="pone.0051383-Huang1">
<label>44</label>
<mixed-citation publication-type="journal">
<name>
<surname>Huang</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Young</surname>
<given-names>LR</given-names>
</name>
(
<year>1981</year>
)
<article-title>Sensation of rotation about a vertical axis with a fixed visual field in different illuminations and in the dark</article-title>
.
<source>Exp Brain Res</source>
<volume>41</volume>
:
<fpage>172</fpage>
<lpage>183</lpage>
<pub-id pub-id-type="pmid">7202613</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0051383-Royden1">
<label>45</label>
<mixed-citation publication-type="journal">
<name>
<surname>Royden</surname>
<given-names>CS</given-names>
</name>
,
<name>
<surname>Banks</surname>
<given-names>MS</given-names>
</name>
,
<name>
<surname>Crowell</surname>
<given-names>JA</given-names>
</name>
(
<year>1992</year>
)
<article-title>The perception of heading during eye movements</article-title>
.
<source>Nature</source>
<volume>360</volume>
:
<fpage>583</fpage>
<lpage>585</lpage>
<pub-id pub-id-type="pmid">1461280</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0051383-Bardy1">
<label>46</label>
<mixed-citation publication-type="journal">
<name>
<surname>Bardy</surname>
<given-names>BG</given-names>
</name>
,
<name>
<surname>Warren</surname>
<given-names>WH</given-names>
<suffix>Jr</suffix>
</name>
,
<name>
<surname>Kay</surname>
<given-names>BA</given-names>
</name>
(
<year>1999</year>
)
<article-title>The role of central and peripheral vision in postural control during walking</article-title>
.
<source>Percept Psychophys</source>
<volume>61</volume>
:
<fpage>1356</fpage>
<lpage>1368</lpage>
<pub-id pub-id-type="pmid">10572464</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0051383-Gu4">
<label>47</label>
<mixed-citation publication-type="journal">
<name>
<surname>Gu</surname>
<given-names>Y</given-names>
</name>
,
<name>
<surname>Watkins</surname>
<given-names>PV</given-names>
</name>
,
<name>
<surname>Angelaki</surname>
<given-names>DE</given-names>
</name>
,
<name>
<surname>DeAngelis</surname>
<given-names>GC</given-names>
</name>
(
<year>2006</year>
)
<article-title>Visual and nonvisual contributions to three-dimensional heading selectivity in the medial superior temporal area</article-title>
.
<source>J Neurosci</source>
<volume>26</volume>
:
<fpage>73</fpage>
<lpage>85</lpage>
<pub-id pub-id-type="pmid">16399674</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0051383-Duffy2">
<label>48</label>
<mixed-citation publication-type="journal">
<name>
<surname>Duffy</surname>
<given-names>CJ</given-names>
</name>
,
<name>
<surname>Wurtz</surname>
<given-names>RH</given-names>
</name>
(
<year>1995</year>
)
<article-title>Response of monkey MST neurons to optic flow stimuli with shifted centers of motion</article-title>
.
<source>J Neurosci</source>
<volume>15</volume>
:
<fpage>5192</fpage>
<lpage>5208</lpage>
<pub-id pub-id-type="pmid">7623145</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0051383-Lappe1">
<label>49</label>
<mixed-citation publication-type="journal">
<name>
<surname>Lappe</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Bremmer</surname>
<given-names>F</given-names>
</name>
,
<name>
<surname>Pekel</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Thiele</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Hoffmann</surname>
<given-names>KP</given-names>
</name>
(
<year>1996</year>
)
<article-title>Optic flow processing in monkey STS: a theoretical and experimental approach</article-title>
.
<source>J Neurosci</source>
<volume>16</volume>
:
<fpage>6265</fpage>
<lpage>6285</lpage>
<pub-id pub-id-type="pmid">8815907</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0051383-Fernandez1">
<label>50</label>
<mixed-citation publication-type="journal">
<name>
<surname>Fernandez</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Goldberg</surname>
<given-names>JM</given-names>
</name>
(
<year>1976</year>
)
<article-title>Physiology of peripheral neurons innervating otolith organs of the squirrel monkey. I. Response to static tilts and to long-duration centrifugal force</article-title>
.
<source>J Neurophysiol</source>
<volume>39</volume>
:
<fpage>970</fpage>
<lpage>984</lpage>
<pub-id pub-id-type="pmid">824412</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0051383-Rosenhall1">
<label>51</label>
<mixed-citation publication-type="journal">
<name>
<surname>Rosenhall</surname>
<given-names>U</given-names>
</name>
(
<year>1972</year>
)
<article-title>Vestibular macular mapping in man</article-title>
.
<source>Ann Otol Rhinol Laryngol</source>
<volume>81</volume>
:
<fpage>339</fpage>
<lpage>351</lpage>
<pub-id pub-id-type="pmid">4113136</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0051383-Rosenhall2">
<label>52</label>
<mixed-citation publication-type="journal">
<name>
<surname>Rosenhall</surname>
<given-names>U</given-names>
</name>
,
<name>
<surname>Engstrom</surname>
<given-names>B</given-names>
</name>
(
<year>1974</year>
)
<article-title>Surface structures of the human vestibular sensory regions</article-title>
.
<source>Acta Otolaryngol Suppl</source>
<volume>319</volume>
:
<fpage>3</fpage>
<lpage>18</lpage>
</mixed-citation>
</ref>
</ref-list>
</back>
</pmc>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Pmc/Curation
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 002239 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd -nk 002239 | SxmlIndent | more

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Pmc
   |étape=   Curation
   |type=    RBID
   |clé=     PMC:3517556
   |texte=   Direction Specific Biases in Human Visual and Vestibular Heading Perception
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Curation/RBID.i   -Sk "pubmed:23236490" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024