Exploration server on haptic devices


Self versus Environment Motion in Postural Control

Internal identifier: 001455 (Ncbi/Merge); previous: 001454; next: 001456

Self versus Environment Motion in Postural Control

Authors: Kalpana Dokka [United States]; Robert V. Kenyon [United States]; Emily A. Keshner [United States]; Konrad P. Kording [United States]

Source:

RBID: PMC:2824754

Abstract

To stabilize our position in space we use visual information as well as non-visual physical motion cues. However, visual cues can be ambiguous: visually perceived motion may be caused by self-movement, movement of the environment, or both. The nervous system must combine the ambiguous visual cues with noisy physical motion cues to resolve this ambiguity and control our body posture. Here we have developed a Bayesian model that formalizes how the nervous system could solve this problem. In this model, the nervous system combines the sensory cues to estimate the movement of the body. We analytically demonstrate that, as long as visual stimulation is fast in comparison to the uncertainty in our perception of body movement, the optimal strategy is to weight visually perceived movement velocities proportional to a power law. We find that this model accounts for the nonlinear influence of experimentally induced visual motion on human postural behavior both in our data and in previously published results.


Url:
DOI: 10.1371/journal.pcbi.1000680
PubMed: 20174552
PubMed Central: 2824754

Links to previous steps (curation, corpus...)


Links to Exploration step

PMC:2824754

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Self versus Environment Motion in Postural Control</title>
<author>
<name sortKey="Dokka, Kalpana" sort="Dokka, Kalpana" uniqKey="Dokka K" first="Kalpana" last="Dokka">Kalpana Dokka</name>
<affiliation wicri:level="2">
<nlm:aff id="aff1">
<addr-line>Department of Anatomy and Neurobiology, Washington University, Saint Louis, Missouri, United States of America</addr-line>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Department of Anatomy and Neurobiology, Washington University, Saint Louis, Missouri</wicri:regionArea>
<placeName>
<region type="state">Missouri (État)</region>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Kenyon, Robert V" sort="Kenyon, Robert V" uniqKey="Kenyon R" first="Robert V." last="Kenyon">Robert V. Kenyon</name>
<affiliation wicri:level="2">
<nlm:aff id="aff2">
<addr-line>Department of Computer Science, University of Illinois at Chicago, Chicago, Illinois, United States of America</addr-line>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Department of Computer Science, University of Illinois at Chicago, Chicago, Illinois</wicri:regionArea>
<placeName>
<region type="state">Illinois</region>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Keshner, Emily A" sort="Keshner, Emily A" uniqKey="Keshner E" first="Emily A." last="Keshner">Emily A. Keshner</name>
<affiliation wicri:level="2">
<nlm:aff id="aff3">
<addr-line>Department of Physical Therapy, Temple University, Philadelphia, Pennsylvania, United States of America</addr-line>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Department of Physical Therapy, Temple University, Philadelphia, Pennsylvania</wicri:regionArea>
<placeName>
<region type="state">Pennsylvanie</region>
</placeName>
</affiliation>
<affiliation wicri:level="2">
<nlm:aff id="aff4">
<addr-line>Department of Electrical and Computer Engineering, Temple University, Philadelphia, Pennsylvania, United States of America</addr-line>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Department of Electrical and Computer Engineering, Temple University, Philadelphia, Pennsylvania</wicri:regionArea>
<placeName>
<region type="state">Pennsylvanie</region>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Kording, Konrad P" sort="Kording, Konrad P" uniqKey="Kording K" first="Konrad P." last="Kording">Konrad P. Kording</name>
<affiliation wicri:level="2">
<nlm:aff id="aff5">
<addr-line>Department of Physical Medicine and Rehabilitation, Rehabilitation Institute of Chicago, Chicago, Illinois, United States of America</addr-line>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Department of Physical Medicine and Rehabilitation, Rehabilitation Institute of Chicago, Chicago, Illinois</wicri:regionArea>
<placeName>
<region type="state">Illinois</region>
</placeName>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">20174552</idno>
<idno type="pmc">2824754</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2824754</idno>
<idno type="RBID">PMC:2824754</idno>
<idno type="doi">10.1371/journal.pcbi.1000680</idno>
<date when="2010">2010</date>
<idno type="wicri:Area/Pmc/Corpus">002169</idno>
<idno type="wicri:Area/Pmc/Curation">002169</idno>
<idno type="wicri:Area/Pmc/Checkpoint">001D57</idno>
<idno type="wicri:Area/Ncbi/Merge">001455</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Self versus Environment Motion in Postural Control</title>
<author>
<name sortKey="Dokka, Kalpana" sort="Dokka, Kalpana" uniqKey="Dokka K" first="Kalpana" last="Dokka">Kalpana Dokka</name>
<affiliation wicri:level="2">
<nlm:aff id="aff1">
<addr-line>Department of Anatomy and Neurobiology, Washington University, Saint Louis, Missouri, United States of America</addr-line>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Department of Anatomy and Neurobiology, Washington University, Saint Louis, Missouri</wicri:regionArea>
<placeName>
<region type="state">Missouri (État)</region>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Kenyon, Robert V" sort="Kenyon, Robert V" uniqKey="Kenyon R" first="Robert V." last="Kenyon">Robert V. Kenyon</name>
<affiliation wicri:level="2">
<nlm:aff id="aff2">
<addr-line>Department of Computer Science, University of Illinois at Chicago, Chicago, Illinois, United States of America</addr-line>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Department of Computer Science, University of Illinois at Chicago, Chicago, Illinois</wicri:regionArea>
<placeName>
<region type="state">Illinois</region>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Keshner, Emily A" sort="Keshner, Emily A" uniqKey="Keshner E" first="Emily A." last="Keshner">Emily A. Keshner</name>
<affiliation wicri:level="2">
<nlm:aff id="aff3">
<addr-line>Department of Physical Therapy, Temple University, Philadelphia, Pennsylvania, United States of America</addr-line>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Department of Physical Therapy, Temple University, Philadelphia, Pennsylvania</wicri:regionArea>
<placeName>
<region type="state">Pennsylvanie</region>
</placeName>
</affiliation>
<affiliation wicri:level="2">
<nlm:aff id="aff4">
<addr-line>Department of Electrical and Computer Engineering, Temple University, Philadelphia, Pennsylvania, United States of America</addr-line>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Department of Electrical and Computer Engineering, Temple University, Philadelphia, Pennsylvania</wicri:regionArea>
<placeName>
<region type="state">Pennsylvanie</region>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Kording, Konrad P" sort="Kording, Konrad P" uniqKey="Kording K" first="Konrad P." last="Kording">Konrad P. Kording</name>
<affiliation wicri:level="2">
<nlm:aff id="aff5">
<addr-line>Department of Physical Medicine and Rehabilitation, Rehabilitation Institute of Chicago, Chicago, Illinois, United States of America</addr-line>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Department of Physical Medicine and Rehabilitation, Rehabilitation Institute of Chicago, Chicago, Illinois</wicri:regionArea>
<placeName>
<region type="state">Illinois</region>
</placeName>
</affiliation>
</author>
</analytic>
<series>
<title level="j">PLoS Computational Biology</title>
<idno type="ISSN">1553-734X</idno>
<idno type="eISSN">1553-7358</idno>
<imprint>
<date when="2010">2010</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>To stabilize our position in space we use visual information as well as non-visual physical motion cues. However, visual cues can be ambiguous: visually perceived motion may be caused by self-movement, movement of the environment, or both. The nervous system must combine the ambiguous visual cues with noisy physical motion cues to resolve this ambiguity and control our body posture. Here we have developed a Bayesian model that formalizes how the nervous system could solve this problem. In this model, the nervous system combines the sensory cues to estimate the movement of the body. We analytically demonstrate that, as long as visual stimulation is fast in comparison to the uncertainty in our perception of body movement, the optimal strategy is to weight visually perceived movement velocities proportional to a power law. We find that this model accounts for the nonlinear influence of experimentally induced visual motion on human postural behavior both in our data and in previously published results.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Fushiki, H" uniqKey="Fushiki H">H Fushiki</name>
</author>
<author>
<name sortKey="Kobayashi, K" uniqKey="Kobayashi K">K Kobayashi</name>
</author>
<author>
<name sortKey="Asai, M" uniqKey="Asai M">M Asai</name>
</author>
<author>
<name sortKey="Watanabe, Y" uniqKey="Watanabe Y">Y Watanabe</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Thurrell, Ae" uniqKey="Thurrell A">AE Thurrell</name>
</author>
<author>
<name sortKey="Bronstein, Am" uniqKey="Bronstein A">AM Bronstein</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Keshner, Ea" uniqKey="Keshner E">EA Keshner</name>
</author>
<author>
<name sortKey="Dokka, K" uniqKey="Dokka K">K Dokka</name>
</author>
<author>
<name sortKey="Kenyon, Rv" uniqKey="Kenyon R">RV Kenyon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Peterka, Rj" uniqKey="Peterka R">RJ Peterka</name>
</author>
<author>
<name sortKey="Benolken, Ms" uniqKey="Benolken M">MS Benolken</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mergner, T" uniqKey="Mergner T">T Mergner</name>
</author>
<author>
<name sortKey="Schweigart, G" uniqKey="Schweigart G">G Schweigart</name>
</author>
<author>
<name sortKey="Maurer, C" uniqKey="Maurer C">C Maurer</name>
</author>
<author>
<name sortKey="Blumle, A" uniqKey="Blumle A">A Blumle</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dokka, K" uniqKey="Dokka K">K Dokka</name>
</author>
<author>
<name sortKey="Kenyon, Rv" uniqKey="Kenyon R">RV Kenyon</name>
</author>
<author>
<name sortKey="Keshner, Ea" uniqKey="Keshner E">EA Keshner</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Keshner, Ea" uniqKey="Keshner E">EA Keshner</name>
</author>
<author>
<name sortKey="Kenyon, Rv" uniqKey="Kenyon R">RV Kenyon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Peterka, Rj" uniqKey="Peterka R">RJ Peterka</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jeka, J" uniqKey="Jeka J">J Jeka</name>
</author>
<author>
<name sortKey="Oie, Ks" uniqKey="Oie K">KS Oie</name>
</author>
<author>
<name sortKey="Kiemel, T" uniqKey="Kiemel T">T Kiemel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lee, Dn" uniqKey="Lee D">DN Lee</name>
</author>
<author>
<name sortKey="Lishman, Jr" uniqKey="Lishman J">JR Lishman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lishman, Jr" uniqKey="Lishman J">JR Lishman</name>
</author>
<author>
<name sortKey="Lee, Dn" uniqKey="Lee D">DN Lee</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wright, Wg" uniqKey="Wright W">WG Wright</name>
</author>
<author>
<name sortKey="Dizio, P" uniqKey="Dizio P">P DiZio</name>
</author>
<author>
<name sortKey="Lackner, Jr" uniqKey="Lackner J">JR Lackner</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Horak, Fb" uniqKey="Horak F">FB Horak</name>
</author>
<author>
<name sortKey="Macpherson, Jm" uniqKey="Macpherson J">JM MacPherson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Oie, Ks" uniqKey="Oie K">KS Oie</name>
</author>
<author>
<name sortKey="Kiemel, T" uniqKey="Kiemel T">T Kiemel</name>
</author>
<author>
<name sortKey="Jeka, Jj" uniqKey="Jeka J">JJ Jeka</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Keshner, Ea" uniqKey="Keshner E">EA Keshner</name>
</author>
<author>
<name sortKey="Kenyon, Rv" uniqKey="Kenyon R">RV Kenyon</name>
</author>
<author>
<name sortKey="Langston, J" uniqKey="Langston J">J Langston</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Blumle, A" uniqKey="Blumle A">A Blumle</name>
</author>
<author>
<name sortKey="Maurer, C" uniqKey="Maurer C">C Maurer</name>
</author>
<author>
<name sortKey="Schweigart, G" uniqKey="Schweigart G">G Schweigart</name>
</author>
<author>
<name sortKey="Mergner, T" uniqKey="Mergner T">T Mergner</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Landy, Ms" uniqKey="Landy M">MS Landy</name>
</author>
<author>
<name sortKey="Maloney, Lt" uniqKey="Maloney L">LT Maloney</name>
</author>
<author>
<name sortKey="Johnston, Eb" uniqKey="Johnston E">EB Johnston</name>
</author>
<author>
<name sortKey="Young, M" uniqKey="Young M">M Young</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sober, Sj" uniqKey="Sober S">SJ Sober</name>
</author>
<author>
<name sortKey="Sabes, Pn" uniqKey="Sabes P">PN Sabes</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ernst, Mo" uniqKey="Ernst M">MO Ernst</name>
</author>
<author>
<name sortKey="Banks, Ms" uniqKey="Banks M">MS Banks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Beers, Rj" uniqKey="Van Beers R">RJ van Beers</name>
</author>
<author>
<name sortKey="Sittig, Ac" uniqKey="Sittig A">AC Sittig</name>
</author>
<author>
<name sortKey="Denier Van Der Gon, Jj" uniqKey="Denier Van Der Gon J">JJ Denier van der Gon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ernst, Mo" uniqKey="Ernst M">MO Ernst</name>
</author>
<author>
<name sortKey="Bulthoff, Hh" uniqKey="Bulthoff H">HH Bulthoff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Knill, Dc" uniqKey="Knill D">DC Knill</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wolpert, Dm" uniqKey="Wolpert D">DM Wolpert</name>
</author>
<author>
<name sortKey="Ghahramani, Z" uniqKey="Ghahramani Z">Z Ghahramani</name>
</author>
<author>
<name sortKey="Jordan, Mi" uniqKey="Jordan M">MI Jordan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Knill, Dc" uniqKey="Knill D">DC Knill</name>
</author>
<author>
<name sortKey="Richards, W" uniqKey="Richards W">W Richards</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hartung, B" uniqKey="Hartung B">B Hartung</name>
</author>
<author>
<name sortKey="Schrater, Pr" uniqKey="Schrater P">PR Schrater</name>
</author>
<author>
<name sortKey="Bulthoff, Hh" uniqKey="Bulthoff H">HH Bulthoff</name>
</author>
<author>
<name sortKey="Kersten, D" uniqKey="Kersten D">D Kersten</name>
</author>
<author>
<name sortKey="Franz, Vh" uniqKey="Franz V">VH Franz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kording, Kp" uniqKey="Kording K">KP Kording</name>
</author>
<author>
<name sortKey="Wolpert, Dm" uniqKey="Wolpert D">DM Wolpert</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Simoncelli, Ep" uniqKey="Simoncelli E">EP Simoncelli</name>
</author>
<author>
<name sortKey="Adelson, Eh" uniqKey="Adelson E">EH Adelson</name>
</author>
<author>
<name sortKey="Heeger, Dj" uniqKey="Heeger D">DJ Heeger</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stocker, Aa" uniqKey="Stocker A">AA Stocker</name>
</author>
<author>
<name sortKey="Simoncelli, Ep" uniqKey="Simoncelli E">EP Simoncelli</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stocker, Aa" uniqKey="Stocker A">AA Stocker</name>
</author>
<author>
<name sortKey="Simoncelli, Ep" uniqKey="Simoncelli E">EP Simoncelli</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Weiss, Y" uniqKey="Weiss Y">Y Weiss</name>
</author>
<author>
<name sortKey="Simoncelli, Ep" uniqKey="Simoncelli E">EP Simoncelli</name>
</author>
<author>
<name sortKey="Adelson, Eh" uniqKey="Adelson E">EH Adelson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fitzpatrick, R" uniqKey="Fitzpatrick R">R Fitzpatrick</name>
</author>
<author>
<name sortKey="Mccloskey, Di" uniqKey="Mccloskey D">DI McCloskey</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gigerenzer, G" uniqKey="Gigerenzer G">G Gigerenzer</name>
</author>
<author>
<name sortKey="Todd, Pm" uniqKey="Todd P">PM Todd</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mackay, D" uniqKey="Mackay D">D Mackay</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kuo, Ad" uniqKey="Kuo A">AD Kuo</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cohen, H" uniqKey="Cohen H">H Cohen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jacob, Rg" uniqKey="Jacob R">RG Jacob</name>
</author>
<author>
<name sortKey="Furman, Jm" uniqKey="Furman J">JM Furman</name>
</author>
<author>
<name sortKey="Durrant, Jd" uniqKey="Durrant J">JD Durrant</name>
</author>
<author>
<name sortKey="Turner, Sm" uniqKey="Turner S">SM Turner</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Beidel, Dc" uniqKey="Beidel D">DC Beidel</name>
</author>
<author>
<name sortKey="Horak, Fb" uniqKey="Horak F">FB Horak</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Der Kooij, H" uniqKey="Van Der Kooij H">H van der Kooij</name>
</author>
<author>
<name sortKey="Jacobs, R" uniqKey="Jacobs R">R Jacobs</name>
</author>
<author>
<name sortKey="Koopman, B" uniqKey="Koopman B">B Koopman</name>
</author>
<author>
<name sortKey="Grootenboer, H" uniqKey="Grootenboer H">H Grootenboer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Der Kooij, H" uniqKey="Van Der Kooij H">H van der Kooij</name>
</author>
<author>
<name sortKey="Jacobs, R" uniqKey="Jacobs R">R Jacobs</name>
</author>
<author>
<name sortKey="Koopman, B" uniqKey="Koopman B">B Koopman</name>
</author>
<author>
<name sortKey="Van Der Helm, F" uniqKey="Van Der Helm F">F van der Helm</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Carver, S" uniqKey="Carver S">S Carver</name>
</author>
<author>
<name sortKey="Kiemel, T" uniqKey="Kiemel T">T Kiemel</name>
</author>
<author>
<name sortKey="Van Der Kooij, H" uniqKey="Van Der Kooij H">H van der Kooij</name>
</author>
<author>
<name sortKey="Jeka, Jj" uniqKey="Jeka J">JJ Jeka</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ma, Wj" uniqKey="Ma W">WJ Ma</name>
</author>
<author>
<name sortKey="Beck, Jm" uniqKey="Beck J">JM Beck</name>
</author>
<author>
<name sortKey="Latham, Pe" uniqKey="Latham P">PE Latham</name>
</author>
<author>
<name sortKey="Pouget, A" uniqKey="Pouget A">A Pouget</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Winter, Da" uniqKey="Winter D">DA Winter</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bryan, As" uniqKey="Bryan A">AS Bryan</name>
</author>
<author>
<name sortKey="Angelaki, De" uniqKey="Angelaki D">DE Angelaki</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Liu, S" uniqKey="Liu S">S Liu</name>
</author>
<author>
<name sortKey="Angelaki, De" uniqKey="Angelaki D">DE Angelaki</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tanahashi, S" uniqKey="Tanahashi S">S Tanahashi</name>
</author>
<author>
<name sortKey="Ujike, H" uniqKey="Ujike H">H Ujike</name>
</author>
<author>
<name sortKey="Kozawa, R" uniqKey="Kozawa R">R Kozawa</name>
</author>
<author>
<name sortKey="Ukai, K" uniqKey="Ukai K">K Ukai</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Collins, Jj" uniqKey="Collins J">JJ Collins</name>
</author>
<author>
<name sortKey="De Luca, Cj" uniqKey="De Luca C">CJ De Luca</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">PLoS Comput Biol</journal-id>
<journal-id journal-id-type="publisher-id">plos</journal-id>
<journal-id journal-id-type="pmc">ploscomp</journal-id>
<journal-title-group>
<journal-title>PLoS Computational Biology</journal-title>
</journal-title-group>
<issn pub-type="ppub">1553-734X</issn>
<issn pub-type="epub">1553-7358</issn>
<publisher>
<publisher-name>Public Library of Science</publisher-name>
<publisher-loc>San Francisco, USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">20174552</article-id>
<article-id pub-id-type="pmc">2824754</article-id>
<article-id pub-id-type="publisher-id">09-PLCB-RA-0870R3</article-id>
<article-id pub-id-type="doi">10.1371/journal.pcbi.1000680</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Research Article</subject>
</subj-group>
<subj-group subj-group-type="Discipline">
<subject>Computational Biology/Computational Neuroscience</subject>
<subject>Neuroscience/Behavioral Neuroscience</subject>
<subject>Neuroscience/Cognitive Neuroscience</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Self versus Environment Motion in Postural Control</article-title>
<alt-title alt-title-type="running-head">Bayesian Integration in Postural Control</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Dokka</surname>
<given-names>Kalpana</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="corresp" rid="cor1">
<sup>*</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Kenyon</surname>
<given-names>Robert V.</given-names>
</name>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Keshner</surname>
<given-names>Emily A.</given-names>
</name>
<xref ref-type="aff" rid="aff3">
<sup>3</sup>
</xref>
<xref ref-type="aff" rid="aff4">
<sup>4</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Kording</surname>
<given-names>Konrad P.</given-names>
</name>
<xref ref-type="aff" rid="aff5">
<sup>5</sup>
</xref>
</contrib>
</contrib-group>
<aff id="aff1">
<label>1</label>
<addr-line>Department of Anatomy and Neurobiology, Washington University, Saint Louis, Missouri, United States of America</addr-line>
</aff>
<aff id="aff2">
<label>2</label>
<addr-line>Department of Computer Science, University of Illinois at Chicago, Chicago, Illinois, United States of America</addr-line>
</aff>
<aff id="aff3">
<label>3</label>
<addr-line>Department of Physical Therapy, Temple University, Philadelphia, Pennsylvania, United States of America</addr-line>
</aff>
<aff id="aff4">
<label>4</label>
<addr-line>Department of Electrical and Computer Engineering, Temple University, Philadelphia, Pennsylvania, United States of America</addr-line>
</aff>
<aff id="aff5">
<label>5</label>
<addr-line>Department of Physical Medicine and Rehabilitation, Rehabilitation Institute of Chicago, Chicago, Illinois, United States of America</addr-line>
</aff>
<contrib-group>
<contrib contrib-type="editor">
<name>
<surname>Diedrichsen</surname>
<given-names>Jörn</given-names>
</name>
<role>Editor</role>
<xref ref-type="aff" rid="edit1"></xref>
</contrib>
</contrib-group>
<aff id="edit1">University College London, United Kingdom</aff>
<author-notes>
<corresp id="cor1">* E-mail:
<email>kalpana@pcg.wustl.edu</email>
</corresp>
<fn fn-type="con">
<p>Conceived and designed the experiments: KD RVK EAK. Performed the experiments: KD. Analyzed the data: KD RVK EAK KPK. Wrote the paper: KD RVK EAK KPK.</p>
</fn>
</author-notes>
<pub-date pub-type="collection">
<month>2</month>
<year>2010</year>
</pub-date>
<pmc-comment> Fake ppub added to accomodate plos workflow change from 03/2008 and 03/2009 </pmc-comment>
<pub-date pub-type="ppub">
<month>2</month>
<year>2010</year>
</pub-date>
<pub-date pub-type="epub">
<day>19</day>
<month>2</month>
<year>2010</year>
</pub-date>
<volume>6</volume>
<issue>2</issue>
<elocation-id>e1000680</elocation-id>
<history>
<date date-type="received">
<day>23</day>
<month>7</month>
<year>2009</year>
</date>
<date date-type="accepted">
<day>19</day>
<month>1</month>
<year>2010</year>
</date>
</history>
<permissions>
<copyright-statement>Dokka et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.</copyright-statement>
</permissions>
<abstract>
<p>To stabilize our position in space we use visual information as well as non-visual physical motion cues. However, visual cues can be ambiguous: visually perceived motion may be caused by self-movement, movement of the environment, or both. The nervous system must combine the ambiguous visual cues with noisy physical motion cues to resolve this ambiguity and control our body posture. Here we have developed a Bayesian model that formalizes how the nervous system could solve this problem. In this model, the nervous system combines the sensory cues to estimate the movement of the body. We analytically demonstrate that, as long as visual stimulation is fast in comparison to the uncertainty in our perception of body movement, the optimal strategy is to weight visually perceived movement velocities proportional to a power law. We find that this model accounts for the nonlinear influence of experimentally induced visual motion on human postural behavior both in our data and in previously published results.</p>
</abstract>
<abstract abstract-type="summary">
<title>Author Summary</title>
<p>Visual cues typically provide ambiguous information about the orientation of our body in space. When we perceive relative motion between ourselves and the environment, it could have been caused by our movement within the environment, or the movement of the environment around us, or the simultaneous movements of both our body and the environment. The nervous system must resolve this ambiguity for efficient control of our body posture during stance. Here, we show that the nervous system could solve this problem by optimally combining visual signals with physical motion cues. Sensory ambiguity is a central problem during cue combination. Our results thus have implications on how the nervous system could resolve sensory ambiguity in other cue combination tasks.</p>
</abstract>
<counts>
<page-count count="7"></page-count>
</counts>
</article-meta>
</front>
<body>
<sec id="s1">
<title>Introduction</title>
<p>Our visual system senses the movement of objects relative to ourselves. Barring contextual information, a car approaching us rapidly while we stand still may produce the same visual motion cues as if we and the car were approaching each other. The nervous system thus needs to deal with this problem of ambiguity, which will be reflected in the way we control our body posture
<xref ref-type="bibr" rid="pcbi.1000680-Fushiki1">[1]</xref>
<xref ref-type="bibr" rid="pcbi.1000680-Keshner1">[3]</xref>
. Consequently, neuroscientists have extensively studied such situations. In such studies, a subject typically stands in front of a visual display and postural reactions to varied movements of the displayed visual scene are measured
<xref ref-type="bibr" rid="pcbi.1000680-Peterka1">[4]</xref>
<xref ref-type="bibr" rid="pcbi.1000680-Lishman1">[11]</xref>
. Even in the absence of direct physical perturbations, subjects actively produce compensatory body movements in response to the movement of the visual scene. This indicates that subjects attribute part of the visual motion to their own body while they resolve the ambiguity in visual stimuli.</p>
<p>Here we constructed a Bayesian attribution model (
<xref ref-type="fig" rid="pcbi-1000680-g001">Fig. 1A</xref>
) to examine how the nervous system may solve this problem of sensory ambiguity. This model shows that optimal solutions will generally take on the form of power laws. We found that the results from experiments with both healthy subjects and patients suffering from vestibular deficits are well fit by power laws. The nervous system thus appears to combine visual and physical motion cues to estimate our body movement for the control of posture in a fashion that is close to optimal.</p>
<fig id="pcbi-1000680-g001" position="float">
<object-id pub-id-type="doi">10.1371/journal.pcbi.1000680.g001</object-id>
<label>Figure 1</label>
<caption>
<title>Sensory ambiguity influences postural behavior.</title>
<p>(A) Graphical model, a compact way of describing the assumptions made by a Bayesian model.
<inline-formula>
<inline-graphic xlink:href="pcbi.1000680.e001.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is the velocity of body motion, while
<inline-formula>
<inline-graphic xlink:href="pcbi.1000680.e002.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is the velocity of the environment motion.
<inline-formula>
<inline-graphic xlink:href="pcbi.1000680.e003.jpg" mimetype="image"></inline-graphic>
</inline-formula>
represents a noisy estimate of the body velocity that is sensed by kinesthetic and vestibular signals.
<inline-formula>
<inline-graphic xlink:href="pcbi.1000680.e004.jpg" mimetype="image"></inline-graphic>
</inline-formula>
represents the visually perceived velocity of the relative motion between the body and the environment. The attribution model estimates
<inline-formula>
<inline-graphic xlink:href="pcbi.1000680.e005.jpg" mimetype="image"></inline-graphic>
</inline-formula>
from these perceived cues. (B) Distribution of body velocities during unperturbed stance, averaged across subjects tested in our experiment. (C) Experimental data and model fits for healthy subjects tested in our experiment.</p>
</caption>
<graphic xlink:href="pcbi.1000680.g001"></graphic>
</fig>
</sec>
<sec id="s2">
<title>Results</title>
<p>To test our Bayesian attribution model, we considered data from two published experiments with healthy subjects
<xref ref-type="bibr" rid="pcbi.1000680-Peterka1">[4]</xref>
,
<xref ref-type="bibr" rid="pcbi.1000680-Mergner1">[5]</xref>
as well as a new experiment we performed to cover the range of visual scene velocities that are relevant to the model predictions. Any purely linear model, for example a Kalman controller, predicts that the gain of the postural response, which is the influence of visual scene motion on the amplitude of postural reactions, remains constant. For these datasets, however, the gain of the postural response decreased with increasing velocities of visual scene motion (
<xref ref-type="fig" rid="pcbi-1000680-g001">Fig. 1C</xref>
,
<xref ref-type="fig" rid="pcbi-1000680-g002">2A and 2C</xref>
; slope = −0.78±0.15 s.d. across datasets,
<italic>p</italic>
<0.005). At low velocities, the gain was close to one, which would be expected if the nervous system viewed the body as the sole source of the visually perceived motion. At higher velocities, though, the gain decreased, which would be expected if the nervous system no longer attributed all of the visually perceived motion to the body. The nervous system thus does not appear to simply assume that visually perceived motion is fully attributable to the body.</p>
<fig id="pcbi-1000680-g002" position="float">
<object-id pub-id-type="doi">10.1371/journal.pcbi.1000680.g002</object-id>
<label>Figure 2</label>
<caption>
<title>Gain of the postural response of healthy subjects and patients with vestibular loss.</title>
<p>(A) and (B) represent the experimental data and model fits of healthy subjects and patients tested in ref. 5 (Mergner et al. 2005). (C) and (D) represent the experimental data and model fits of healthy subjects and patients tested in ref. 4 (Peterka and Benolken 1995).</p>
</caption>
<graphic xlink:href="pcbi.1000680.g002"></graphic>
</fig>
<p>To explain this nonlinear influence of visual scene velocity on the postural response, we constructed a model that describes how the nervous system could solve the problem of sensory ambiguity (
<xref ref-type="fig" rid="pcbi-1000680-g001">Fig. 1A</xref>
). The nervous system can combine visual cues with physical motion cues, such as vestibular and kinesthetic inputs, to estimate our body movement
<xref ref-type="bibr" rid="pcbi.1000680-Wright1">[12]</xref>
<xref ref-type="bibr" rid="pcbi.1000680-Blumle1">[16]</xref>
. However, our sensory information is not perfect and recent studies have emphasized the importance of uncertainty in such cue combination problems
<xref ref-type="bibr" rid="pcbi.1000680-Landy1">[17]</xref>
<xref ref-type="bibr" rid="pcbi.1000680-Ernst1">[19]</xref>
. Visual information has little noise when compared with physical motion cues
<xref ref-type="bibr" rid="pcbi.1000680-vanBeers1">[20]</xref>
. However, it is ambiguous as it does not directly reveal if the body, the environment or both are the source of the visually perceived movement. In comparison to visual cues, physical motion cues are typically more noisy but they are not characterized by the same kind of ambiguity. For these reasons, the nervous system can never be certain about the velocity of the body movement, but can at best estimate it using principles of optimal Bayesian calculations
<xref ref-type="bibr" rid="pcbi.1000680-Ernst2">[21]</xref>
<xref ref-type="bibr" rid="pcbi.1000680-Hartung1">[25]</xref>
. To solve the ambiguity problem, the model estimated the velocity of the body's movement for which the perceived visual and physical motion cues were most likely.</p>
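A minimal numerical sketch of this estimation step is given below. It is not the authors' code: the function name, the grid search, and all parameter values (sigma_phys, sigma_body, alpha, gamma) are illustrative assumptions; only the structure (Gaussian physical-cue likelihood, Gaussian prior over body motion, sparse prior over environment motion) follows the model described above.

import numpy as np

def map_body_velocity(v_vis, v_phys=0.0,
                      sigma_phys=2.0,   # assumed noise (s.d.) of the physical motion cues, cm/s
                      sigma_body=1.0,   # assumed width of the Gaussian prior over body velocity, cm/s
                      alpha=0.5, gamma=0.8):  # assumed parameters of the sparse environment prior
    # Grid search for the body velocity that maximizes the log posterior:
    # Gaussian likelihood of the physical cue, Gaussian prior over body motion,
    # and sparse prior exp(-alpha*|v_env|**gamma) evaluated at v_env = v_vis - v_body.
    v_body = np.linspace(-50.0, 250.0, 20001)
    log_post = (-(v_phys - v_body) ** 2 / (2 * sigma_phys ** 2)
                - v_body ** 2 / (2 * sigma_body ** 2)
                - alpha * np.abs(v_vis - v_body) ** gamma)
    return v_body[np.argmax(log_post)]

for v in [1.2, 3.7, 31.0, 125.0, 188.0]:   # scene velocities used in the experiment (cm/s)
    v_hat = map_body_velocity(v)
    print(f"scene {v:6.1f} cm/s -> estimated body velocity {v_hat:5.2f} cm/s, gain {v_hat / v:4.2f}")

With these illustrative parameters the estimated gain decreases as the scene velocity grows, mirroring the qualitative behavior described in the text.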
<p>Such estimation is only possible if the nervous system has additional information about two factors: typical movements in the environment and typical uncertainty about body movements
<xref ref-type="bibr" rid="pcbi.1000680-Kording1">[26]</xref>
. For example, if a car sometimes moves fast and our body typically moves slowly, then the nervous system would naturally attribute fast movement to the car and slow movement to our body. Indeed, recent research has indicated that human subjects use the fact that slow rather than fast movements are more frequent in the environment when they estimate velocities of moving visual objects
<xref ref-type="bibr" rid="pcbi.1000680-Simoncelli1">[27]</xref>
<xref ref-type="bibr" rid="pcbi.1000680-Weiss1">[30]</xref>
. This distribution, used by human subjects, is called a prior. Following these studies, our model used a sparse prior for movements in the visual environment, that is, a prior that assigns high probability to slower movements in the environment and low probability to faster movements in the environment
<xref ref-type="bibr" rid="pcbi.1000680-Stocker2">[29]</xref>
.</p>
<p>We wanted to estimate the form of the prior over body movements from our experimental data. We found that when subjects maintained an upright body posture while viewing a stationary visual scene, the distribution of their body velocity was best described by a Gaussian (
<xref ref-type="fig" rid="pcbi-1000680-g001">Fig. 1B</xref>
). Therefore, we used a Gaussian to represent the prior over body velocity.</p>
<p>The attribution model derives from five assumptions. We assume the above sparse prior over movements in the environment
<xref ref-type="bibr" rid="pcbi.1000680-Stocker2">[29]</xref>
. We assume that for the movement of visual environment that is vivid and has high contrast, visual cues provide an estimate of relative movement that has vanishing uncertainty. We assume a Gaussian for the prior over body movement (see
<xref ref-type="sec" rid="s4">Methods</xref>
for details). We also assume a Gaussian for the likelihood of the physical motion cues which indicate that the body is not actually moving and is close to the upright position. Lastly we assume that visual scene velocities are large in comparison to the uncertainty in our detection of our body movements
<xref ref-type="bibr" rid="pcbi.1000680-Fitzpatrick1">[31]</xref>
. Under these assumptions, we can analytically derive that the best solution has a gain that varies as a power law with the visual scene velocity (see
<xref ref-type="sec" rid="s4">Methods</xref>
for details). We thus obtain a compact, two-parameter model that predicts the influence of visual perturbations on the estimates of body movement.</p>
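As a hedged illustration of this compact functional form, a two-parameter power law gain(v) = k * v**(-b) can be fitted to (scene velocity, gain) pairs by linear regression in log-log coordinates. The gain values below are placeholders, not the published data.

import numpy as np

v = np.array([1.2, 3.7, 31.0, 125.0, 188.0])     # scene velocities (cm/s)
gain = np.array([0.95, 0.80, 0.30, 0.12, 0.09])  # placeholder gains (not the measured data)

# least-squares fit of log(gain) = log(k) - b*log(v)
slope, intercept = np.polyfit(np.log(v), np.log(gain), 1)
k, b = np.exp(intercept), -slope
print(f"fitted power law: gain(v) ~ {k:.2f} * v**(-{b:.2f})")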
<p>Our attribution model calculates how the nervous system should combine information from visual and physical senses to optimally estimate the velocity of body movement. However, the nervous system does not need to solve its problems in an optimal way, but may use simple heuristics
<xref ref-type="bibr" rid="pcbi.1000680-Gigerenzer1">[32]</xref>
. We thus proceeded to compare the attribution model with other models in its ability to explain the decrease in the gain of postural reactions. For this purpose, we compared models using the Bayesian Information Criterion (BIC) which is a technique that allows the comparison of models with different numbers of free parameters
<xref ref-type="bibr" rid="pcbi.1000680-Mackay1">[33]</xref>
. For the gains observed in our experiment (
<xref ref-type="fig" rid="pcbi-1000680-g001">Fig. 1C</xref>
), the Bayesian model had a BIC of −7.5±1.84 (mean BIC±s.e.m. across subjects). We found that a linear model that predicted constant gain of postural reactions could not explain the observed results (BIC = 1.08±0.59,
<italic>p</italic>
<0.001, paired
<italic>t</italic>
-test between BIC values).</p>
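For concreteness, a hedged sketch of such a BIC comparison, using the Gaussian-residual form BIC = n*ln(RSS/n) + k*ln(n) (lower is better) and the same placeholder gains as in the sketch above rather than the published data:

import numpy as np

def bic_gaussian(y, y_hat, n_params):
    # BIC for a least-squares fit with Gaussian residuals: n*ln(RSS/n) + k*ln(n);
    # lower values indicate a better trade-off between fit and model complexity.
    n = len(y)
    rss = float(np.sum((np.asarray(y) - np.asarray(y_hat)) ** 2))
    return n * np.log(rss / n) + n_params * np.log(n)

v = np.array([1.2, 3.7, 31.0, 125.0, 188.0])     # scene velocities (cm/s)
gain = np.array([0.95, 0.80, 0.30, 0.12, 0.09])  # placeholder gains

constant_model = np.full_like(gain, gain.mean())             # constant gain: 1 parameter
slope, intercept = np.polyfit(np.log(v), np.log(gain), 1)
power_model = np.exp(intercept) * v ** slope                 # power law: 2 parameters

print("BIC, constant gain:", round(bic_gaussian(gain, constant_model, 1), 2))
print("BIC, power law:    ", round(bic_gaussian(gain, power_model, 2), 2))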
<p>We then considered a model in which the amplitude of postural response increased logarithmically up to a threshold stimulus velocity and then saturated. This model predicted the response gains observed at higher scene velocities more poorly than the attribution model (BIC = −3.45±0.99,
<italic>p</italic>
<0.05). We also tested another model in which the gain was initially constant but decreased monotonically with increasing visual scene velocities. This model did worse at predicting the gain than the Bayesian model (BIC = 5.82±0.04,
<italic>p</italic>
<0.001). Thus, the Bayesian model that estimated the velocity of the body movement best fit the available data.</p>
<p>The attribution model can also be applied to human behavior in disease states. Patients with bilateral vestibular loss have vestibular cues of inferior quality
<xref ref-type="bibr" rid="pcbi.1000680-Kuo1">[34]</xref>
. The attribution model suggests that these patients' postural behavior would be based more strongly on visual feedback and that their gain should decrease less steeply as a function of stimulus velocity. Indeed, patients tested in previous studies
<xref ref-type="bibr" rid="pcbi.1000680-Peterka1">[4]</xref>
,
<xref ref-type="bibr" rid="pcbi.1000680-Mergner1">[5]</xref>
showed a greater influence of vision on posture and gains that decreased less steeply (
<xref ref-type="fig" rid="pcbi-1000680-g002">Fig. 2B, 2D</xref>
slope = −0.22±0.1 s.d. across datasets,
<italic>p</italic>
<0.005) when compared with healthy subjects, a phenomenon that is well mimicked by the attribution model.</p>
<p>The postural behavior of patients showed marked differences from that of healthy subjects
<xref ref-type="bibr" rid="pcbi.1000680-Peterka1">[4]</xref>
. At low visual scene velocities, patients and healthy subjects had similar gain values. At higher scene velocities, however, patients exhibited larger gains than healthy subjects. If the postural responses in patients were influenced only by elevated noise in the vestibular channels, the gain should vary in a similar manner at all visual scene velocities; that is, the gain of patients should be higher than that of healthy subjects at all visual scene velocities. That patients' gain increased only at higher scene velocities instead points to a change in how patients interact with large movements in the visual environment. In our model, the best fit to the data of healthy subjects corresponds to a prior of about
<inline-formula>
<inline-graphic xlink:href="pcbi.1000680.e006.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, while the fit to the patients' data corresponds to a prior of
<inline-formula>
<inline-graphic xlink:href="pcbi.1000680.e007.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(see
<xref ref-type="sec" rid="s4">Methods</xref>
for details). It would thus appear that, rather than a sparse prior, patients have a prior that is closer to a Gaussian. It is not surprising that patients interact with the extrinsic environment differently from healthy subjects. In fact, such patients can develop space and motion phobia, particularly in situations where there is a conflict between visual and vestibular cues, and may actively avoid such conflicting environments
<xref ref-type="bibr" rid="pcbi.1000680-Cohen1">[35]</xref>
<xref ref-type="bibr" rid="pcbi.1000680-Beidel1">[37]</xref>
. Our model fits suggest that patients may seek out environments that are devoid of fast movement of large field stimuli. This is a prediction that can be tested in future research, for example by equipping patients with telemetric devices with cameras that record velocities in their environment.</p>
</sec>
<sec id="s3">
<title>Discussion</title>
<p>When we visually perceive displacement between ourselves and the environment, it may be caused by the movement of our body, movement of the environment, or both. In this paper, we have presented a model that formalizes how the nervous system could solve the problems of both ambiguity (self vs environment) and noise in perceived sensory cues. We suggest that the nervous system could solve these problems by estimating the movement of the body as per the principles of Bayesian calculations. We found that the model can account for the gain of postural responses when both healthy subjects and patients with vestibular loss viewed movement of a visual scene at various velocities. Importantly, our model predicts a simple functional form, power laws, as the best cue combination strategy. This makes it easy to test predictions without having to implement complicated estimation procedures.</p>
<p>Postural stabilization during stance is a two-step process comprising estimation and control, and in this paper we have focused only on estimation. Computational models in the past have examined how the nervous system implements this two-step process and have explained a wide range of data
<xref ref-type="bibr" rid="pcbi.1000680-Mergner1">[5]</xref>
,
<xref ref-type="bibr" rid="pcbi.1000680-Peterka2">[8]</xref>
,
<xref ref-type="bibr" rid="pcbi.1000680-Kuo1">[34]</xref>
,
<xref ref-type="bibr" rid="pcbi.1000680-vanderKooij1">[38]</xref>
<xref ref-type="bibr" rid="pcbi.1000680-Carver1">[40]</xref>
. In these models, cue combination was implemented as a change in the sensory weights
<xref ref-type="bibr" rid="pcbi.1000680-Peterka2">[8]</xref>
,
<xref ref-type="bibr" rid="pcbi.1000680-Oie1">[14]</xref>
and incorporation of nonlinear elements
<xref ref-type="bibr" rid="pcbi.1000680-Mergner1">[5]</xref>
,
<xref ref-type="bibr" rid="pcbi.1000680-Kuo1">[34]</xref>
,
<xref ref-type="bibr" rid="pcbi.1000680-vanderKooij2">[39]</xref>
. The control aspect was typically implemented by approximating the human body as a single- or double-link inverted pendulum, linearized about the upright position. These models are powerful tools for describing human behavior as they can describe changes both in amplitude and in phase as stimulus parameters are varied. As current models already largely separate postural control into an estimation part and an estimation-dependent control part, it would be straightforward to combine our estimation system with a dynamical control system.</p>
<p>When the control strategy is linear, any nonlinearity has to come from the estimation stage. If control is nonlinear, there will be interactions between nonlinearities in estimation and control. Our attribution model focuses exclusively on the nonlinearity inherent in the estimation process. If control is nonlinear, part of the effects we describe here may be due to nonlinearities in control and part to estimation. The influence of the nonlinearity in each could be tested by experiments that decouple estimation from control. Importantly, though past models have assumed nonlinearities in the estimation part of the model
<xref ref-type="bibr" rid="pcbi.1000680-Mergner1">[5]</xref>
,
<xref ref-type="bibr" rid="pcbi.1000680-Peterka2">[8]</xref>
,
<xref ref-type="bibr" rid="pcbi.1000680-Oie1">[14]</xref>
,
<xref ref-type="bibr" rid="pcbi.1000680-Kuo1">[34]</xref>
, we give a systematic reason for why this nonlinearity should exist and why it should have approximately the form that has been assumed in past studies.</p>
<p>To test our model, we used visual scene velocities that were, in all likelihood, larger than the uncertainty in our perception of our body sway. Our model analytically demonstrates that for these velocities the gain follows a power law over the visual scene velocity. This raises the question of how the model would perform over a different range of scene velocities. There are two possibilities. First, the nervous system may use power laws to estimate the gain of the postural responses at all visual scene velocities. This, however, is implausible, as it would predict infinite gain near zero velocity. Second, at very small scene velocities, the nervous system may adopt a strategy different from power laws. We argue in favor of the latter possibility. We predict that at scene velocities close to our perceptual threshold of body sway, our attribution model would fail to explain the gain of postural responses. In this situation, the Taylor series expansion that we use can no longer be truncated after the first term and quadratic elements need to be considered (see
<xref ref-type="sec" rid="s4">Methods</xref>
). The attribution model will predict power laws if the prior over visual movements is locally smooth within the range of uncertainty in our perception of body movement.</p>
<p>Ambiguity is a central aspect of various cue combination problems in perception and motor control and here we have characterized its influence on postural control. The success of the attribution model in predicting human behavior suggests that the nervous system may employ simple schemes, such as power laws, to implement the best solution to the problem of sensory ambiguity. While recent research indicates how the nervous system could integrate cues that have Gaussian likelihoods
<xref ref-type="bibr" rid="pcbi.1000680-Ma1">[41]</xref>
or priors
<xref ref-type="bibr" rid="pcbi.1000680-Stocker2">[29]</xref>
, little is known about the way non-Gaussian probability distributions may be represented at the neuronal level. The nonlinearity in cue combination that we observed here raises interesting questions about the underlying neural basis of these computations in the nervous system.</p>
</sec>
<sec id="s4" sec-type="methods">
<title>Methods</title>
<sec id="s4a">
<title>Ethics statement</title>
<p>Ten healthy young adults (age: 20–34 years) participated in our experiment. Subjects had no history of neurological or postural disorders and had normal or corrected-to-normal vision. Subjects were informed about the experimental procedures and informed consent was obtained as per the guidelines of the Institutional Review Board of Northwestern University.</p>
</sec>
<sec id="s4b">
<title>Experimental setup</title>
<p>A computer-generated virtual reality system was used to simulate the movement of the visual environment. Subjects viewed a virtual scene projected via a stereo-capable projector (Electrohome Marquis 8500) onto a 2.6 m×3.2 m back-projection screen. The virtual scene consisted of a 30.5 m wide by 6.1 m high by 30.5 m deep room containing round columns with patterned rugs and a painted ceiling. Beyond the virtual scene was a landscape consisting of mountains, meadows, sky and clouds. Subjects were asked to wear liquid crystal stereo shutter glasses (Stereographics, Inc.), which separated the field-sequential stereo images into right- and left-eye images. Reflective markers (Motion Analysis, Inc.) attached to the shutter glasses provided the real-time orientation of the head, which was used to compute correct perspective and stereo projections for the scene. Consequently, virtual objects retained their true perspective and position in space regardless of the subject's movement.</p>
<p>Subjects stood in front of the visual scene with their feet shoulder-width apart and their arms bent approximately 90° at their elbows. The location of subjects' feet on the support surface was marked; subjects were instructed to stand at the same location at the beginning of each trial. During each trial, subjects were instructed to maintain an upright posture while looking straight ahead at the visual scene. Subjects viewed anterior-posterior sinusoidal oscillation of the scene at 0.2 Hz and 5 peak amplitudes: 1, 3, 25, 100 and 150 cm. The visual scene thus oscillated at peak velocities of 1.2, 3.7, 31, 125 and 188 cm/s, respectively. Subjects viewed each scene velocity once for a period of 60 s in random order. In addition, subjects experienced a control condition in which they viewed the stationary visual scene.</p>
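The reported peak velocities follow directly from the sinusoidal motion: for a displacement A*sin(2*pi*f*t), the peak velocity is 2*pi*f*A. A quick check in Python (illustrative only):

import math

f = 0.2                             # oscillation frequency (Hz)
for A in [1, 3, 25, 100, 150]:      # peak amplitudes (cm)
    print(f"A = {A:3d} cm -> peak velocity {2 * math.pi * f * A:6.1f} cm/s")
# prints approximately 1.3, 3.8, 31.4, 125.7 and 188.5 cm/s, i.e. the reported
# 1.2, 3.7, 31, 125 and 188 cm/s up to rounding.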
<p>Reflective markers were placed on the shoulder joints and the fifth lumbar vertebra. A six-camera infra-red system (Motion Analysis, Inc.) was used to record the displacement of the reflective markers at 120 Hz. Marker displacement data were low-pass filtered using a fourth-order Butterworth digital filter with a 6 Hz cutoff. Trunk displacement, chosen as an indicator of the postural response, was calculated from the displacement of the shoulder and spine markers
<xref ref-type="bibr" rid="pcbi.1000680-Winter1">[42]</xref>
. The amplitude of the postural response at the frequency of the visual scene motion, that is, 0.2 Hz, was calculated following an approach adopted in neurophysiological studies
<xref ref-type="bibr" rid="pcbi.1000680-Bryan1">[43]</xref>
,
<xref ref-type="bibr" rid="pcbi.1000680-Liu1">[44]</xref>
. A sinusoid of frequency 0.2 Hz was chosen. The amplitude and the phase of this sinusoid were estimated such that the squared error between the trunk displacement and the fitted sinusoid was minimized. The amplitude of the fitted sinusoid thus indicated the amplitude of the postural response at the frequency of the visual scene motion. The gain of the trunk displacement was then computed as the ratio of the amplitude of the fitted sinusoid to the amplitude of visual scene motion.</p>
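A hedged sketch of this analysis pipeline is given below: low-pass filter the displacement signal, fit a 0.2 Hz sinusoid by least squares, and compute the gain as the fitted amplitude divided by the scene amplitude. The trunk signal is a placeholder, and the zero-phase filtfilt filtering is an assumed choice; the article does not state how the filter was applied.

import numpy as np
from scipy.signal import butter, filtfilt

fs, f_stim = 120.0, 0.2                    # sampling rate (Hz), stimulus frequency (Hz)
t = np.arange(0, 60, 1 / fs)
trunk = 0.5 * np.sin(2 * np.pi * f_stim * t + 0.3) + 0.05 * np.random.randn(t.size)  # placeholder (cm)

b, a = butter(4, 6.0 / (fs / 2))           # 4th-order Butterworth, 6 Hz cutoff
trunk_f = filtfilt(b, a, trunk)            # zero-phase low-pass filtering (assumption)

# least-squares fit of A*sin(wt) + B*cos(wt) at the stimulus frequency
X = np.column_stack([np.sin(2 * np.pi * f_stim * t), np.cos(2 * np.pi * f_stim * t)])
coef, *_ = np.linalg.lstsq(X, trunk_f, rcond=None)
amplitude = np.hypot(*coef)                # amplitude of the fitted sinusoid

scene_amplitude = 25.0                     # cm, one of the tested amplitudes
print("response gain:", amplitude / scene_amplitude)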
</sec>
<sec id="s4c">
<title>Bayesian model of ambiguity resolution</title>
<p>We formalize the ambiguity problem encountered by the nervous system with the help of a graphical model (
<xref ref-type="fig" rid="pcbi-1000680-g001">Fig. 1A</xref>
). The visual scene projected on the display sinusoidally oscillates with a velocity
<inline-formula>
<inline-graphic xlink:href="pcbi.1000680.e008.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, while the velocity of the body movement is
<inline-formula>
<inline-graphic xlink:href="pcbi.1000680.e009.jpg" mimetype="image"></inline-graphic>
</inline-formula>
.
<inline-formula>
<inline-graphic xlink:href="pcbi.1000680.e010.jpg" mimetype="image"></inline-graphic>
</inline-formula>
represents a noisy estimate of body velocity that is sensed by vestibular and kinesthetic signals. On the other hand,
<inline-formula>
<inline-graphic xlink:href="pcbi.1000680.e011.jpg" mimetype="image"></inline-graphic>
</inline-formula>
represents the visually perceived velocity of the relative movement between the body and the environment. Our Bayesian model combines the sensory cues,
<inline-formula>
<inline-graphic xlink:href="pcbi.1000680.e012.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1000680.e013.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, to obtain the best estimate of body velocity,
<inline-formula>
<inline-graphic xlink:href="pcbi.1000680.e014.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. As the amplitude of postural reactions is influenced by the subject's perceived body movement
<xref ref-type="bibr" rid="pcbi.1000680-Thurrell1">[2]</xref>
,
<xref ref-type="bibr" rid="pcbi.1000680-Tanahashi1">[45]</xref>
, we assume that the nervous system produces body movements proportional to the estimated body velocity
<inline-formula>
<inline-graphic xlink:href="pcbi.1000680.e015.jpg" mimetype="image"></inline-graphic>
</inline-formula>
.</p>
<p>Using Bayes' rule we obtain:
<disp-formula>
<graphic xlink:href="pcbi.1000680.e016"></graphic>
<label>(1)</label>
</disp-formula>
</p>
<p>We assume that the visual and physical channels are affected by independent noise. Therefore, we get:
<disp-formula>
<graphic xlink:href="pcbi.1000680.e017"></graphic>
<label>(2)</label>
</disp-formula>
</p>
<p>We estimated the form of the prior over body velocity,
<inline-formula>
<inline-graphic xlink:href="pcbi.1000680.e018.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, from our data. In our experiment, subjects experienced a control condition where they maintained upright body posture when viewing a stationary visual scene. We computed the average velocity of the trunk displacement across all subjects
<xref ref-type="bibr" rid="pcbi.1000680-Winter1">[42]</xref>
. We then computed a histogram of the body velocity and observed that a Gaussian best described the distribution of body velocity (
<xref ref-type="fig" rid="pcbi-1000680-g001">Fig. 1B</xref>
). We therefore assumed that the subjects' prior over body movements would be represented by a Gaussian. While the actual body movements during unperturbed stance are large, the more relevant quantity is the underlying uncertainty in our perception of our body sway. This uncertainty is much narrower than the width of the distribution of actual body velocities seen in
<xref ref-type="fig" rid="pcbi-1000680-g001">Fig. 1B</xref>
<xref ref-type="bibr" rid="pcbi.1000680-Fitzpatrick1">[31]</xref>
. This is because for small body movements during normal stance, the nervous system may not constrain the body even though it is aware that the body has moved away from the upright position
<xref ref-type="bibr" rid="pcbi.1000680-Collins1">[46]</xref>
.</p>
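A hedged sketch of how the prior over body velocity could be estimated from the control (stationary-scene) trials: differentiate the trunk displacement to obtain velocity, then fit a Gaussian by its sample mean and standard deviation. The trunk trace below is a placeholder, not the recorded data.

import numpy as np

fs = 120.0
trunk = np.cumsum(0.01 * np.random.randn(int(60 * fs)))   # placeholder displacement trace (cm)
velocity = np.gradient(trunk) * fs                         # velocity in cm/s

mu, sigma = velocity.mean(), velocity.std(ddof=1)
print(f"Gaussian prior over body velocity: mean {mu:.3f} cm/s, s.d. {sigma:.3f} cm/s")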
<p>As the likelihood of the physical motion cues,
<inline-formula>
<inline-graphic xlink:href="pcbi.1000680.e019.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, can also be represented by a Gaussian, we define:
<disp-formula>
<graphic xlink:href="pcbi.1000680.e020"></graphic>
<label>(3)</label>
</disp-formula>
Here
<inline-formula>
<inline-graphic xlink:href="pcbi.1000680.e021.jpg" mimetype="image"></inline-graphic>
</inline-formula>
represents a Gaussian for the combined prior-and-likelihood with variance
<inline-formula>
<inline-graphic xlink:href="pcbi.1000680.e022.jpg" mimetype="image"></inline-graphic>
</inline-formula>
.</p>
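<p>Equation 3 rests on the standard identity that the product of a Gaussian likelihood and a Gaussian prior is itself proportional to a Gaussian. With hypothetical variances sigma_l^2 for the physical likelihood and sigma_p^2 for the prior over body velocity (the paper collapses these into the single combined variance introduced above), the identity reads:</p>
<preformat>
\mathcal{N}(v_{phys}; v_{body}, \sigma_l^2)\,\mathcal{N}(v_{body}; 0, \sigma_p^2)
  \;\propto\;
\mathcal{N}\!\left(v_{body};\;
  \frac{\sigma_p^2}{\sigma_p^2 + \sigma_l^2}\, v_{phys},\;
  \frac{\sigma_p^2 \sigma_l^2}{\sigma_p^2 + \sigma_l^2}\right)
</preformat>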
<p>The likelihood of visual motion cues,
<inline-formula>
<inline-graphic xlink:href="pcbi.1000680.e023.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, is given by:
<disp-formula>
<graphic xlink:href="pcbi.1000680.e024"></graphic>
<label>(4)</label>
</disp-formula>
Humans expect visual objects in their environment to move slowly more often than they move rapidly. This bias has been interpreted as a prior in a Bayesian system. We therefore use a sparse prior of the functional form
<inline-formula>
<inline-graphic xlink:href="pcbi.1000680.e025.jpg" mimetype="image"></inline-graphic>
</inline-formula>
<xref ref-type="bibr" rid="pcbi.1000680-Stocker2">[29]</xref>
. As visual cues are precise when compared with other sensory cues, we assume that the variance of the noise in visual channels is negligible. Furthermore, in the experimental situations we model here, movement of the visual display is relatively fast in comparison to the typical uncertainty subjects may have about their body velocity
<xref ref-type="bibr" rid="pcbi.1000680-Fitzpatrick1">[31]</xref>
.</p>
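<p>The exact functional form of this sparse prior is the one shown in the displayed equation and taken from Stocker and Simoncelli [29]; purely as an illustration, sparse slow-speed priors of this kind are commonly written with an exponential or power-law fall-off in the scene speed, for example:</p>
<preformat>
% Illustrative sparse priors over environmental velocity
% (not necessarily the exact form used in this paper)
p(v_e) \propto \exp(-c \, \lvert v_e \rvert)
\qquad \text{or} \qquad
p(v_e) \propto \lvert v_e \rvert^{-c}
</preformat>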
<p>We therefore marginalize over all possible
<inline-formula>
<inline-graphic xlink:href="pcbi.1000680.e026.jpg" mimetype="image"></inline-graphic>
</inline-formula>
to obtain:
<disp-formula>
<graphic xlink:href="pcbi.1000680.e027"></graphic>
<label>(5)</label>
</disp-formula>
Substituting Equations 3 and 5 in Equation 2, we get:
<disp-formula>
<graphic xlink:href="pcbi.1000680.e028"></graphic>
<label>(6)</label>
</disp-formula>
</p>
<p>In the situations we model here, subjects stood on a stationary support surface. Thus, the physical motion cues indicated that the body was close to the upright position; that is
<inline-formula>
<inline-graphic xlink:href="pcbi.1000680.e029.jpg" mimetype="image"></inline-graphic>
</inline-formula>
.</p>
<p>We therefore get:
<disp-formula>
<graphic xlink:href="pcbi.1000680.e030"></graphic>
<label>(7)</label>
</disp-formula>
</p>
<p>For body movements close to the upright position, we can approximate the second exponential term in Equation 7 with a Taylor series expansion, dropping terms of order two and higher. We thus get:
<disp-formula>
<graphic xlink:href="pcbi.1000680.e031"></graphic>
<label>(8)</label>
</disp-formula>
</p>
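<p>The linearization used here is the generic first-order Taylor expansion of the exponent around the upright position; written schematically for some exponent function f (not the authors' specific expression), it is:</p>
<preformat>
\exp\!\big(f(v_{body})\big) \;\approx\; \exp\!\big(f(0) + f'(0)\, v_{body}\big),
\qquad \text{dropping terms of order } v_{body}^{2} \text{ and higher.}
</preformat>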
<p>Importantly, when visual scene velocities are large in comparison to the typical uncertainty in our perception of body movement, the maximum of the (visual) environmental prior lies far from zero body velocity. Because that maximum is far away and the uncertainty in the perception of body movement is narrow, retaining only the zero- and first-order terms of the expansion is well justified.</p>
<p>The resulting estimate represents a Gaussian with a maximum at:
<disp-formula>
<graphic xlink:href="pcbi.1000680.e032"></graphic>
<label>(9)</label>
</disp-formula>
Thus, the best estimate of the body velocity
<inline-formula>
<inline-graphic xlink:href="pcbi.1000680.e033.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, as long as the environment velocity is large in comparison to the typical uncertainty in our perception of body sway, can be represented as a power law of the environment velocity
<inline-formula>
<inline-graphic xlink:href="pcbi.1000680.e034.jpg" mimetype="image"></inline-graphic>
</inline-formula>
.</p>
<p>
<disp-formula>
<graphic xlink:href="pcbi.1000680.e035"></graphic>
<label>(10)</label>
</disp-formula>
Our model thus has two free parameters:
<inline-formula>
<inline-graphic xlink:href="pcbi.1000680.e036.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, the variance of the noise in the combined prior-and-likelihood of the physical motion cues; and
<inline-formula>
<inline-graphic xlink:href="pcbi.1000680.e037.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, the parameter associated with the prior over environmental velocities.</p>
<p>We fitted the model (Equation 10) to the experimentally measured gain of healthy subjects tested in our experiment. We then fitted the model to the experimentally measured gains of healthy subjects and vestibular-deficient patients tested in previous studies
<xref ref-type="bibr" rid="pcbi.1000680-Peterka1">[4]</xref>
,
<xref ref-type="bibr" rid="pcbi.1000680-Mergner1">[5]</xref>
. We chose the model parameters such that the mean squared error between the model fits and the experimental data was minimized.</p>
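<p>A minimal sketch of this fitting step is given below, assuming a generic two-parameter power law (amplitude a, exponent b) as a stand-in for Equation 10, whose exact parameterization in terms of the noise variance and the prior parameter is given above. The data arrays are hypothetical placeholders; curve_fit performs the least-squares minimization described in the text.</p>
<preformat>
# Sketch: fit a two-parameter power-law gain model by minimizing the mean squared
# error between model predictions and measured gains. Data values are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def power_law_gain(v_scene, a, b):
    """Postural gain as a power law of visual scene velocity."""
    return a * np.power(v_scene, b)

v_scene = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])    # scene velocities, cm/s
gain = np.array([0.90, 0.70, 0.52, 0.38, 0.27, 0.20])  # measured gains (placeholder)

params, _ = curve_fit(power_law_gain, v_scene, gain, p0=[1.0, -0.5])
a_hat, b_hat = params
print(f"fitted amplitude = {a_hat:.3f}, fitted exponent = {b_hat:.3f}")
</preformat>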
<p>For healthy subjects, the values of free parameters were as follows:
<inline-formula>
<inline-graphic xlink:href="pcbi.1000680.e038.jpg" mimetype="image"></inline-graphic>
</inline-formula>
 = 0.34 and
<inline-formula>
<inline-graphic xlink:href="pcbi.1000680.e039.jpg" mimetype="image"></inline-graphic>
</inline-formula>
 = 1.32 (for subjects tested in our experiment);
<inline-formula>
<inline-graphic xlink:href="pcbi.1000680.e040.jpg" mimetype="image"></inline-graphic>
</inline-formula>
 = 0.37 and
<inline-formula>
<inline-graphic xlink:href="pcbi.1000680.e041.jpg" mimetype="image"></inline-graphic>
</inline-formula>
 = 1.28 (for subjects tested by Peterka et al.);
<inline-formula>
<inline-graphic xlink:href="pcbi.1000680.e042.jpg" mimetype="image"></inline-graphic>
</inline-formula>
 = 0.33 and
<inline-formula>
<inline-graphic xlink:href="pcbi.1000680.e043.jpg" mimetype="image"></inline-graphic>
</inline-formula>
 = 1.03 (for subjects tested by Mergner et al.). For vestibular-deficient patients, the values of free parameters were as follows:
<inline-formula>
<inline-graphic xlink:href="pcbi.1000680.e044.jpg" mimetype="image"></inline-graphic>
</inline-formula>
 = 0.46 and
<inline-formula>
<inline-graphic xlink:href="pcbi.1000680.e045.jpg" mimetype="image"></inline-graphic>
</inline-formula>
 = 1.7 (for patients tested by Peterka et al.);
<inline-formula>
<inline-graphic xlink:href="pcbi.1000680.e046.jpg" mimetype="image"></inline-graphic>
</inline-formula>
 = 0.524 and
<inline-formula>
<inline-graphic xlink:href="pcbi.1000680.e047.jpg" mimetype="image"></inline-graphic>
</inline-formula>
 = 1.85 (for patients tested by Mergner et al.).</p>
</sec>
<sec id="s4d">
<title>Model comparisons</title>
<p>To test the performance of our attribution model, we compared it with other simple models of postural control.</p>
<p>We first considered a linear model in which the gain of the postural response was constant (
<xref ref-type="fig" rid="pcbi-1000680-g003">Fig. 3A</xref>
). This model had a single free parameter, the gain
<inline-formula>
<inline-graphic xlink:href="pcbi.1000680.e048.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, and had a functional form:
<disp-formula>
<graphic xlink:href="pcbi.1000680.e049"></graphic>
</disp-formula>
</p>
<p>We then developed a nonlinear model that incorporated the findings of published empirical and modeling studies. The amplitude of the postural reaction is known to increase logarithmically with visual scene velocity until it saturates
<xref ref-type="bibr" rid="pcbi.1000680-Peterka1">[4]</xref>
. We tested a model of the functional form (
<xref ref-type="fig" rid="pcbi-1000680-g003">Fig. 3B</xref>
):
<disp-formula>
<graphic xlink:href="pcbi.1000680.e050"></graphic>
</disp-formula>
<disp-formula>
<graphic xlink:href="pcbi.1000680.e051"></graphic>
</disp-formula>
Here
<inline-formula>
<inline-graphic xlink:href="pcbi.1000680.e052.jpg" mimetype="image"></inline-graphic>
</inline-formula>
represents the visual scene velocity at which saturation occurs. We chose
<inline-formula>
<inline-graphic xlink:href="pcbi.1000680.e053.jpg" mimetype="image"></inline-graphic>
</inline-formula>
 = 2.8 cm/s based on previous findings in the literature
<xref ref-type="bibr" rid="pcbi.1000680-Mergner1">[5]</xref>
. This model had a single free parameter, the slope,
<inline-formula>
<inline-graphic xlink:href="pcbi.1000680.e054.jpg" mimetype="image"></inline-graphic>
</inline-formula>
.</p>
<fig id="pcbi-1000680-g003" position="float">
<object-id pub-id-type="doi">10.1371/journal.pcbi.1000680.g003</object-id>
<label>Figure 3</label>
<caption>
<title>Alternative models of postural control.</title>
<p>(A) Model that predicts a constant gain of the postural response. (B) Model in which the amplitude of the postural response increases logarithmically with visual scene velocity and then saturates. (C) Model in which the gain is initially constant and then decreases monotonically with visual scene velocity. Here
<inline-formula>
<inline-graphic xlink:href="pcbi.1000680.e055.jpg" mimetype="image"></inline-graphic>
</inline-formula>
represents the visual scene velocity,
<inline-formula>
<inline-graphic xlink:href="pcbi.1000680.e056.jpg" mimetype="image"></inline-graphic>
</inline-formula>
represents the true body velocity and
<inline-formula>
<inline-graphic xlink:href="pcbi.1000680.e057.jpg" mimetype="image"></inline-graphic>
</inline-formula>
indicates the estimate of the body velocity calculated by the models.</p>
</caption>
<graphic xlink:href="pcbi.1000680.g003"></graphic>
</fig>
<p>We considered another model in which the gain of the postural response is initially constant but decreases monotonically with increasing visual scene velocity (
<xref ref-type="fig" rid="pcbi-1000680-g003">Fig. 3C</xref>
). This model, with three free parameters, has the functional form:
<disp-formula>
<graphic xlink:href="pcbi.1000680.e058"></graphic>
</disp-formula>
<disp-formula>
<graphic xlink:href="pcbi.1000680.e059"></graphic>
</disp-formula>
</p>
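<p>The three alternative models are sketched below as simple functions of the visual scene velocity. These are illustrative forms only; the exact equations are the displayed formulas above, and the parameter names (k, slope, v_sat, g0, v0, a) are hypothetical.</p>
<preformat>
# Illustrative sketches of the three alternative models (not the paper's exact equations).
import numpy as np

def constant_gain_model(v_scene, k):
    """(A) Linear model: the gain of the postural response is constant."""
    return k * v_scene

def log_saturation_model(v_scene, slope, v_sat=2.8):
    """(B) Response grows logarithmically with scene velocity and saturates at v_sat
    (2.8 cm/s in the text, taken from [5])."""
    v_eff = np.minimum(v_scene, v_sat)
    return slope * np.log1p(v_eff)

def decreasing_gain_model(v_scene, g0, v0, a):
    """(C) Gain is initially constant (g0) and decreases monotonically above v0."""
    gain = np.where(v_scene > v0, g0 * np.power(v_scene / v0, -a), g0)
    return gain * v_scene

v = np.array([0.5, 2.0, 8.0])
print(constant_gain_model(v, k=0.6))
print(log_saturation_model(v, slope=0.8))
print(decreasing_gain_model(v, g0=0.6, v0=1.0, a=0.7))
</preformat>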
<p>We fitted these models to the gain values of each subject tested in our experiment. We computed the Bayesian Information Criterion (BIC) for each subject and each model. We then performed a paired
<italic>t</italic>
-test to determine if there was a significant difference in the BIC values for different models.</p>
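<p>A minimal sketch of this model comparison is given below: a BIC value is computed per subject and per model from the residuals of a least-squares fit (assuming Gaussian errors), and two models are then compared across subjects with a paired t-test. All numerical values are hypothetical placeholders; this is not the authors' analysis code.</p>
<preformat>
# Sketch: per-subject BIC for least-squares fits, compared with a paired t-test.
import numpy as np
from scipy.stats import ttest_rel

def bic_from_residuals(residuals, n_free_params):
    """BIC for a least-squares fit assuming Gaussian residuals."""
    n = residuals.size
    rss = np.sum(residuals ** 2)
    return n * np.log(rss / n) + n_free_params * np.log(n)

# Example BIC for one subject's fit residuals (hypothetical values)
example_residuals = np.array([0.05, -0.03, 0.02, -0.04, 0.01, 0.03])
print("example BIC (2 free parameters):", bic_from_residuals(example_residuals, 2))

# Hypothetical per-subject BIC values for two competing models
bic_model_a = np.array([12.3, 10.8, 14.1, 11.5, 13.0])
bic_model_b = np.array([15.2, 13.9, 16.4, 14.0, 15.8])

t_stat, p_value = ttest_rel(bic_model_a, bic_model_b)
print(f"paired t-test on BIC values: t = {t_stat:.2f}, p = {p_value:.4f}")
</preformat>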
</sec>
</sec>
</body>
<back>
<ref-list>
<title>References</title>
<ref id="pcbi.1000680-Fushiki1">
<label>1</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fushiki</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Kobayashi</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Asai</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Watanabe</surname>
<given-names>Y</given-names>
</name>
</person-group>
<year>2005</year>
<article-title>Influence of visually induced self-motion on postural stability.</article-title>
<source>Acta Otolaryngol</source>
<volume>125</volume>
<fpage>60</fpage>
<lpage>64</lpage>
<pub-id pub-id-type="pmid">15799576</pub-id>
</mixed-citation>
</ref>
<ref id="pcbi.1000680-Thurrell1">
<label>2</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Thurrell</surname>
<given-names>AE</given-names>
</name>
<name>
<surname>Bronstein</surname>
<given-names>AM</given-names>
</name>
</person-group>
<year>2002</year>
<article-title>Vection increases the magnitude and accuracy of visually evoked postural responses.</article-title>
<source>Exp Brain Res</source>
<volume>147</volume>
<fpage>558</fpage>
<lpage>560</lpage>
<pub-id pub-id-type="pmid">12444489</pub-id>
</mixed-citation>
</ref>
<ref id="pcbi.1000680-Keshner1">
<label>3</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Keshner</surname>
<given-names>EA</given-names>
</name>
<name>
<surname>Dokka</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Kenyon</surname>
<given-names>RV</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>Influences of the perception of self-motion on postural parameters.</article-title>
<source>Cyberpsychol Behav</source>
<volume>9</volume>
<fpage>163</fpage>
<lpage>166</lpage>
<pub-id pub-id-type="pmid">16640471</pub-id>
</mixed-citation>
</ref>
<ref id="pcbi.1000680-Peterka1">
<label>4</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Peterka</surname>
<given-names>RJ</given-names>
</name>
<name>
<surname>Benolken</surname>
<given-names>MS</given-names>
</name>
</person-group>
<year>1995</year>
<article-title>Role of somatosensory and vestibular cues in attenuating visually induced human postural sway.</article-title>
<source>Exp Brain Res</source>
<volume>105</volume>
<fpage>101</fpage>
<lpage>110</lpage>
<pub-id pub-id-type="pmid">7589307</pub-id>
</mixed-citation>
</ref>
<ref id="pcbi.1000680-Mergner1">
<label>5</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mergner</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Schweigart</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Maurer</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Blumle</surname>
<given-names>A</given-names>
</name>
</person-group>
<year>2005</year>
<article-title>Human postural responses to motion of real and virtual visual environments under different support base conditions.</article-title>
<source>Exp Brain Res</source>
<volume>167</volume>
<fpage>535</fpage>
<lpage>556</lpage>
<pub-id pub-id-type="pmid">16132969</pub-id>
</mixed-citation>
</ref>
<ref id="pcbi.1000680-Dokka1">
<label>6</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dokka</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Kenyon</surname>
<given-names>RV</given-names>
</name>
<name>
<surname>Keshner</surname>
<given-names>EA</given-names>
</name>
</person-group>
<year>2009</year>
<article-title>Influence of visual scene velocity on segmental kinematics during stance.</article-title>
<source>Gait Posture</source>
<volume>30</volume>
<fpage>211</fpage>
<lpage>216</lpage>
<pub-id pub-id-type="pmid">19505827</pub-id>
</mixed-citation>
</ref>
<ref id="pcbi.1000680-Keshner2">
<label>7</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Keshner</surname>
<given-names>EA</given-names>
</name>
<name>
<surname>Kenyon</surname>
<given-names>RV</given-names>
</name>
</person-group>
<year>2000</year>
<article-title>The influence of an immersive virtual environment on the segmental organization of postural stabilizing responses.</article-title>
<source>J Vestib Res</source>
<volume>10</volume>
<fpage>207</fpage>
<lpage>219</lpage>
<pub-id pub-id-type="pmid">11354434</pub-id>
</mixed-citation>
</ref>
<ref id="pcbi.1000680-Peterka2">
<label>8</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Peterka</surname>
<given-names>RJ</given-names>
</name>
</person-group>
<year>2002</year>
<article-title>Sensorimotor integration in human postural control.</article-title>
<source>J Neurophysiol</source>
<volume>88</volume>
<fpage>1097</fpage>
<lpage>1118</lpage>
<pub-id pub-id-type="pmid">12205132</pub-id>
</mixed-citation>
</ref>
<ref id="pcbi.1000680-Jeka1">
<label>9</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jeka</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Oie</surname>
<given-names>KS</given-names>
</name>
<name>
<surname>Kiemel</surname>
<given-names>T</given-names>
</name>
</person-group>
<year>2000</year>
<article-title>Multisensory information for human postural control: integrating touch and vision.</article-title>
<source>Exp Brain Res</source>
<volume>134</volume>
<fpage>107</fpage>
<lpage>125</lpage>
<pub-id pub-id-type="pmid">11026732</pub-id>
</mixed-citation>
</ref>
<ref id="pcbi.1000680-Lee1">
<label>10</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lee</surname>
<given-names>DN</given-names>
</name>
<name>
<surname>Lishman</surname>
<given-names>JR</given-names>
</name>
</person-group>
<year>1975</year>
<article-title>Visual proprioceptive control of stance.</article-title>
<source>Journal of Human Movement Studies</source>
<volume>1</volume>
<fpage>87</fpage>
<lpage>95</lpage>
</mixed-citation>
</ref>
<ref id="pcbi.1000680-Lishman1">
<label>11</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lishman</surname>
<given-names>JR</given-names>
</name>
<name>
<surname>Lee</surname>
<given-names>DN</given-names>
</name>
</person-group>
<year>1973</year>
<article-title>The autonomy of visual kinaesthesis.</article-title>
<source>Perception</source>
<volume>2</volume>
<fpage>287</fpage>
<lpage>294</lpage>
<pub-id pub-id-type="pmid">4546578</pub-id>
</mixed-citation>
</ref>
<ref id="pcbi.1000680-Wright1">
<label>12</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wright</surname>
<given-names>WG</given-names>
</name>
<name>
<surname>DiZio</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Lackner</surname>
<given-names>JR</given-names>
</name>
</person-group>
<year>2005</year>
<article-title>Vertical linear self-motion perception during visual and inertial motion: more than weighted summation of sensory inputs.</article-title>
<source>J Vestib Res</source>
<volume>15</volume>
<fpage>185</fpage>
<lpage>195</lpage>
<pub-id pub-id-type="pmid">16286700</pub-id>
</mixed-citation>
</ref>
<ref id="pcbi.1000680-Horak1">
<label>13</label>
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Horak</surname>
<given-names>FB</given-names>
</name>
<name>
<surname>MacPherson</surname>
<given-names>JM</given-names>
</name>
</person-group>
<year>1996</year>
<article-title>Postural orientation and equilibrium.</article-title>
<person-group person-group-type="editor">
<name>
<surname>Rowell</surname>
<given-names>LB</given-names>
</name>
<name>
<surname>Sheperd</surname>
<given-names>JT</given-names>
</name>
</person-group>
<source>Exercise: Regulation and Integration of Multiple Systems</source>
<publisher-loc>New York</publisher-loc>
<publisher-name>Oxford University Press</publisher-name>
<fpage>255</fpage>
<lpage>292</lpage>
</mixed-citation>
</ref>
<ref id="pcbi.1000680-Oie1">
<label>14</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Oie</surname>
<given-names>KS</given-names>
</name>
<name>
<surname>Kiemel</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Jeka</surname>
<given-names>JJ</given-names>
</name>
</person-group>
<year>2002</year>
<article-title>Multisensory fusion: simultaneous re-weighting of vision and touch for the control of human posture.</article-title>
<source>Brain Res Cogn Brain Res</source>
<volume>14</volume>
<fpage>164</fpage>
<lpage>176</lpage>
<pub-id pub-id-type="pmid">12063140</pub-id>
</mixed-citation>
</ref>
<ref id="pcbi.1000680-Keshner3">
<label>15</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Keshner</surname>
<given-names>EA</given-names>
</name>
<name>
<surname>Kenyon</surname>
<given-names>RV</given-names>
</name>
<name>
<surname>Langston</surname>
<given-names>J</given-names>
</name>
</person-group>
<year>2004</year>
<article-title>Postural responses exhibit multisensory dependencies with discordant visual and support surface motion.</article-title>
<source>J Vestib Res</source>
<volume>14</volume>
<fpage>307</fpage>
<lpage>319</lpage>
<pub-id pub-id-type="pmid">15328445</pub-id>
</mixed-citation>
</ref>
<ref id="pcbi.1000680-Blumle1">
<label>16</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Blumle</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Maurer</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Schweigart</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Mergner</surname>
<given-names>T</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>A cognitive intersensory interaction mechanism in human postural control.</article-title>
<source>Exp Brain Res</source>
<volume>173</volume>
<fpage>357</fpage>
<lpage>363</lpage>
<pub-id pub-id-type="pmid">16491407</pub-id>
</mixed-citation>
</ref>
<ref id="pcbi.1000680-Landy1">
<label>17</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Landy</surname>
<given-names>MS</given-names>
</name>
<name>
<surname>Maloney</surname>
<given-names>LT</given-names>
</name>
<name>
<surname>Johnston</surname>
<given-names>EB</given-names>
</name>
<name>
<surname>Young</surname>
<given-names>M</given-names>
</name>
</person-group>
<year>1995</year>
<article-title>Measurement and modeling of depth cue combination: in defense of weak fusion.</article-title>
<source>Vision Res</source>
<volume>35</volume>
<fpage>389</fpage>
<lpage>412</lpage>
<pub-id pub-id-type="pmid">7892735</pub-id>
</mixed-citation>
</ref>
<ref id="pcbi.1000680-Sober1">
<label>18</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sober</surname>
<given-names>SJ</given-names>
</name>
<name>
<surname>Sabes</surname>
<given-names>PN</given-names>
</name>
</person-group>
<year>2005</year>
<article-title>Flexible strategies for sensory integration during motor planning.</article-title>
<source>Nat Neurosci</source>
<volume>8</volume>
<fpage>490</fpage>
<lpage>497</lpage>
<pub-id pub-id-type="pmid">15793578</pub-id>
</mixed-citation>
</ref>
<ref id="pcbi.1000680-Ernst1">
<label>19</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ernst</surname>
<given-names>MO</given-names>
</name>
<name>
<surname>Banks</surname>
<given-names>MS</given-names>
</name>
</person-group>
<year>2002</year>
<article-title>Humans integrate visual and haptic information in a statistically optimal fashion.</article-title>
<source>Nature</source>
<volume>415</volume>
<fpage>429</fpage>
<lpage>433</lpage>
<pub-id pub-id-type="pmid">11807554</pub-id>
</mixed-citation>
</ref>
<ref id="pcbi.1000680-vanBeers1">
<label>20</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>van Beers</surname>
<given-names>RJ</given-names>
</name>
<name>
<surname>Sittig</surname>
<given-names>AC</given-names>
</name>
<name>
<surname>Denier van der Gon</surname>
<given-names>JJ</given-names>
</name>
</person-group>
<year>1998</year>
<article-title>The precision of proprioceptive position sense.</article-title>
<source>Exp Brain Res</source>
<volume>122</volume>
<fpage>367</fpage>
<lpage>377</lpage>
<pub-id pub-id-type="pmid">9827856</pub-id>
</mixed-citation>
</ref>
<ref id="pcbi.1000680-Ernst2">
<label>21</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ernst</surname>
<given-names>MO</given-names>
</name>
<name>
<surname>Bulthoff</surname>
<given-names>HH</given-names>
</name>
</person-group>
<year>2004</year>
<article-title>Merging the senses into a robust percept.</article-title>
<source>Trends Cogn Sci</source>
<volume>8</volume>
<fpage>162</fpage>
<lpage>169</lpage>
<pub-id pub-id-type="pmid">15050512</pub-id>
</mixed-citation>
</ref>
<ref id="pcbi.1000680-Knill1">
<label>22</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Knill</surname>
<given-names>DC</given-names>
</name>
</person-group>
<year>2007</year>
<article-title>Robust cue integration: a Bayesian model and evidence from cue-conflict studies with stereoscopic and figure cues to slant.</article-title>
<source>J Vis</source>
<volume>7</volume>
<fpage>5 1</fpage>
<lpage>24</lpage>
<pub-id pub-id-type="pmid">17685801</pub-id>
</mixed-citation>
</ref>
<ref id="pcbi.1000680-Wolpert1">
<label>23</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wolpert</surname>
<given-names>DM</given-names>
</name>
<name>
<surname>Ghahramani</surname>
<given-names>Z</given-names>
</name>
<name>
<surname>Jordan</surname>
<given-names>MI</given-names>
</name>
</person-group>
<year>1995</year>
<article-title>An internal model for sensorimotor integration.</article-title>
<source>Science</source>
<volume>269</volume>
<fpage>1880</fpage>
<lpage>1882</lpage>
<pub-id pub-id-type="pmid">7569931</pub-id>
</mixed-citation>
</ref>
<ref id="pcbi.1000680-Knill2">
<label>24</label>
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Knill</surname>
<given-names>DC</given-names>
</name>
<name>
<surname>Richards</surname>
<given-names>W</given-names>
</name>
</person-group>
<year>1996</year>
<source>Perception as Bayesian Inference</source>
<publisher-loc>New York</publisher-loc>
<publisher-name>Cambridge University Press</publisher-name>
</mixed-citation>
</ref>
<ref id="pcbi.1000680-Hartung1">
<label>25</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hartung</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Schrater</surname>
<given-names>PR</given-names>
</name>
<name>
<surname>Bulthoff</surname>
<given-names>HH</given-names>
</name>
<name>
<surname>Kersten</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Franz</surname>
<given-names>VH</given-names>
</name>
</person-group>
<year>2005</year>
<article-title>Is prior knowledge of object geometry used in visually guided reaching?</article-title>
<source>J Vis</source>
<volume>5</volume>
<fpage>504</fpage>
<lpage>514</lpage>
<pub-id pub-id-type="pmid">16097863</pub-id>
</mixed-citation>
</ref>
<ref id="pcbi.1000680-Kording1">
<label>26</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kording</surname>
<given-names>KP</given-names>
</name>
<name>
<surname>Wolpert</surname>
<given-names>DM</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>Bayesian decision theory in sensorimotor control.</article-title>
<source>Trends Cogn Sci</source>
<volume>10</volume>
<fpage>319</fpage>
<lpage>326</lpage>
<pub-id pub-id-type="pmid">16807063</pub-id>
</mixed-citation>
</ref>
<ref id="pcbi.1000680-Simoncelli1">
<label>27</label>
<mixed-citation publication-type="other">
<person-group person-group-type="author">
<name>
<surname>Simoncelli</surname>
<given-names>EP</given-names>
</name>
<name>
<surname>Adelson</surname>
<given-names>EH</given-names>
</name>
<name>
<surname>Heeger</surname>
<given-names>DJ</given-names>
</name>
</person-group>
<year>1991</year>
<article-title>Probability distributions of optic flow.</article-title>
<fpage>310</fpage>
<lpage>315</lpage>
<comment>IEEE Comput Soc Conf Comput Vision and Pattern Recogn</comment>
</mixed-citation>
</ref>
<ref id="pcbi.1000680-Stocker1">
<label>28</label>
<mixed-citation publication-type="other">
<person-group person-group-type="author">
<name>
<surname>Stocker</surname>
<given-names>AA</given-names>
</name>
<name>
<surname>Simoncelli</surname>
<given-names>EP</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>Sensory adaptation within a Bayesian framework for perception.</article-title>
<comment>NIPS Advances in Neural Information Processing Systems Volume 18</comment>
</mixed-citation>
</ref>
<ref id="pcbi.1000680-Stocker2">
<label>29</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Stocker</surname>
<given-names>AA</given-names>
</name>
<name>
<surname>Simoncelli</surname>
<given-names>EP</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>Noise characteristics and prior expectations in human visual speed perception.</article-title>
<source>Nat Neurosci</source>
<volume>9</volume>
<fpage>578</fpage>
<lpage>585</lpage>
<pub-id pub-id-type="pmid">16547513</pub-id>
</mixed-citation>
</ref>
<ref id="pcbi.1000680-Weiss1">
<label>30</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Weiss</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Simoncelli</surname>
<given-names>EP</given-names>
</name>
<name>
<surname>Adelson</surname>
<given-names>EH</given-names>
</name>
</person-group>
<year>2002</year>
<article-title>Motion illusions as optimal percepts.</article-title>
<source>Nat Neurosci</source>
<volume>5</volume>
<fpage>598</fpage>
<lpage>604</lpage>
<pub-id pub-id-type="pmid">12021763</pub-id>
</mixed-citation>
</ref>
<ref id="pcbi.1000680-Fitzpatrick1">
<label>31</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fitzpatrick</surname>
<given-names>R</given-names>
</name>
<name>
<surname>McCloskey</surname>
<given-names>DI</given-names>
</name>
</person-group>
<year>1994</year>
<article-title>Proprioceptive, visual and vestibular thresholds for the perception of sway during standing in humans.</article-title>
<source>J Physiol</source>
<volume>478 ( Pt 1)</volume>
<fpage>173</fpage>
<lpage>186</lpage>
<pub-id pub-id-type="pmid">7965833</pub-id>
</mixed-citation>
</ref>
<ref id="pcbi.1000680-Gigerenzer1">
<label>32</label>
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Gigerenzer</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Todd</surname>
<given-names>PM</given-names>
</name>
</person-group>
<collab>Research Group ABC</collab>
<year>1999</year>
<source>Simple heuristics that make us smart</source>
<publisher-loc>New York</publisher-loc>
<publisher-name>Oxford University Press</publisher-name>
</mixed-citation>
</ref>
<ref id="pcbi.1000680-Mackay1">
<label>33</label>
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Mackay</surname>
<given-names>D</given-names>
</name>
</person-group>
<year>2002</year>
<source>Information Theory, Inference, and Learning Algorithms</source>
<publisher-name>Cambridge University Press</publisher-name>
</mixed-citation>
</ref>
<ref id="pcbi.1000680-Kuo1">
<label>34</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kuo</surname>
<given-names>AD</given-names>
</name>
</person-group>
<year>2005</year>
<article-title>An optimal state estimation model of sensory integration in human postural balance.</article-title>
<source>J Neural Eng</source>
<volume>2</volume>
<fpage>S235</fpage>
<lpage>249</lpage>
<pub-id pub-id-type="pmid">16135887</pub-id>
</mixed-citation>
</ref>
<ref id="pcbi.1000680-Cohen1">
<label>35</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cohen</surname>
<given-names>H</given-names>
</name>
</person-group>
<year>1994</year>
<article-title>Vestibular rehabilitation improves daily life function.</article-title>
<source>Am J Occup Ther</source>
<volume>48</volume>
<fpage>919</fpage>
<lpage>925</lpage>
<pub-id pub-id-type="pmid">7825708</pub-id>
</mixed-citation>
</ref>
<ref id="pcbi.1000680-Jacob1">
<label>36</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jacob</surname>
<given-names>RG</given-names>
</name>
<name>
<surname>Furman</surname>
<given-names>JM</given-names>
</name>
<name>
<surname>Durrant</surname>
<given-names>JD</given-names>
</name>
<name>
<surname>Turner</surname>
<given-names>SM</given-names>
</name>
</person-group>
<year>1996</year>
<article-title>Panic, agoraphobia, and vestibular dysfunction.</article-title>
<source>Am J Psychiatry</source>
<volume>153</volume>
<fpage>503</fpage>
<lpage>512</lpage>
<pub-id pub-id-type="pmid">8599398</pub-id>
</mixed-citation>
</ref>
<ref id="pcbi.1000680-Beidel1">
<label>37</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Beidel</surname>
<given-names>DC</given-names>
</name>
<name>
<surname>Horak</surname>
<given-names>FB</given-names>
</name>
</person-group>
<year>2001</year>
<article-title>Behavior therapy for vestibular rehabilitation.</article-title>
<source>J Anxiety Disord</source>
<volume>15</volume>
<fpage>121</fpage>
<lpage>130</lpage>
<pub-id pub-id-type="pmid">11388355</pub-id>
</mixed-citation>
</ref>
<ref id="pcbi.1000680-vanderKooij1">
<label>38</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>van der Kooij</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Jacobs</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Koopman</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Grootenboer</surname>
<given-names>H</given-names>
</name>
</person-group>
<year>1999</year>
<article-title>A multisensory integration model of human stance control.</article-title>
<source>Biol Cybern</source>
<volume>80</volume>
<fpage>299</fpage>
<lpage>308</lpage>
<pub-id pub-id-type="pmid">10365423</pub-id>
</mixed-citation>
</ref>
<ref id="pcbi.1000680-vanderKooij2">
<label>39</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>van der Kooij</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Jacobs</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Koopman</surname>
<given-names>B</given-names>
</name>
<name>
<surname>van der Helm</surname>
<given-names>F</given-names>
</name>
</person-group>
<year>2001</year>
<article-title>An adaptive model of sensory integration in a dynamic environment applied to human stance control.</article-title>
<source>Biol Cybern</source>
<volume>84</volume>
<fpage>103</fpage>
<lpage>115</lpage>
<pub-id pub-id-type="pmid">11205347</pub-id>
</mixed-citation>
</ref>
<ref id="pcbi.1000680-Carver1">
<label>40</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Carver</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Kiemel</surname>
<given-names>T</given-names>
</name>
<name>
<surname>van der Kooij</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Jeka</surname>
<given-names>JJ</given-names>
</name>
</person-group>
<year>2005</year>
<article-title>Comparing internal models of the dynamics of the visual environment.</article-title>
<source>Biol Cybern</source>
<volume>92</volume>
<fpage>147</fpage>
<lpage>163</lpage>
<pub-id pub-id-type="pmid">15703940</pub-id>
</mixed-citation>
</ref>
<ref id="pcbi.1000680-Ma1">
<label>41</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ma</surname>
<given-names>WJ</given-names>
</name>
<name>
<surname>Beck</surname>
<given-names>JM</given-names>
</name>
<name>
<surname>Latham</surname>
<given-names>PE</given-names>
</name>
<name>
<surname>Pouget</surname>
<given-names>A</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>Bayesian inference with probabilistic population codes.</article-title>
<source>Nat Neurosci</source>
<volume>9</volume>
<fpage>1432</fpage>
<lpage>1438</lpage>
<pub-id pub-id-type="pmid">17057707</pub-id>
</mixed-citation>
</ref>
<ref id="pcbi.1000680-Winter1">
<label>42</label>
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Winter</surname>
<given-names>DA</given-names>
</name>
</person-group>
<year>2004</year>
<source>Biomechanics and control of human movement</source>
<publisher-name>Wiley Publications</publisher-name>
</mixed-citation>
</ref>
<ref id="pcbi.1000680-Bryan1">
<label>43</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bryan</surname>
<given-names>AS</given-names>
</name>
<name>
<surname>Angelaki</surname>
<given-names>DE</given-names>
</name>
</person-group>
<year>2009</year>
<article-title>Optokinetic and vestibular responsiveness in the macaque rostral vestibular and fastigial nuclei.</article-title>
<source>J Neurophysiol</source>
<volume>101</volume>
<fpage>714</fpage>
<lpage>720</lpage>
<pub-id pub-id-type="pmid">19073813</pub-id>
</mixed-citation>
</ref>
<ref id="pcbi.1000680-Liu1">
<label>44</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Liu</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Angelaki</surname>
<given-names>DE</given-names>
</name>
</person-group>
<year>2009</year>
<article-title>Vestibular signals in macaque extrastriate visual cortex are functionally appropriate for heading perception.</article-title>
<source>J Neurosci</source>
<volume>29</volume>
<fpage>8936</fpage>
<lpage>8945</lpage>
<pub-id pub-id-type="pmid">19605631</pub-id>
</mixed-citation>
</ref>
<ref id="pcbi.1000680-Tanahashi1">
<label>45</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tanahashi</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Ujike</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Kozawa</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Ukai</surname>
<given-names>K</given-names>
</name>
</person-group>
<year>2007</year>
<article-title>Effects of visually simulated roll motion on vection and postural stabilization.</article-title>
<source>J Neuroeng Rehabil</source>
<volume>4</volume>
<fpage>39</fpage>
<pub-id pub-id-type="pmid">17922922</pub-id>
</mixed-citation>
</ref>
<ref id="pcbi.1000680-Collins1">
<label>46</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Collins</surname>
<given-names>JJ</given-names>
</name>
<name>
<surname>De Luca</surname>
<given-names>CJ</given-names>
</name>
</person-group>
<year>1993</year>
<article-title>Open-loop and closed-loop control of posture: a random-walk analysis of center-of-pressure trajectories.</article-title>
<source>Exp Brain Res</source>
<volume>95</volume>
<fpage>308</fpage>
<lpage>318</lpage>
<pub-id pub-id-type="pmid">8224055</pub-id>
</mixed-citation>
</ref>
</ref-list>
<fn-group>
<fn fn-type="conflict">
<p>The authors have declared that no competing interests exist.</p>
</fn>
<fn fn-type="financial-disclosure">
<p>This work was supported by NIH grants RO1 DC05235 (awarded to EAK), 5RO1 NS057814 and 1R01NS063399 (awarded to KPK), the Falk Trust and the Chicago Community Trust (awarded to KPK). The Funding agencies did not have any role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.</p>
</fn>
</fn-group>
</back>
</pmc>
<affiliations>
<list>
<country>
<li>États-Unis</li>
</country>
<region>
<li>Illinois</li>
<li>Missouri (État)</li>
<li>Pennsylvanie</li>
</region>
</list>
<tree>
<country name="États-Unis">
<region name="Missouri (État)">
<name sortKey="Dokka, Kalpana" sort="Dokka, Kalpana" uniqKey="Dokka K" first="Kalpana" last="Dokka">Kalpana Dokka</name>
</region>
<name sortKey="Kenyon, Robert V" sort="Kenyon, Robert V" uniqKey="Kenyon R" first="Robert V." last="Kenyon">Robert V. Kenyon</name>
<name sortKey="Keshner, Emily A" sort="Keshner, Emily A" uniqKey="Keshner E" first="Emily A." last="Keshner">Emily A. Keshner</name>
<name sortKey="Keshner, Emily A" sort="Keshner, Emily A" uniqKey="Keshner E" first="Emily A." last="Keshner">Emily A. Keshner</name>
<name sortKey="Kording, Konrad P" sort="Kording, Konrad P" uniqKey="Kording K" first="Konrad P." last="Kording">Konrad P. Kording</name>
</country>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Ncbi/Merge
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 001455 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd -nk 001455 | SxmlIndent | more

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Ncbi
   |étape=   Merge
   |type=    RBID
   |clé=     PMC:2824754
   |texte=   Self versus Environment Motion in Postural Control
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/RBID.i   -Sk "pubmed:20174552" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024