Exploration server on haptic devices


Multi-Sensory Weights Depend on Contextual Noise in Reference Frame Transformations

Internal identifier: 001817 (Ncbi/Merge); previous: 001816; next: 001818


Authors: Jessica Katherine Burns [Canada]; Gunnar Blohm [Canada]

Source:

RBID: PMC:3002464

Abstract

During reach planning, we integrate multiple senses to estimate the location of the hand and the target, which is used to generate a movement. Visual and proprioceptive information are combined to determine the location of the hand. The goal of this study was to investigate whether multi-sensory integration is affected by extraretinal signals, such as head roll. It is believed that a coordinate matching transformation is required before vision and proprioception can be combined because proprioceptive and visual sensory reference frames do not generally align. This transformation utilizes extraretinal signals about current head roll position, i.e., to rotate proprioceptive signals into visual coordinates. Since head roll is an estimated sensory signal with noise, this head roll dependency of the reference frame transformation should introduce additional noise to the transformed signal, reducing its reliability and thus its weight in the multi-sensory integration. To investigate the role of noisy reference frame transformations on multi-sensory weighting, we developed a novel probabilistic (Bayesian) multi-sensory integration model (based on Sober and Sabes, 2003) that included explicit (noisy) reference frame transformations. We then performed a reaching experiment to test the model's predictions. To test for head roll dependent multi-sensory integration, we introduced conflicts between viewed and actual hand position and measured reach errors. Reach analysis revealed that eccentric head roll orientations led to an increase of movement variability, consistent with our model. We further found that the weighting of vision and proprioception depended on head roll, which we interpret as being a result of signal-dependent noise. Thus, the brain has online knowledge of the statistics of its internal sensory representations. In summary, we show that sensory reliability is used in a context-dependent way to adjust multi-sensory integration weights for reaching.


URL:
DOI: 10.3389/fnhum.2010.00221
PubMed: 21165177
PubMed Central: 3002464

Links to previous steps (curation, corpus...)


Links to Exploration step

PMC:3002464

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Multi-Sensory Weights Depend on Contextual Noise in Reference Frame Transformations</title>
<author>
<name sortKey="Burns, Jessica Katherine" sort="Burns, Jessica Katherine" uniqKey="Burns J" first="Jessica Katherine" last="Burns">Jessica Katherine Burns</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Centre for Neuroscience Studies, Queen's University</institution>
<country>Kingston, ON, Canada</country>
</nlm:aff>
<country xml:lang="fr">Canada</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>Canadian Action and Perception Network (CAPnet)</institution>
<country>Canada</country>
</nlm:aff>
<country xml:lang="fr">Canada</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Blohm, Gunnar" sort="Blohm, Gunnar" uniqKey="Blohm G" first="Gunnar" last="Blohm">Gunnar Blohm</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Centre for Neuroscience Studies, Queen's University</institution>
<country>Kingston, ON, Canada</country>
</nlm:aff>
<country xml:lang="fr">Canada</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>Canadian Action and Perception Network (CAPnet)</institution>
<country>Canada</country>
</nlm:aff>
<country xml:lang="fr">Canada</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">21165177</idno>
<idno type="pmc">3002464</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3002464</idno>
<idno type="RBID">PMC:3002464</idno>
<idno type="doi">10.3389/fnhum.2010.00221</idno>
<date when="2010">2010</date>
<idno type="wicri:Area/Pmc/Corpus">001E02</idno>
<idno type="wicri:Area/Pmc/Curation">001E02</idno>
<idno type="wicri:Area/Pmc/Checkpoint">001E30</idno>
<idno type="wicri:Area/Ncbi/Merge">001817</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Multi-Sensory Weights Depend on Contextual Noise in Reference Frame Transformations</title>
<author>
<name sortKey="Burns, Jessica Katherine" sort="Burns, Jessica Katherine" uniqKey="Burns J" first="Jessica Katherine" last="Burns">Jessica Katherine Burns</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Centre for Neuroscience Studies, Queen's University</institution>
<country>Kingston, ON, Canada</country>
</nlm:aff>
<country xml:lang="fr">Canada</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>Canadian Action and Perception Network (CAPnet)</institution>
<country>Canada</country>
</nlm:aff>
<country xml:lang="fr">Canada</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Blohm, Gunnar" sort="Blohm, Gunnar" uniqKey="Blohm G" first="Gunnar" last="Blohm">Gunnar Blohm</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Centre for Neuroscience Studies, Queen's University</institution>
<country>Kingston, ON, Canada</country>
</nlm:aff>
<country xml:lang="fr">Canada</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>Canadian Action and Perception Network (CAPnet)</institution>
<country>Canada</country>
</nlm:aff>
<country xml:lang="fr">Canada</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Frontiers in Human Neuroscience</title>
<idno type="eISSN">1662-5161</idno>
<imprint>
<date when="2010">2010</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>During reach planning, we integrate multiple senses to estimate the location of the hand and the target, which is used to generate a movement. Visual and proprioceptive information are combined to determine the location of the hand. The goal of this study was to investigate whether multi-sensory integration is affected by extraretinal signals, such as head roll. It is believed that a coordinate matching transformation is required before vision and proprioception can be combined because proprioceptive and visual sensory reference frames do not generally align. This transformation utilizes extraretinal signals about current head roll position, i.e., to rotate proprioceptive signals into visual coordinates. Since head roll is an estimated sensory signal with noise, this head roll dependency of the reference frame transformation should introduce additional noise to the transformed signal, reducing its reliability and thus its weight in the multi-sensory integration. To investigate the role of noisy reference frame transformations on multi-sensory weighting, we developed a novel probabilistic (Bayesian) multi-sensory integration model (based on Sober and Sabes,
<xref ref-type="bibr" rid="B52">2003</xref>
) that included explicit (noisy) reference frame transformations. We then performed a reaching experiment to test the model's predictions. To test for head roll dependent multi-sensory integration, we introduced conflicts between viewed and actual hand position and measured reach errors. Reach analysis revealed that eccentric head roll orientations led to an increase of movement variability, consistent with our model. We further found that the weighting of vision and proprioception depended on head roll, which we interpret as being a result of signal-dependent noise. Thus, the brain has online knowledge of the statistics of its internal sensory representations. In summary, we show that sensory reliability is used in a context-dependent way to adjust multi-sensory integration weights for reaching.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Andersen, R A" uniqKey="Andersen R">R. A. Andersen</name>
</author>
<author>
<name sortKey="Mountcastle, V B" uniqKey="Mountcastle V">V. B. Mountcastle</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Armstrong, B" uniqKey="Armstrong B">B. Armstrong</name>
</author>
<author>
<name sortKey="Mcnair, P" uniqKey="Mcnair P">P. McNair</name>
</author>
<author>
<name sortKey="Williams, M" uniqKey="Williams M">M. Williams</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Atkins, J E" uniqKey="Atkins J">J. E. Atkins</name>
</author>
<author>
<name sortKey="Fiser, J" uniqKey="Fiser J">J. Fiser</name>
</author>
<author>
<name sortKey="Jacobs, R A" uniqKey="Jacobs R">R. A. Jacobs</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Blohm, G" uniqKey="Blohm G">G. Blohm</name>
</author>
<author>
<name sortKey="Crawford, J D" uniqKey="Crawford J">J. D. Crawford</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Blohm, G" uniqKey="Blohm G">G. Blohm</name>
</author>
<author>
<name sortKey="Keith, G P" uniqKey="Keith G">G. P. Keith</name>
</author>
<author>
<name sortKey="Crawford, J D" uniqKey="Crawford J">J. D. Crawford</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bockisch, C J" uniqKey="Bockisch C">C. J. Bockisch</name>
</author>
<author>
<name sortKey="Haslwanter, T" uniqKey="Haslwanter T">T. Haslwanter</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Buneo, C A" uniqKey="Buneo C">C. A. Buneo</name>
</author>
<author>
<name sortKey="Andersen, R A" uniqKey="Andersen R">R. A. Andersen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Buneo, C A" uniqKey="Buneo C">C. A. Buneo</name>
</author>
<author>
<name sortKey="Jarvis, M R" uniqKey="Jarvis M">M. R. Jarvis</name>
</author>
<author>
<name sortKey="Batista, A P" uniqKey="Batista A">A. P. Batista</name>
</author>
<author>
<name sortKey="Andersen, R A" uniqKey="Andersen R">R. A. Andersen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Burke, D" uniqKey="Burke D">D. Burke</name>
</author>
<author>
<name sortKey="Hagbarth, K" uniqKey="Hagbarth K">K. Hagbarth</name>
</author>
<author>
<name sortKey="Lofstedt, L" uniqKey="Lofstedt L">L. Lofstedt</name>
</author>
<author>
<name sortKey="Wallin, B G" uniqKey="Wallin B">B. G. Wallin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Burr, D" uniqKey="Burr D">D. Burr</name>
</author>
<author>
<name sortKey="Banks, M S" uniqKey="Banks M">M. S. Banks</name>
</author>
<author>
<name sortKey="Marrone, M C" uniqKey="Marrone M">M. C. Marrone</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Clark, F J" uniqKey="Clark F">F. J. Clark</name>
</author>
<author>
<name sortKey="Burgess, P R" uniqKey="Burgess P">P. R. Burgess</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chang, S W C" uniqKey="Chang S">S. W. C. Chang</name>
</author>
<author>
<name sortKey="Papadimitriou, C" uniqKey="Papadimitriou C">C. Papadimitriou</name>
</author>
<author>
<name sortKey="Snyder, L H" uniqKey="Snyder L">L. H. Snyder</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cohen, Y E" uniqKey="Cohen Y">Y. E. Cohen</name>
</author>
<author>
<name sortKey="Andersen, R A" uniqKey="Andersen R">R. A. Andersen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cordo, P J" uniqKey="Cordo P">P. J. Cordo</name>
</author>
<author>
<name sortKey="Flores Vieira, C" uniqKey="Flores Vieira C">C. Flores-Vieira</name>
</author>
<author>
<name sortKey="Verschueren, S M P" uniqKey="Verschueren S">S. M. P. Verschueren</name>
</author>
<author>
<name sortKey="Inglis, J T" uniqKey="Inglis J">J. T. Inglis</name>
</author>
<author>
<name sortKey="Gurfinke, V" uniqKey="Gurfinke V">V. Gurfinke</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Collewijn, H" uniqKey="Collewijn H">H. Collewijn</name>
</author>
<author>
<name sortKey="Van Der Steen, J" uniqKey="Van Der Steen J">J. Van der Steen</name>
</author>
<author>
<name sortKey="Ferman, L" uniqKey="Ferman L">L. Ferman</name>
</author>
<author>
<name sortKey="Jansen, T C" uniqKey="Jansen T">T. C. Jansen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Deneve, S" uniqKey="Deneve S">S. Denève</name>
</author>
<author>
<name sortKey="Latham, P E" uniqKey="Latham P">P.E. Latham</name>
</author>
<author>
<name sortKey="Pouget, A" uniqKey="Pouget A">A. Pouget</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dyde, R T" uniqKey="Dyde R">R. T. Dyde</name>
</author>
<author>
<name sortKey="Jenkin, M R" uniqKey="Jenkin M">M. R. Jenkin</name>
</author>
<author>
<name sortKey="Harris, L R" uniqKey="Harris L">L. R. Harris</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Edin, B B" uniqKey="Edin B">B. B. Edin</name>
</author>
<author>
<name sortKey="Vallbo, A B" uniqKey="Vallbo A">A. B. Vallbo</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Engel, K C" uniqKey="Engel K">K. C. Engel</name>
</author>
<author>
<name sortKey="Flanders, M" uniqKey="Flanders M">M. Flanders</name>
</author>
<author>
<name sortKey="Soechting, J F" uniqKey="Soechting J">J. F. Soechting</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ernst, M O" uniqKey="Ernst M">M. O. Ernst</name>
</author>
<author>
<name sortKey="Banks, M S" uniqKey="Banks M">M. S. Banks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ernst, M O" uniqKey="Ernst M">M. O. Ernst</name>
</author>
<author>
<name sortKey="Bulthoff, H H" uniqKey="Bulthoff H">H. H. Bulthoff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Faisal, A A" uniqKey="Faisal A">A. A. Faisal</name>
</author>
<author>
<name sortKey="Selen, L P" uniqKey="Selen L">L. P. Selen</name>
</author>
<author>
<name sortKey="Wolpert, D M" uniqKey="Wolpert D">D. M. Wolpert</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fernandez, C" uniqKey="Fernandez C">C. Fernandez</name>
</author>
<author>
<name sortKey="Goldberg, J M" uniqKey="Goldberg J">J. M. Goldberg</name>
</author>
<author>
<name sortKey="Abend, W K" uniqKey="Abend W">W. K. Abend</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gandevia, S C" uniqKey="Gandevia S">S. C. Gandevia</name>
</author>
<author>
<name sortKey="Mccloskey, D I" uniqKey="Mccloskey D">D. I. McCloskey</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gellman, R S" uniqKey="Gellman R">R. S. Gellman</name>
</author>
<author>
<name sortKey="Fletcher, W A" uniqKey="Fletcher W">W. A. Fletcher</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ghahramani, Z" uniqKey="Ghahramani Z">Z. Ghahramani</name>
</author>
<author>
<name sortKey="Wolpert, D M" uniqKey="Wolpert D">D. M. Wolpert</name>
</author>
<author>
<name sortKey="Jordan, M I" uniqKey="Jordan M">M. I. Jordan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Goodwin, G M" uniqKey="Goodwin G">G. M. Goodwin</name>
</author>
<author>
<name sortKey="Mccloskey, D I" uniqKey="Mccloskey D">D. I. McCloskey</name>
</author>
<author>
<name sortKey="Mathews, P B C" uniqKey="Mathews P">P. B. C. Mathews</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Green, A M" uniqKey="Green A">A. M. Green</name>
</author>
<author>
<name sortKey="Angelaki, D E" uniqKey="Angelaki D">D. E. Angelaki</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hagura, N" uniqKey="Hagura N">N. Hagura</name>
</author>
<author>
<name sortKey="Takei, T" uniqKey="Takei T">T. Takei</name>
</author>
<author>
<name sortKey="Hirose, S" uniqKey="Hirose S">S. Hirose</name>
</author>
<author>
<name sortKey="Aramaki, Y" uniqKey="Aramaki Y">Y. Aramaki</name>
</author>
<author>
<name sortKey="Matsumura, M" uniqKey="Matsumura M">M. Matsumura</name>
</author>
<author>
<name sortKey="Sadata, N" uniqKey="Sadata N">N. Sadata</name>
</author>
<author>
<name sortKey="Naito, E" uniqKey="Naito E">E. Naito</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Haslwanter, T" uniqKey="Haslwanter T">T. Haslwanter</name>
</author>
<author>
<name sortKey="Straumann, D" uniqKey="Straumann D">D. Straumann</name>
</author>
<author>
<name sortKey="Hess, B J" uniqKey="Hess B">B. J. Hess</name>
</author>
<author>
<name sortKey="Henn, V" uniqKey="Henn V">V. Henn</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jordan, M I" uniqKey="Jordan M">M. I. Jordan</name>
</author>
<author>
<name sortKey="Flahs, T" uniqKey="Flahs T">T. Flahs</name>
</author>
<author>
<name sortKey="Arnon, Y" uniqKey="Arnon Y">Y. Arnon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jordan, M I" uniqKey="Jordan M">M. I. Jordan</name>
</author>
<author>
<name sortKey="Rumelhart, D E" uniqKey="Rumelhart D">D. E. Rumelhart</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kersten, D" uniqKey="Kersten D">D. Kersten</name>
</author>
<author>
<name sortKey="Mamassian, P" uniqKey="Mamassian P">P. Mamassian</name>
</author>
<author>
<name sortKey="Yuille, A" uniqKey="Yuille A">A. Yuille</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Knill, D C" uniqKey="Knill D">D. C. Knill</name>
</author>
<author>
<name sortKey="Pouget, A" uniqKey="Pouget A">A. Pouget</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kording, K P" uniqKey="Kording K">K. P. Körding</name>
</author>
<author>
<name sortKey="Tenenbaum, J B" uniqKey="Tenenbaum J">J. B. Tenenbaum</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lacquaniti, F" uniqKey="Lacquaniti F">F. Lacquaniti</name>
</author>
<author>
<name sortKey="Caminiti, R" uniqKey="Caminiti R">R. Caminiti</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Landy, M S" uniqKey="Landy M">M. S. Landy</name>
</author>
<author>
<name sortKey="Kojima, H" uniqKey="Kojima H">H. Kojima</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Landy, M S" uniqKey="Landy M">M. S. Landy</name>
</author>
<author>
<name sortKey="Maloney, L" uniqKey="Maloney L">L. Maloney</name>
</author>
<author>
<name sortKey="Johnston, E B" uniqKey="Johnston E">E. B. Johnston</name>
</author>
<author>
<name sortKey="Young, M" uniqKey="Young M">M. Young</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lechner Steinleitner, S" uniqKey="Lechner Steinleitner S">S. Lechner-Steinleitner</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lee, S" uniqKey="Lee S">S. Lee</name>
</author>
<author>
<name sortKey="Terzoloulos, D" uniqKey="Terzoloulos D">D. Terzoloulos</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Li, W" uniqKey="Li W">W. Li</name>
</author>
<author>
<name sortKey="Matin, L" uniqKey="Matin L">L. Matin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ma, W J" uniqKey="Ma W">W. J. Ma</name>
</author>
<author>
<name sortKey="Beck, J M" uniqKey="Beck J">J. M. Beck</name>
</author>
<author>
<name sortKey="Latham, P E" uniqKey="Latham P">P. E. Latham</name>
</author>
<author>
<name sortKey="Pouget, A" uniqKey="Pouget A">A. Pouget</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mcintyre, J" uniqKey="Mcintyre J">J. McIntyre</name>
</author>
<author>
<name sortKey="Stratta, F" uniqKey="Stratta F">F. Stratta</name>
</author>
<author>
<name sortKey="Lacquaniti, F" uniqKey="Lacquaniti F">F. Lacquaniti</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mcguire, L M" uniqKey="Mcguire L">L. M. McGuire</name>
</author>
<author>
<name sortKey="Sabes, P N" uniqKey="Sabes P">P. N. Sabes</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mon Williams, M" uniqKey="Mon Williams M">M. Mon-Williams</name>
</author>
<author>
<name sortKey="Wann, J P" uniqKey="Wann J">J. P. Wann</name>
</author>
<author>
<name sortKey="Jenkinson, M" uniqKey="Jenkinson M">M. Jenkinson</name>
</author>
<author>
<name sortKey="Rushton, K" uniqKey="Rushton K">K. Rushton</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nadler, J W" uniqKey="Nadler J">J. W. Nadler</name>
</author>
<author>
<name sortKey="Angelaki, D E" uniqKey="Angelaki D">D. E. Angelaki</name>
</author>
<author>
<name sortKey="Deangelis, G C" uniqKey="Deangelis G">G. C. DeAngelis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ren, L" uniqKey="Ren L">L. Ren</name>
</author>
<author>
<name sortKey="Blohm, G" uniqKey="Blohm G">G. Blohm</name>
</author>
<author>
<name sortKey="Crawford, J D" uniqKey="Crawford J">J. D. Crawford</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ren, L" uniqKey="Ren L">L. Ren</name>
</author>
<author>
<name sortKey="Khan, A Z" uniqKey="Khan A">A. Z. Khan</name>
</author>
<author>
<name sortKey="Blohm, G" uniqKey="Blohm G">G. Blohm</name>
</author>
<author>
<name sortKey="Henriques, D Y P" uniqKey="Henriques D">D. Y. P. Henriques</name>
</author>
<author>
<name sortKey="Sergio, L E" uniqKey="Sergio L">L. E. Sergio</name>
</author>
<author>
<name sortKey="Crawford, J D" uniqKey="Crawford J">J. D. Crawford</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rossetti, Y" uniqKey="Rossetti Y">Y. Rossetti</name>
</author>
<author>
<name sortKey="Desmurget, M" uniqKey="Desmurget M">M. Desmurget</name>
</author>
<author>
<name sortKey="Prablanc, C" uniqKey="Prablanc C">C. Prablanc</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sadeghi, S G" uniqKey="Sadeghi S">S. G. Sadeghi</name>
</author>
<author>
<name sortKey="Chacron, M J" uniqKey="Chacron M">M. J. Chacron</name>
</author>
<author>
<name sortKey="Taylor, M C" uniqKey="Taylor M">M. C. Taylor</name>
</author>
<author>
<name sortKey="Cullen, K E" uniqKey="Cullen K">K. E. Cullen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Scott, S H" uniqKey="Scott S">S. H. Scott</name>
</author>
<author>
<name sortKey="Loeb, G E" uniqKey="Loeb G">G. E. Loeb</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sober, S J" uniqKey="Sober S">S. J. Sober</name>
</author>
<author>
<name sortKey="Sabes, P N" uniqKey="Sabes P">P. N. Sabes</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sober, S J" uniqKey="Sober S">S. J. Sober</name>
</author>
<author>
<name sortKey="Sabes, P N" uniqKey="Sabes P">P. N. Sabes</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stein, B E" uniqKey="Stein B">B. E. Stein</name>
</author>
<author>
<name sortKey="Meredith, M A" uniqKey="Meredith M">M. A. Meredith</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stein, B E" uniqKey="Stein B">B. E. Stein</name>
</author>
<author>
<name sortKey="Stanford, T R" uniqKey="Stanford T">T. R. Stanford</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tarnutzer, A A" uniqKey="Tarnutzer A">A. A. Tarnutzer</name>
</author>
<author>
<name sortKey="Bockisch, C" uniqKey="Bockisch C">C. Bockisch</name>
</author>
<author>
<name sortKey="Straumann, D" uniqKey="Straumann D">D. Straumann</name>
</author>
<author>
<name sortKey="Olasagasti, I" uniqKey="Olasagasti I">I. Olasagasti</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Beers, R J" uniqKey="Van Beers R">R. J. Van Beers</name>
</author>
<author>
<name sortKey="Sittig, A C" uniqKey="Sittig A">A. C. Sittig</name>
</author>
<author>
<name sortKey="Denier Van Der Gon, J J" uniqKey="Denier Van Der Gon J">J. J. Denier Van Der Gon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Beers, R J" uniqKey="Van Beers R">R. J. Van Beers</name>
</author>
<author>
<name sortKey="Wolpert, D M" uniqKey="Wolpert D">D. M. Wolpert</name>
</author>
<author>
<name sortKey="Haggard, P" uniqKey="Haggard P">P. Haggard</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Beuzekom, A D" uniqKey="Van Beuzekom A">A. D. Van Beuzekom</name>
</author>
<author>
<name sortKey="Van Gisbergen, J A M" uniqKey="Van Gisbergen J">J. A. M. Van Gisbergen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wade, S W" uniqKey="Wade S">S. W. Wade</name>
</author>
<author>
<name sortKey="Curthoys, I S" uniqKey="Curthoys I">I. S. Curthoys</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Front Hum Neurosci</journal-id>
<journal-id journal-id-type="publisher-id">Front. Hum. Neurosci.</journal-id>
<journal-title-group>
<journal-title>Frontiers in Human Neuroscience</journal-title>
</journal-title-group>
<issn pub-type="epub">1662-5161</issn>
<publisher>
<publisher-name>Frontiers Research Foundation</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">21165177</article-id>
<article-id pub-id-type="pmc">3002464</article-id>
<article-id pub-id-type="doi">10.3389/fnhum.2010.00221</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Multi-Sensory Weights Depend on Contextual Noise in Reference Frame Transformations</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Burns</surname>
<given-names>Jessica Katherine</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Blohm</surname>
<given-names>Gunnar</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<xref ref-type="author-notes" rid="fn001">*</xref>
</contrib>
</contrib-group>
<aff id="aff1">
<sup>1</sup>
<institution>Centre for Neuroscience Studies, Queen's University</institution>
<country>Kingston, ON, Canada</country>
</aff>
<aff id="aff2">
<sup>2</sup>
<institution>Canadian Action and Perception Network (CAPnet)</institution>
<country>Canada</country>
</aff>
<author-notes>
<fn fn-type="edited-by">
<p>Edited by: Francisco Barcelo, University of Illes Balears, Spain</p>
</fn>
<fn fn-type="edited-by">
<p>Reviewed by: Konrad Koerding, Northwestern University, USA; Samuel Sober, Emory University, USA</p>
</fn>
<corresp id="fn001">*Correspondence: Gunnar Blohm, Centre for Neuroscience Studies, Queen's University, Kingston, ON, Canada K7L 3N6. e-mail:
<email>gunnar.blohm@queensu.ca</email>
</corresp>
</author-notes>
<pub-date pub-type="epub">
<day>07</day>
<month>12</month>
<year>2010</year>
</pub-date>
<pub-date pub-type="collection">
<year>2010</year>
</pub-date>
<volume>4</volume>
<elocation-id>221</elocation-id>
<history>
<date date-type="received">
<day>19</day>
<month>8</month>
<year>2010</year>
</date>
<date date-type="accepted">
<day>04</day>
<month>11</month>
<year>2010</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright © 2010 Blohm and Burns.</copyright-statement>
<copyright-year>2010</copyright-year>
<license license-type="open-access" xlink:href="http://www.frontiersin.org/licenseagreement">
<license-p>This is an open-access article subject to an exclusive license agreement between the authors and the Frontiers Research Foundation, which permits unrestricted use, distribution, and reproduction in any medium, provided the original authors and source are credited.</license-p>
</license>
</permissions>
<abstract>
<p>During reach planning, we integrate multiple senses to estimate the location of the hand and the target, which is used to generate a movement. Visual and proprioceptive information are combined to determine the location of the hand. The goal of this study was to investigate whether multi-sensory integration is affected by extraretinal signals, such as head roll. It is believed that a coordinate matching transformation is required before vision and proprioception can be combined because proprioceptive and visual sensory reference frames do not generally align. This transformation utilizes extraretinal signals about current head roll position, i.e., to rotate proprioceptive signals into visual coordinates. Since head roll is an estimated sensory signal with noise, this head roll dependency of the reference frame transformation should introduce additional noise to the transformed signal, reducing its reliability and thus its weight in the multi-sensory integration. To investigate the role of noisy reference frame transformations on multi-sensory weighting, we developed a novel probabilistic (Bayesian) multi-sensory integration model (based on Sober and Sabes,
<xref ref-type="bibr" rid="B52">2003</xref>
) that included explicit (noisy) reference frame transformations. We then performed a reaching experiment to test the model's predictions. To test for head roll dependent multi-sensory integration, we introduced conflicts between viewed and actual hand position and measured reach errors. Reach analysis revealed that eccentric head roll orientations led to an increase of movement variability, consistent with our model. We further found that the weighting of vision and proprioception depended on head roll, which we interpret as being a result of signal-dependent noise. Thus, the brain has online knowledge of the statistics of its internal sensory representations. In summary, we show that sensory reliability is used in a context-dependent way to adjust multi-sensory integration weights for reaching.</p>
</abstract>
<kwd-group>
<kwd>reaching</kwd>
<kwd>Bayesian integration</kwd>
<kwd>multi-sensory</kwd>
<kwd>proprioception</kwd>
<kwd>vision</kwd>
<kwd>context</kwd>
<kwd>head roll</kwd>
<kwd>reference frames</kwd>
</kwd-group>
<counts>
<fig-count count="12"></fig-count>
<table-count count="1"></table-count>
<equation-count count="25"></equation-count>
<ref-count count="60"></ref-count>
<page-count count="15"></page-count>
<word-count count="11258"></word-count>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="introduction">
<title>Introduction</title>
<p>We are constantly presented with a multitude of sensory information about ourselves and our environment. Using multi-sensory integration, our brains combine all available information from each sensory modality (e.g., vision, audition, somato-sensation, etc.) (Landy et al.,
<xref ref-type="bibr" rid="B38">1995</xref>
; Landy and Kojima,
<xref ref-type="bibr" rid="B37">2001</xref>
; Ernst and Bulthoff,
<xref ref-type="bibr" rid="B21">2004</xref>
; Kersten et al.,
<xref ref-type="bibr" rid="B33">2004</xref>
; Stein and Stanford,
<xref ref-type="bibr" rid="B55">2008</xref>
; Burr et al.,
<xref ref-type="bibr" rid="B10">2009</xref>
; Green and Angelaki,
<xref ref-type="bibr" rid="B28">2010</xref>
). Although this tactic seems redundant, considering that the senses often provide similar information, having more than one sensory modality contributing to the representation of ourselves in the environment reduces the chance of processing error (Ghahramani et al.,
<xref ref-type="bibr" rid="B26">1997</xref>
). It becomes especially important when the incoming sensory representations we receive are conflicting. When this occurs, the reliability assigned to each modality determines how much we can trust the information provided. Here we explore how context-dependent sensory-motor transformations affect the modality-specific reliability.</p>
<p>Multi-sensory integration is a process that incorporates sensory information to create the best possible representation of ourselves in the environment. Our brain uses knowledge of how reliable each sensory modality is, and weights the incoming information accordingly (Stein and Meredith,
<xref ref-type="bibr" rid="B54">1993</xref>
; Landy et al.,
<xref ref-type="bibr" rid="B38">1995</xref>
; Atkins et al.,
<xref ref-type="bibr" rid="B3">2001</xref>
; Landy and Kojima,
<xref ref-type="bibr" rid="B37">2001</xref>
; Kersten et al.,
<xref ref-type="bibr" rid="B33">2004</xref>
; Stein and Stanford,
<xref ref-type="bibr" rid="B55">2008</xref>
). Bayesian integration is an approach that assigns these specific weights in a statistically optimal fashion based on how reliable the cues are (Mon-Williams et al.,
<xref ref-type="bibr" rid="B45">1997</xref>
; Ernst and Banks,
<xref ref-type="bibr" rid="B20">2002</xref>
; Knill and Pouget,
<xref ref-type="bibr" rid="B34">2004</xref>
). For example, when trying to figure out where our hand is, we can use both visual and proprioceptive (i.e., sensed) information to determine its location (Van Beers et al.,
<xref ref-type="bibr" rid="B58">2002</xref>
; Ren et al.,
<xref ref-type="bibr" rid="B48">2006</xref>
,
<xref ref-type="bibr" rid="B47">2007</xref>
). When visual information is available it is generally weighted more heavily than proprioceptive information due to the higher spatial accuracy that is associated with it (Hagura et al.,
<xref ref-type="bibr" rid="B29">2007</xref>
).</p>
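As an illustration of this reliability-based weighting, here is a minimal sketch (hypothetical Python with made-up variances, not taken from the article) of the standard maximum-likelihood combination in which each cue is weighted by its inverse variance:

def fuse(x_vis, var_vis, x_prop, var_prop):
    # Reliability-weighted (maximum-likelihood) fusion of two position cues.
    w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_prop)   # visual weight (alpha)
    x_hat = w_vis * x_vis + (1.0 - w_vis) * x_prop               # fused position estimate
    var_hat = 1.0 / (1.0 / var_vis + 1.0 / var_prop)             # fused variance, never larger than either cue
    return x_hat, var_hat, w_vis

# Illustrative numbers only: vision assumed four times more reliable than proprioception.
x_hat, var_hat, alpha = fuse(x_vis=0.0, var_vis=1.0, x_prop=25.0, var_prop=4.0)
print(alpha, x_hat, var_hat)   # alpha = 0.8, so the estimate lies 80% of the way toward the visual cue

With these assumed numbers the fused hand position sits 5 mm from the visual cue and 20 mm from the proprioceptive cue, mirroring the observation that vision usually dominates when it is available.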
<p>Previous studies have used reaching tasks to specifically examine how proprioceptive and visual information is weighted and integrated (Van Beers et al.,
<xref ref-type="bibr" rid="B57">1999</xref>
; Sober and Sabes,
<xref ref-type="bibr" rid="B52">2003</xref>
,
<xref ref-type="bibr" rid="B53">2005</xref>
). When planning a reaching movement, knowledge about target position relative to the starting hand location is required to create a movement vector. This movement vector is then used to calculate how joint angles have to change for the hand to move from the starting location to the target position using inverse kinematics and dynamics (Jordan and Rumelhart,
<xref ref-type="bibr" rid="B32">1992</xref>
; Jordan et al.,
<xref ref-type="bibr" rid="B31">1994</xref>
). The assessment of target position is generally obtained through vision, whereas initial hand position (IHP) can be calculated using both vision and proprioception (Rossetti et al.,
<xref ref-type="bibr" rid="B49">1995</xref>
). Although it is easy to recognize what different sources of information are used to calculate IHP, knowing how this information is weighted and integrated is not.</p>
<p>The problem we are addressing in this manuscript is that visual and proprioceptive information are encoded separately in different coordinate frames. If both of these cues are believed to have the same cause then they can be integrated into a single estimate. However if causality is not certain then the nervous system might treat both signals separately; the degree of causal belief can thus affect multi-sensory integration (Körding and Tenenbaum,
<xref ref-type="bibr" rid="B35">2007</xref>
). An important aspect that has never been considered explicitly is that in order for vision and proprioception to be combined, they must be represented in the same coordinate frame (Buneo et al.,
<xref ref-type="bibr" rid="B8">2002</xref>
). In other words, one set of information will have to be transformed into a representation that matches the other. Such a coordinate transformation between proprioceptive and visual coordinates depends on the orientation of the eyes and head and is potentially quite complex (Blohm and Crawford,
<xref ref-type="bibr" rid="B4">2007</xref>
). The question then becomes, what set of information will be encoded into the other? In reaching, it is thought that this transformation depends on the stage of reach planning. Sober and Sabes (
<xref ref-type="bibr" rid="B52">2003</xref>
,
<xref ref-type="bibr" rid="B53">2005</xref>
) proposed a dual-comparison hypothesis describing how information from vision and proprioception could be combined during a reaching task. They suggest that visual and proprioceptive signals are combined at two different stages. First, when the movement plan is being determined in visual coordinates; and second, when the visual movement plan is transformed into a motor command (proprioceptive coordinates). The latter requires knowledge of IHP in joint coordinates. They showed that estimating the position of the arm for movement planning relied mostly on visual information, whereas proprioceptive information was more heavily weighted when determining current joint angle configuration to compute the inverse kinematics. The reason why there should be two separate estimates (one in visual and one in proprioceptive coordinates) lies in the mathematical fact that the maximum likelihood estimate is different in both coordinate systems (Körding and Tenenbaum,
<xref ref-type="bibr" rid="B35">2007</xref>
; McGuire and Sabes,
<xref ref-type="bibr" rid="B44">2009</xref>
). Therefore, having two distinct estimates reduces the overall estimation uncertainty because no additional transformations that might introduce noise are required.</p>
<p>The main hypothesis of this previous work was that the difference in sensory weighting between reference frames arises from the cost of transformation between reference frames. This idea is based on the assumption that any transformation introduces noise into the transformed signal. In general, noise can arise from at least two distinct sources, i.e., from variability in the sensory readings and from the stochastic behavior of spike-mediated signal processing in the brain. Adding noise in the reference frame transformation thus increases uncertainty in coordinate alignment (Körding and Tenenbaum,
<xref ref-type="bibr" rid="B35">2007</xref>
) resulting in lower reliability of the transformed signal and therefore lower weighting (Sober and Sabes,
<xref ref-type="bibr" rid="B52">2003</xref>
; McGuire and Sabes,
<xref ref-type="bibr" rid="B44">2009</xref>
). While it seems unlikely that neuronal noise from the stochastic behavior of spike-mediated signal processing changes across experimental conditions (this is believed to be a constant in a given brain area), uncertainty of coordinate alignment should increase with head roll. This is based on the hypothesis that the internal estimates of the head orientation signals themselves would be more variable (noisy) for head orientations away from primary (upright) positions (Wade and Curthoys,
<xref ref-type="bibr" rid="B60">1997</xref>
; Van Beuzekom and Van Gisbergen,
<xref ref-type="bibr" rid="B59">2000</xref>
; Blohm and Crawford,
<xref ref-type="bibr" rid="B4">2007</xref>
). This variability could be caused by signal-dependent noise in muscle spindle firing rates, or in vestibular neurons signaling head orientation (Lechner-Steinleitner,
<xref ref-type="bibr" rid="B39">1978</xref>
; Scott and Loeb,
<xref ref-type="bibr" rid="B51">1994</xref>
; Cordo et al.,
<xref ref-type="bibr" rid="B14">2002</xref>
; Sadeghi et al.,
<xref ref-type="bibr" rid="B50">2007</xref>
; Faisal et al.,
<xref ref-type="bibr" rid="B22">2008</xref>
).</p>
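To make this argument concrete, the sketch below (illustrative Python; the hand position, roll angles, and noise levels are assumptions, not values from the study) propagates head roll uncertainty through the rotation that maps a proprioceptive hand position into visual coordinates. The added positional variance grows with both the roll noise and the distance of the hand from the rotation axis, i.e., it is signal-dependent:

import numpy as np

def transformed_covariance(p, roll_deg, sigma_roll_deg, n=100000, seed=0):
    # Monte Carlo propagation of a noisy head roll estimate through a 2D rotation.
    rng = np.random.default_rng(seed)
    angles = np.deg2rad(rng.normal(roll_deg, sigma_roll_deg, n))
    c, s = np.cos(angles), np.sin(angles)
    rotated = np.stack([c * p[0] - s * p[1], s * p[0] + c * p[1]], axis=1)
    return np.cov(rotated, rowvar=False)   # covariance of the transformed hand position

p = np.array([150.0, 0.0])                     # hand 15 cm from the rotation axis (assumed)
for roll, sigma in [(0.0, 1.0), (30.0, 3.0)]:  # assume roll noise grows for eccentric head roll
    cov = transformed_covariance(p, roll, sigma)
    print(roll, np.trace(cov))                 # total variance added by the transformation (mm^2)

With these assumed numbers the transformation adds roughly nine times more positional variance at 30° of head roll than with the head upright, which is the mechanism by which the transformed (proprioceptive) signal would lose weight in the fusion.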
<p>To evaluate the notion that multi-sensory integration occurs, subjects performed a reaching task in which visual and proprioceptive information about hand position differed. We expanded Sober and Sabes’ (
<xref ref-type="bibr" rid="B52">2003</xref>
,
<xref ref-type="bibr" rid="B53">2005</xref>
) model into a fully Bayesian model to test how reference frame transformation noise affects multi-sensory integration. To behaviorally test this, we introduced context changes by altering the subject's head roll angle. Again, the rationale was that head roll would affect the reference frame transformations that have to occur during reach planning (Blohm and Crawford,
<xref ref-type="bibr" rid="B4">2007</xref>
) but would not affect the reliability of primary sensory information (i.e., vision or proprioception). Importantly, we hypothesized that larger head roll noise would lead to noisier reference frame transformations, which in turn would render any transformed signal less reliable. Our main goal was to determine the effect of head roll on sensory transformations and its consequences for multi-sensory integration weights.</p>
</sec>
<sec sec-type="materials|methods">
<title>Materials and Methods</title>
<sec>
<title>Participants</title>
<p>Experiments were performed on seven participants between 20 and 24 years of age, all of whom had normal or corrected-to-normal vision. Participants performed the reaching task with their dominant right hand. All of the participants gave their written informed consent to the experimental conditions that were approved by the Queen's University General Board of Ethics.</p>
</sec>
<sec>
<title>Apparatus</title>
<p>While seated, participants performed a reaching task in an augmented reality setup (Figure
<xref ref-type="fig" rid="F1">1</xref>
A) using a Phantom Haptic Interface 3.0L (Sensable Technologies; Woburn, MA, USA). Their heads were securely positioned using a mounted bite bar that could be adjusted vertically (up/down), tilted forward and backward (head pitch), and rotated left/right (head roll to either shoulder). Subjects viewed stimuli that were projected onto an overhead screen through a semi-mirrored surface (Figure
<xref ref-type="fig" rid="F1">1</xref>
A). Underneath this mirrored surface was an opaque board that prevented the subjects from viewing their hand. In order to track reaching movements, subjects grasped a vertical handle (attached to the Phantom Robot) mounted on an air sled that slid across a horizontal glass surface at elbow height.</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption>
<p>
<bold>Experimental set up and apparatus</bold>
.
<bold>(A)</bold>
Experimental apparatus. Targets were displayed on a semi-silvered mirror. Subjects' head position was kept in place using a bite bar. Reaches were made below the semi-silvered mirror using the Phantom robot.
<bold>(B)</bold>
A top view of the subject with all possible target positions. Initial hand positions are shown (−25, 0, and 25 mm). Subjects began each trial by aligning the visual cue representing their hand with the center cross, and then continued by reaching to one of six targets that would appear (see text for details).</p>
</caption>
<graphic xlink:href="fnhum-04-00221-g001"></graphic>
</fig>
<p>Eye movements were recorded using electrooculography (EOG; 16-channel Bagnoli EMG system; DELSYS; Boston, MA, USA). Two pairs of electrodes were placed on the face (Blue Sensor M; Ambu; Ballerup, Denmark). The first pair was located on the outer edges of the left and right eyes to measure horizontal eye movements. The second pair was placed above and below one of the subject's eyes to measure vertical eye movements. An additional ground electrode was placed on the first lumbar vertebra to record external electrical noise (Dermatrode; American Imex; Irvine, CA, USA).</p>
</sec>
<sec>
<title>Task design</title>
<p>Subjects began each trial by aligning a blue dot (0.5 cm) on the display that represented their unseen hand position with a start position (cross) that was positioned in the center of the display field. A perturbation was introduced such that the visual position of the IHP was constant but the actual IHP of the reach varied among three positions (−25, 0, and 25 mm horizontally with respect to visual start position – VSP). The blue dot representing the hand was only visible when hand position was within 3 cm of the central cross. Once the hand was in this position, one of six peripheral targets (1.0 cm white dots) would randomly appear 250 ms later. The appearance of a target was accompanied by an audio cue. At the same time the center cross turned yellow. Once the subject's hand began to move the hand cursor disappeared. Subjects were instructed to perform rapid reaching movements toward the visual targets while keeping gaze fixated on the center position (cross). Targets were positioned at 10-cm distance from the start position cross at 60, 90, 120, 240, 270, and 300° (see Figure
<xref ref-type="fig" rid="F1">1</xref>
B).</p>
<p>Once the subject's reach crossed the 10-cm target circle, an audio cue would indicate that they successfully completed the reach, and the center cross would disappear. If subjects were too slow at reaching this distance threshold (more than 750 ms after target onset), a different audio cue was played, indicating that the trial was aborted and would have to be repeated. At the end of each reach, subjects had to wait 500 ms before returning to the start position; an audio cue indicated the end of the trial, and the center cross reappeared. This was to ensure subjects received no visual feedback of their performance. Subjects were instructed to fixate the central start position cross (VSP cross) throughout the trial.</p>
<p>Subjects completed the task at three different head roll positions: −30° (left), 0°, and 30° (right) head roll toward the shoulders (mathematical angle convention, viewed from behind the subject). Throughout each head roll condition, the proprioceptive information about hand position was altered on random trials, 2.5 cm left or right of the visual hand marker. For example, subjects would align the visual circle representing their hand with the start cross, but their actual hand position might be shifted 2.5 cm to the right or left. Subjects were not aware of the IHP shift when asked after the experiment. We introduced this discrepancy between visual and actual hand position to gain insight into the relative weighting of both signals in the multi-sensory integration process. For each hand offset, subjects reached to each target twenty times, and they did this for each head roll. Subjects completed 360 trials at each head position, for a total of 1080 reaches. Head roll was constant within a block of trials.</p>
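For reference, the target layout and trial counts described above can be reconstructed as follows (a small sketch of the stated design, not code from the study):

import numpy as np

target_angles_deg = [60, 90, 120, 240, 270, 300]    # target directions around the start cross
target_radius_mm = 100.0                             # targets 10 cm from the start position
targets = [(target_radius_mm * np.cos(np.deg2rad(a)),
            target_radius_mm * np.sin(np.deg2rad(a))) for a in target_angles_deg]

ihp_offsets_mm = [-25, 0, 25]    # actual initial hand position offsets
head_rolls_deg = [-30, 0, 30]    # head roll conditions (blocked)
reps = 20                        # reaches per target and hand offset

per_head_roll = len(ihp_offsets_mm) * len(target_angles_deg) * reps
print(per_head_roll, per_head_roll * len(head_rolls_deg))   # 360 trials per head roll, 1080 in total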
</sec>
<sec>
<title>Data analysis</title>
<p>Eye and hand movements were monitored online at a sampling rate of 1000 Hz (16-channel Bagnoli EMG system, Delsys; Boston, MA, USA; Phantom Haptic Interface 3.0L; Sensable Technologies; Woburn, MA, USA). Offline analyses were performed using Matlab (The Mathworks, Natick, MA, USA). Arm position data was low-pass filtered (autoregressive forward–backward filter, cutoff frequency = 50 Hz) and differentiated twice (central difference algorithm) to obtain hand velocity and acceleration (Figure
<xref ref-type="fig" rid="F2">2</xref>
). Each trial was visually inspected to ensure that eye movements did not occur while the target was presented (Figure
<xref ref-type="fig" rid="F2">2</xref>
C). If they did occur, the trial was removed from the analysis. Approximately 5% of trials (384 of 7560 trials) were removed due to eye movements. Hand movement onset and offset were identified based on a hand acceleration criterion (500 mm/s
<sup>2</sup>
), and could be adjusted after visual inspection (Figure
<xref ref-type="fig" rid="F2">2</xref>
E). The movement angle was calculated through regression of the data points from the initial hand movement until the hand crossed the 10-cm circle around the IHP cross. Directional movement error was calculated as the difference between overall movement angle and visual target angle.</p>
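A minimal sketch of such a kinematic analysis pipeline is given below. It is not the authors' Matlab code; it assumes a second-order Butterworth filter as the forward-backward low-pass stage and uses the sampling rate, cutoff, acceleration threshold, and 10-cm criterion stated in the text:

import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000.0          # sampling rate (Hz)
CUTOFF = 50.0        # low-pass cutoff (Hz)
ACC_THRESH = 500.0   # movement onset/offset criterion (mm/s^2)

def reach_direction_error(pos_xy, target_angle_deg, radius_mm=100.0):
    # pos_xy: (n, 2) hand position in mm relative to the start cross, sampled at FS.
    b, a = butter(2, CUTOFF / (FS / 2.0))           # zero-phase (forward-backward) low-pass filter
    pos = filtfilt(b, a, pos_xy, axis=0)
    vel = np.gradient(pos, 1.0 / FS, axis=0)        # central-difference velocity
    acc = np.gradient(vel, 1.0 / FS, axis=0)        # central-difference acceleration
    acc_mag = np.linalg.norm(acc, axis=1)
    moving = np.where(acc_mag > ACC_THRESH)[0]      # samples exceeding the acceleration criterion
    onset, offset = moving[0], moving[-1]
    # Movement angle: fit of the points from movement onset until the 10-cm circle is crossed.
    dist = np.linalg.norm(pos - pos[onset], axis=1)
    crossed = onset + np.argmax(dist[onset:] >= radius_mm)
    seg = pos[onset:crossed + 1] - pos[onset]
    _, _, vt = np.linalg.svd(seg - seg.mean(axis=0), full_matrices=False)  # principal movement axis
    direction = vt[0] * np.sign(vt[0] @ seg[-1])    # orient the axis along the movement
    angle = np.degrees(np.arctan2(direction[1], direction[0]))
    error = (angle - target_angle_deg + 180.0) % 360.0 - 180.0   # signed directional error (degrees)
    return onset, offset, error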
<fig id="F2" position="float">
<label>Figure 2</label>
<caption>
<p>
<bold>Typical subject trial</bold>
.
<bold>(A)</bold>
Raw reach data from a typical trial. The viewed required reach (dotted line) begins at the cross (visual start position, VSP) and ends at the target (open circle). The red line represents the subject's actual hand position. The subject starts this reach with an initial hand position (IHP) offset to the right by 25 mm.
<bold>(B)</bold>
Target onset and display. Timing of the trial begins when the subject aligns the hand cursor with the visual start position. The target then appears, and remains on until the end of the trial. Movement onset, as well as offset times are shown by the vertical lines.
<bold>(C)</bold>
Eye movement traces. Horizontal (purple) and vertical (green) eye movement traces, from EOG recordings. Subjects were instructed to keep the eyes fixated on the VSP for the entire length of the trial. Black vertical lines indicate arm movement start and end.
<bold>(D)</bold>
Hand position traces. Horizontal (purple) and vertical (green) hand positions (solid lines) as well as the horizontal and vertical target position (dotted lines) are plotted over time.
<bold>(E)</bold>
Hand velocity traces relative to time.</p>
</caption>
<graphic xlink:href="fnhum-04-00221-g002"></graphic>
</fig>
</sec>
<sec>
<title>Modeling the initial movement direction</title>
<p>The data was fitted to two models, one previously published velocity command model (Sober and Sabes,
<xref ref-type="bibr" rid="B52">2003</xref>
) and a second fully Bayesian model that had processing steps similar to Sober and Sabes (
<xref ref-type="bibr" rid="B52">2003</xref>
). In addition, the second, new model includes explicit reference frame transformations and – more importantly – explicit transformations of the sensory noise throughout the model. Explicit noise has previously been used to determine multi-sensory integration weights (McGuire and Sabes,
<xref ref-type="bibr" rid="B44">2009</xref>
); however, they only considered one-dimensional cases (we model the problem in 2D). Furthermore, they did not model reference frame transformations explicitly, nor model movement variability in the output (nor analyze movement variability in the data). Below, we outline the general working principle of the model; please refer to Appendix 1 for model details.</p>
<p>The purpose of these models was to determine the relative weighting of both vision and proprioception during reach planning, separately for each head roll angle. Sober and Sabes (
<xref ref-type="bibr" rid="B52">2003</xref>
,
<xref ref-type="bibr" rid="B53">2005</xref>
) proposed that IHP is computed twice, once in visual and once in proprioceptive coordinates (Figure
<xref ref-type="fig" rid="F3">3</xref>
A). In order to determine the IHP in visual coordinates (motor planning stage, left dotted box in Figure
<xref ref-type="fig" rid="F3">3</xref>
A), proprioceptive information about the hand must be transformed into visual coordinates (Figure
<xref ref-type="fig" rid="F3">3</xref>
A, red “T” box) using head orientation information. Both the visual and the transformed proprioceptive information are then weighted based on reliability, and IHP is calculated. This IHP can then be subtracted from the target position to create a desired movement vector (Δ
<italic>x</italic>
). If the hand position is misestimated (due to IHP offset), then there will be an error associated with the desired movement vector.</p>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption>
<p>
<bold>Multi-sensory integration model</bold>
.
<bold>(A)</bold>
Model for multi-sensory integration for reach planning. In order to successfully complete a reach, hand position estimates have to be calculated in both proprioceptive (right dotted box) and visual coordinates (left dotted box). Initial hand position estimates in visual coordinates are computed by transforming proprioceptive information into visual coordinates (transformation “T”). Visual and proprioceptive information is then weighted and combined (visual weights α; proprioceptive weights 1 − α). The same processes take place for proprioceptive IHP estimates; only this time visual information is transformed into proprioceptive coordinates. Subtracting visual IHP from the visual target location, a movement vector can be created. Using inverse kinematics, the movement vector is combined with the calculation of initial joint angles, derived from the IHP in proprioceptive coordinates to create a movement plan based on changes in joint angles.
<bold>(B)</bold>
Spatial arm position (x) can be characterized in terms of two joint angles, deviation from straight ahead (θ
<sub>1</sub>
) and upper arm elevation (θ
<sub>2</sub>
). The arm and forearm lengths are represented by “L” (see text for details).</p>
</caption>
<graphic xlink:href="fnhum-04-00221-g003"></graphic>
</fig>
<p>As a final processing step, this movement vector will undergo a transformation to be represented in a shoulder-based reference frame (Figure
<xref ref-type="fig" rid="F3">3</xref>
A,
<italic>T</italic>
<sub>V→P</sub>
box). Initial joint angles are calculated by transforming visual information about hand location into proprioceptive coordinates (Figure
<xref ref-type="fig" rid="F3">3</xref>
A, rightward arrows through red “T” box). This information is weighted along with the proprioceptive information, to calculate IHP in proprioceptive coordinates (right dotted box in Figure
<xref ref-type="fig" rid="F3">3</xref>
A) and is used to create an estimate of initial elbow and shoulder joint angles (θ initial). Using inverse kinematics, a change in joint angles (Δθ) from the initial starting position to the target is calculated based on the desired movement vector. Since the estimate of initial joint angles (θ initial) is needed to compute the inverse kinematics, misestimation of initial joint angles will lead to errors associated with the inverse kinematics, and therefore error in the movement. We wanted to see how changing head roll would affect the weighting of visual and proprioceptive information. As can be seen from Figure
<xref ref-type="fig" rid="F3">3</xref>
A, our model reflects the idea that head orientation affects this transformation. This is because we hypothesize (and hope to demonstrate through our data) that transformations add noise to the transformed signal and that the amount of this noise depends on the amplitude of the head roll angle. Therefore, we predict that head roll has a significant effect on the estimations of IHP, thus changing the multi-sensory integration weights and in turn affecting the accuracy of the movement plan.</p>
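The planning stages sketched in Figure 3A can be illustrated with a small example (hypothetical Python; the two-link planar geometry, link lengths, and positions below are assumptions for illustration, not the article's fitted model):

import numpy as np

L1, L2 = 300.0, 350.0   # assumed upper arm and forearm lengths (mm)

def forward_kinematics(theta):
    # Planar two-link arm: (shoulder, elbow) angles -> hand position relative to the shoulder.
    s, e = theta
    return np.array([L1 * np.cos(s) + L2 * np.cos(s + e),
                     L1 * np.sin(s) + L2 * np.sin(s + e)])

def inverse_kinematics(x):
    # Analytic inverse kinematics for the same arm (one elbow configuration).
    cos_e = (x @ x - L1**2 - L2**2) / (2 * L1 * L2)
    e = np.arccos(np.clip(cos_e, -1.0, 1.0))
    s = np.arctan2(x[1], x[0]) - np.arctan2(L2 * np.sin(e), L1 + L2 * np.cos(e))
    return np.array([s, e])

def plan_reach(target_vis, ihp_vis, ihp_prop):
    # Visual stage: desired movement vector from the visually fused initial hand position.
    dx = target_vis - ihp_vis
    # Proprioceptive stage: initial joint angles from the proprioceptively fused hand position,
    # then inverse kinematics gives the joint-angle change that implements the movement vector.
    theta_init = inverse_kinematics(ihp_prop)
    theta_end = inverse_kinematics(ihp_prop + dx)
    return theta_end - theta_init

# Positions in mm relative to the shoulder (illustrative). If the two IHP estimates disagree
# (as with the imposed offset), the resulting delta-theta plan inherits the misestimation.
dtheta = plan_reach(target_vis=np.array([0.0, 500.0]),
                    ihp_vis=np.array([0.0, 400.0]),
                    ihp_prop=np.array([25.0, 400.0]))
print(np.degrees(dtheta))
print(forward_kinematics(inverse_kinematics(np.array([25.0, 400.0]))))  # round-trip check of the arm model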
</sec>
</sec>
<sec>
<title>Results</title>
<p>To test the model's predictions, we asked participants to perform reaching movements while we varied head roll and dissociated visual and proprioceptive IHPs.</p>
<sec>
<title>General observations</title>
<p>A total of 7560 trials were collected, with 384 trials being excluded due to eye movements. Subjects were unaware of the shifts in IHP. We used reaching errors to determine how subjects weighted visual and proprioceptive information. Reach error (in angular degrees) was computed as the angle between the movement and the visual hand–target vector, where 0° error would mean no deviation from the visual hand–target direction. As a result of the shifts in the actual starting hand locations, a situation was created where the subject received conflicting visual and proprioceptive information (Figure
<xref ref-type="fig" rid="F2">2</xref>
). Based on how the subject responded to this discrepancy, we could determine how information was weighted and integrated.</p>
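<p>As a concrete illustration of this error measure, the sketch below computes the signed angle between a movement vector and the visual hand–target vector; the function name, the use of the reach endpoint to define the movement vector, and the sign convention are our assumptions for illustration, not the original analysis code:</p>
<preformat>
import numpy as np

def reach_error_deg(start_vis, target, endpoint):
    """Signed angle (deg) between the movement vector and the visual hand-target vector."""
    v_move = np.asarray(endpoint, float) - np.asarray(start_vis, float)
    v_ref = np.asarray(target, float) - np.asarray(start_vis, float)
    err = np.degrees(np.arctan2(v_move[1], v_move[0]) - np.arctan2(v_ref[1], v_ref[0]))
    return (err + 180.0) % 360.0 - 180.0   # wrap into (-180, 180]

# Example: a reach that ends slightly clockwise of the visual hand-target direction
print(reach_error_deg(start_vis=(0.0, 0.0), target=(0.0, 100.0), endpoint=(5.0, 99.0)))
</preformat>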
<p>Figure
<xref ref-type="fig" rid="F4">4</xref>
displays nine sets of raw reach data from a typical subject, depicting 10 reaches to each target. Every tenth data point is plotted for each reach, i.e., successive plotted points are 10 ms apart, which makes changes in speed visually identifiable. The targets are symbolized by black circles, with the visual start position marked by a cross. Each set of reaches corresponds to a particular head roll angle (rows) and IHP (columns). One can already observe from these raw traces that this subject weighted visual IHP more than proprioceptive information, resulting in movement paths that are approximately parallel to a virtual line between the visual cross and the target locations.</p>
<fig id="F4" position="float">
<label>Figure 4</label>
<caption>
<p>
<bold>Raw reaches from a typical subject</bold>
. Each grouping of reaches corresponds to a particular head orientation (30°, 0°, −30°) and initial hand position (−25, 0, 25 mm). In each block, ten trials are plotted for each target (black dots). Target angles are 60°, 90°, 120°, 240°, 270°, and 300°. The data points for each reach trajectory represent every tenth data point.</p>
</caption>
<graphic xlink:href="fnhum-04-00221-g004"></graphic>
</fig>
<p>To further analyze this behavior, we compared the reach error (in degrees) for each hand offset condition (Figure
<xref ref-type="fig" rid="F5">5</xref>
A). This graph also displays a breakdown of the data for each target angle and shows a shift in reach errors between the different IHPs. The difference in reach errors between each of the hand offsets indicates that both visual and proprioceptive information were used during reach planning. Figure
<xref ref-type="fig" rid="F5">5</xref>
B shows a fit from Sober and Sabes’ (
<xref ref-type="bibr" rid="B52">2003</xref>
,
<xref ref-type="bibr" rid="B53">2005</xref>
) previously proposed model to the normalized data from Figure
<xref ref-type="fig" rid="F5">5</xref>
A (see also Appendix 1 for model details). The data from Figure
<xref ref-type="fig" rid="F5">5</xref>
A were normalized to 0 by subtracting the 0 hand offset from the IHPs 25 and −25 mm. Sober and Sabes’ (
<xref ref-type="bibr" rid="B52">2003</xref>
) previously proposed velocity command model fit our data well. In Figure
<xref ref-type="fig" rid="F5">5</xref>
B, it is clear that the normalized data points for each hand position follow the same pattern as the model predicted error, represented by the dotted lines. Based on this close fit of our data to the model, we can now use this model in a first step to investigate how head roll affects the weighting of vision and proprioceptive information about the hand.</p>
<fig id="F5" position="float">
<label>Figure 5</label>
<caption>
<p>
<bold>Hand offset effects</bold>
.
<bold>(A)</bold>
This graph demonstrates a shift in reach errors between each initial hand position (−25, 0, 25 mm), suggesting that visual and proprioceptive information are both used when reaching.
<bold>(B)</bold>
Model fit. The data from A was normalized to 0, and plotted against the model fit (dotted lines; initial hand position 25 (green) and −25 (red)). The squared points represent the normalized direction error for each initial hand position at each target angle. This graph demonstrates that the model previously proposed by Sober and Sabes (
<xref ref-type="bibr" rid="B52">2003</xref>
) fits our data. Error bars represent standard error of the mean.</p>
</caption>
<graphic xlink:href="fnhum-04-00221-g005"></graphic>
</fig>
</sec>
<sec>
<title>Head roll influences on reach errors</title>
<p>As mentioned before, subjects performed the experiment described above for each head roll condition, i.e., −30°, 0°, and 30° head roll (to the left shoulder, upright, and to the right shoulder, respectively). We assumed that if head roll was not taken into account, there would be no difference in the reach errors between the head roll conditions. Alternatively, if head roll was accounted for, then we would expect at least two distinct influences of head roll. First, head roll estimation might not be accurate, which would lead to an erroneous rotation of the visual information into proprioceptive coordinates. This would be reflected in an overall up/downward shift of the reach error curve for eccentric head roll angles compared to the head straight-ahead condition. Second, head roll estimation might not be very precise, i.e., not very reliable. In that case, variability in the estimation should affect motor planning and thus increase overall movement variability and, in particular, alter the multi-sensory integration weights. We will test these predictions below. Figure
<xref ref-type="fig" rid="F6">6</xref>
shows differences in reach errors between the different head roll conditions, indicating that head roll was a factor influencing reach performance. This is a novel finding that has never been considered in any previous model.</p>
<fig id="F6" position="float">
<label>Figure 6</label>
<caption>
<p>
<bold>Head roll effects</bold>
. Reach errors are plotted for each head roll (−30°, 0°, 30°) at each target angle (60°, 90°, 120°, 240°, 270°, 300°). The difference in reach error indicates that information about head roll is taken into account. Error bars represent standard error of the mean.</p>
</caption>
<graphic xlink:href="fnhum-04-00221-g006"></graphic>
</fig>
<p>From our model (Figure
<xref ref-type="fig" rid="F3">3</xref>
A), we predicted that as head roll moves away from 0, more noise would be associated with the signal (Scott and Loeb,
<xref ref-type="bibr" rid="B51">1994</xref>
; Blohm and Crawford,
<xref ref-type="bibr" rid="B4">2007</xref>
; Tarnutzer et al.,
<xref ref-type="bibr" rid="B56">2009</xref>
). This increase in noise should affect the overall movement variability (i.e., standard deviation, SD) because more noise in the head roll signal should result in more noise added during the reference frame transformation process. Figure
<xref ref-type="fig" rid="F7">7</xref>
plots movement variability for trials where the head was upright compared to rolled to the left or right combined.</p>
<fig id="F7" position="float">
<label>Figure 7</label>
<caption>
<p>
<bold>Movement variability as a function of head roll</bold>
. For each initial hand position (−25, 0, 25 mm), movement variability (standard deviation) is compared between the head-straight (0°) and head-rolled (≠0°) conditions. Reaches with the head rolled had significantly more variability than reaches with the head straight.</p>
</caption>
<graphic xlink:href="fnhum-04-00221-g007"></graphic>
</fig>
<p>We performed a paired
<italic>t</italic>
-test between head roll and no head roll conditions across all seven subjects and all hand positions (21 standard deviation values per head roll condition). Across all three IHPs, movement variability was significantly greater when the head was rolled compared to when the head was straight (
<italic>t</italic>
(20) = −3.512,
<italic>p</italic>
 < 0.01). This was a first indicator that head roll introduced signal-dependent noise into motor planning, likely through noisier reference frame transformations (Sober and Sabes,
<xref ref-type="bibr" rid="B52">2003</xref>
,
<xref ref-type="bibr" rid="B53">2005</xref>
; Blohm and Crawford,
<xref ref-type="bibr" rid="B4">2007</xref>
).</p>
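<p>For readers who want to reproduce this comparison, a minimal sketch of the test is given below, assuming the 21 per-subject, per-hand-position reach SDs for each condition have already been collected into arrays (the values generated here are placeholders, not the measured data):</p>
<preformat>
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sd_head_upright = rng.uniform(2.0, 4.0, size=21)                   # placeholder SD values
sd_head_rolled = sd_head_upright + rng.uniform(0.0, 1.0, size=21)  # placeholder SD values

t, p = stats.ttest_rel(sd_head_upright, sd_head_rolled)
print(f"paired t-test: t({sd_head_upright.size - 1}) = {t:.3f}, p = {p:.4f}")
</preformat>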
<p>If changing the head roll angle ultimately affects reach variability, then we would expect that the information associated with the increased noise would be weighted less at the multi-sensory integration step. To test this, we fitted Sober and Sabes’ (
<xref ref-type="bibr" rid="B52">2003</xref>
) model to our data independently for each head orientation. The visual weights of IHPs represented in visual (dark blue, α
<sub>vis</sub>
) and proprioceptive (light blue, α
<sub>prop</sub>
) coordinates are displayed in Figure
<xref ref-type="fig" rid="F8">8</xref>
A. The visual weights of IHP in visual and proprioceptive coordinates were significantly different when the head was rolled compared to the head straight condition (
<italic>t</italic>
(20) = −4.217,
<italic>p</italic>
< 0.01). Visual information was weighted more heavily when IHP was calculated in visual coordinates compared to proprioceptive coordinates. Furthermore, visual information was weighted significantly more for IHP in visual coordinates for head rolled conditions, compared to head straight. In contrast, visual information was weighted significantly less when the IHP was calculated in proprioceptive coordinates for head rolled conditions compared to head straight. This finding is representative of the fact that information that undergoes a noisy transformation is weighted less due to the noise added by this transformation, e.g., vision is weighted less in proprioceptive as opposed to visual coordinates (Sober and Sabes,
<xref ref-type="bibr" rid="B52">2003</xref>
,
<xref ref-type="bibr" rid="B53">2005</xref>
). A further reduction in the weighting of the transformed signal will occur if head roll is introduced, presumably due to signal-dependent noise (see
<xref ref-type="sec" rid="s1">Discussion</xref>
section).</p>
<fig id="F8" position="float">
<label>Figure 8</label>
<caption>
<p>
<bold>Model fit and rotational biases</bold>
.
<bold>(A)</bold>
The model was fit to the data for each head roll condition. The visual weights for initial hand position estimates are plotted for both visual (dark blue) and proprioceptive (light blue) coordinate frames. Significant differences are denoted by the *(
<italic>p</italic>
 < 0.05).
<bold>(B)</bold>
Rotational biases are plotted and compared for each head roll position (−30°, 0°, 30°).</p>
</caption>
<graphic xlink:href="fnhum-04-00221-g008"></graphic>
</fig>
<p>In addition to accounting for head roll noise, the reference frame transformation also has to estimate the amplitude of the head roll angle. Any misestimation in head roll angle will lead to a rotational movement error. Figure
<xref ref-type="fig" rid="F8">8</xref>
B plots the rotation biases (i.e., the overall rotation in movement direction relative to the visual hand–target vector) for each head roll position. The graph shows that there is a rotational bias for reaching movements even for 0° head roll angle. This bias changes depending on head roll. There were significant differences between the rotational biases for head roll conditions compared to head straight (
<italic>t</italic>
(20) > 6.891,
<italic>p</italic>
 < 0.01).</p>
</sec>
<sec>
<title>Modeling noisy reference frame transformations</title>
<p>We developed a full Bayesian model of multi-sensory integration for reach planning. This model uses proprioceptive and visual IHP estimates and combines them in a statistically optimal way, separately in two different representations (Sober and Sabes,
<xref ref-type="bibr" rid="B52">2003</xref>
,
<xref ref-type="bibr" rid="B53">2005</xref>
): proprioceptive coordinates and visual coordinates (Figure
<xref ref-type="fig" rid="F3">3</xref>
B). The IHP estimate in visual coordinates is compared to the target position to compute the desired movement vector, while the IHP estimate in proprioceptive coordinates is needed to translate (through inverse kinematics) this desired movement vector into a change of joint angles using a velocity command model. For optimal movement planning, not only are the point estimates in these two reference frames required, but the expected noise in those estimates is also needed (see Appendix).</p>
<p>Compared to previous models (Sober and Sabes,
<xref ref-type="bibr" rid="B52">2003</xref>
,
<xref ref-type="bibr" rid="B53">2005</xref>
), our model includes two crucial additional features. First, we explicitly include the required reference transformations (Figure
<xref ref-type="fig" rid="F3">3</xref>
A, “T”) from proprioceptive to visual coordinates (and vice versa), including the forward/inverse kinematics for transformation between Euclidean space and joint angles as well as for movement generation. The reference frame transformation T depends on an estimate of body geometry, i.e., head roll angle (Figure
<xref ref-type="fig" rid="F3">3</xref>
A, “H”) in our experiment. Second, in addition to modeling the mean behavior, we also include a full description of variability. Visual and proprioceptive sensory information have associated noise, i.e., proprioceptive and visual IHP as well as head roll angle. As a consequence, covariance matrices of all variables also have to undergo the above-mentioned transformations. In addition, these transformations themselves are noisy, i.e., they depend on noisy sensory estimates.</p>
<p>To illustrate how changes in transformation noise, visual noise, and joint angle variability affect predicted reach error, we used the model to simulate these conditions. We did this first to demonstrate that our model can reproduce the general movement error pattern produced by previous models (Sober and Sabes,
<xref ref-type="bibr" rid="B52">2003</xref>
) and second to show how different noise amplitudes in the sensory variables change this error pattern. Figure
<xref ref-type="fig" rid="F9">9</xref>
A displays the differences in predicted error between high, medium and low noise in the reference frame transformation. As the amount of transformation noise increases, the reach error decreases. The transformed signal in both visual and proprioceptive coordinates is weighted less in the presence of higher transformation noise. However, the misestimation of IHP in visual coordinates has a bigger impact on movement error than the IHP estimation in proprioceptive coordinates (Sober and Sabes,
<xref ref-type="bibr" rid="B52">2003</xref>
,
<xref ref-type="bibr" rid="B53">2005</xref>
). As a consequence, the gross effect of higher transformation noise is a decrease in movement error because the proprioceptive information will be weighted relatively less after it is converted into visual coordinates.</p>
<fig id="F9" position="float">
<label>Figure 9</label>
<caption>
<p>
<bold>Model simulations</bold>
. Each graph illustrates how different stimulus parameters affect the predicted reach error for each of the initial hand positions (−25, 0, 25).
<bold>(A)</bold>
Low, medium, and high transformation noise is compared.
<bold>(B)</bold>
Different magnitudes of visual noise affect predicted error in the high transformation noise condition.
<bold>(C)</bold>
Different amounts of noise associated with separate joint angles affects predicted error in the medium transformation noise condition.</p>
</caption>
<graphic xlink:href="fnhum-04-00221-g009"></graphic>
</fig>
<p>Figure
<xref ref-type="fig" rid="F9">9</xref>
B illustrates the effect of visual sensory noise (e.g. in situations such as seen versus remembered stimuli) on predicted error in a high transformation noise condition. When the amount of visual noise increases (visual reliability decreases), proprioceptive information will be weighted more, and predicted error will increase. Conversely, as visual noise decreases (reliability increases), predicted error will decrease as well. Differences seen between different movement directions (forward and backward) are due to an interaction effect of transformations for vector planning (visual coordinates) and movement execution (proprioceptive coordinates).</p>
<p>Not only does visual noise impact the predicted error; noise in the proprioceptive information does as well. Noise associated with different joint angles will result in proprioceptive information being weighted less than visual information, and as a result there will be a decrease in predicted error (Figure
<xref ref-type="fig" rid="F9">9</xref>
C). Figure
<xref ref-type="fig" rid="F9">9</xref>
C displays how changing the amount of noise associated with one joint angle relative to the other changes the predicted error. For example, when the noise in θ<sub>1</sub> exceeds the noise in θ<sub>2</sub>, the signals indicating the arm's deviation from the straight-ahead position are noisier than the signals indicating upper arm elevation. In this situation, the predicted error will be smaller when the targets are straight ahead or behind, because the proprioceptive signals for the straight-ahead position are noisier and thus will be weighted less.</p>
<p>Figure
<xref ref-type="fig" rid="F10">10</xref>
displays the model fits to the data for both error (top panels) and variability (lower panels) graphs for each IHP (−25, 0, 25 mm), comparing the different head roll effects. The solid lines represent the model fit for each IHP, with the squared nodes representing the behavioral data for each target. The model fits are different for each head roll position, with 0 head roll falling in between the tilted head positions. The model predicts that −30° head roll and 30° head roll would have reach errors in opposite directions; this is consistent with the data. Furthermore, the model presents 0 head roll as having the least variability when reaching towards the visual targets, with the behavioral data following the same trend.</p>
<fig id="F10" position="float">
<label>Figure 10</label>
<caption>
<p>
<bold>Model fit comparing head roll conditions</bold>
. For each initial hand position (−25, 0, 25 mm) both reaching error and variability are plotted. The solid lines represent the model fit for each head roll; −30° (red), 0° (black), and 30° (green). The squared points represent the data sampled at each target angle (60°, 90°, 120°, 240°, 270°, 300°).</p>
</caption>
<graphic xlink:href="fnhum-04-00221-g010"></graphic>
</fig>
<p>In addition to modeling the effect of head roll on error and variability, we plotted the differences for IHP as well. Figure
<xref ref-type="fig" rid="F11">11</xref>
displays both error and variability graphs for each head roll condition (same plots as in Figure
<xref ref-type="fig" rid="F10">10</xref>
, but re-arranged according to head roll conditions). The reach errors for different IHPs changed in a systematic way; however, differences in variability between the IHPs are small and show a similar pattern of variability across movement directions.</p>
<fig id="F11" position="float">
<label>Figure 11</label>
<caption>
<p>
<bold>Model fit comparing different initial hand positions</bold>
. For each head roll angle (−30°, 0°, 30°) both reaching error and variability are plotted. The solid lines represent the model fit for each initial hand position; −25 (red), 0 (black), and 25 mm (green). The squared points represent the data sampled at each target angle (60°, 90°, 120°, 240°, 270°, 300°). Data re-arranged from Figure
<xref ref-type="fig" rid="F10">10</xref>
.</p>
</caption>
<graphic xlink:href="fnhum-04-00221-g011"></graphic>
</fig>
<p>Determining how head roll affects multi-sensory weights was the main goal of this experiment. Previously in this section we fitted Sober and Sabes’ (
<xref ref-type="bibr" rid="B52">2003</xref>
) original model to the data, and displayed the visual weights for IHP estimates for both visual and proprioceptive coordinate frames (Figure
<xref ref-type="fig" rid="F8">8</xref>
A). In our model, we did not explicitly fit those weights to the data; however, from the covariance matrices of the sensory signals, we could easily recover the multi-sensory weights (see Appendix 1). Since our model uses two-dimensional covariance matrices (a 2D environment allows a visual coordinate frame to be represented in
<italic>x</italic>
and
<italic>y</italic>
, and proprioceptive coordinates to be displayed by two joint angles), the recovered multi-sensory weights were also 2D matrices. We used the diagonal elements of those weight matrices as visual weights in visual (
<italic>x</italic>
and
<italic>y</italic>
) and proprioceptive (joint angles) coordinates. Figure
<xref ref-type="fig" rid="F12">12</xref>
displays significant differences (
<italic>t</italic>
(299) < −10,
<italic>p</italic>
 < 0.001) for all visual weights between head straight and head rolled conditions, except for θ
<sub>2</sub>
. Visual weights were higher for visual coordinates when the head is rolled. In contrast, visual weights decrease in proprioceptive coordinates when the head is rolled compared to the head straight condition. These results were very similar to the original model fits performed in Figure
<xref ref-type="fig" rid="F8">8</xref>
A. Thus, our model was able to simulate head roll dependent noise in reference frame transformations underlying reach planning and multi-sensory integration. More importantly, our data show that head roll dependent noise can influence multi-sensory integration in a way that is explained through context-dependent changes in added reference frame transformation noise.</p>
<fig id="F12" position="float">
<label>Figure 12</label>
<caption>
<p>
<bold>2D model fit for each head roll</bold>
. The model was fit to the data for each head roll condition. The visual weights for initial hand position estimates are plotted for visual (dark blue) and proprioceptive (light blue) coordinate frames with standard deviations. A 2D environment allows visual
<italic>x</italic>
and
<italic>y</italic>
and proprioceptive θ
<sub>1</sub>
and θ
<sub>2</sub>
to be weighted separately. There were significant differences in visual weighting between head straight and head roll conditions for all coordinate representations except proprioceptive θ
<sub>2</sub>
.</p>
</caption>
<graphic xlink:href="fnhum-04-00221-g012"></graphic>
</fig>
</sec>
</sec>
<sec sec-type="discussion" id="s1">
<title>Discussion</title>
<p>In this study, we analyzed the effect of context-dependent head roll on multi-sensory integration during a reaching task. We found that head roll influenced reach error and variability in a way that could be explained by signal-dependent noise in the coordinate matching transformation between visual and proprioceptive coordinates. To demonstrate this quantitatively, we developed the first integrated model of multi-sensory integration and reference frame transformations in a probabilistic framework. This shows that the brain has online knowledge of the reliability associated with each sensory variable and uses this information to plan motor actions in a statistically optimal fashion (in the Bayesian sense).</p>
<sec>
<title>Experimental findings</title>
<p>When we changed the hand offset, we found reach errors that were similar to previously published data in multi-sensory integration tasks (Sober and Sabes,
<xref ref-type="bibr" rid="B52">2003</xref>
,
<xref ref-type="bibr" rid="B53">2005</xref>
; McGuire and Sabes,
<xref ref-type="bibr" rid="B44">2009</xref>
) and were well described by Sober and Sabes’ (
<xref ref-type="bibr" rid="B52">2003</xref>
) model. In addition we also found changes in the pattern of reach errors across different head orientations. This was a new finding that previous models did not explore. There were multiple effects of head roll on reach errors. First, there was a slight rotational offset for the head straight condition, which could be a result of biomechanical biases, e.g., related to the posture of the arm. In addition, our model-based analysis showed that reach errors shifted with head roll. Our model accounted for this shift by assuming that head roll was over-estimated in the reference frame transformation during the motor planning process. The over-estimation of head roll could be explained by ocular counter-roll. Indeed, when the head is held in a stationary head roll position, ocular counter-roll compensates for a portion of the total head rotation (Collewijn et al.,
<xref ref-type="bibr" rid="B15">1985</xref>
; Haslwanter et al.,
<xref ref-type="bibr" rid="B30">1992</xref>
; Bockisch and Haslwanter,
<xref ref-type="bibr" rid="B6">2001</xref>
). This means that the reference frame transformation has to rotate the retinal image by less than the head roll angle. Not taking ocular counter-roll into account (or only partially accounting for it) would thus result in an over-rotation of the retinal image, similar to what we observed in our data. However, since we did not measure ocular torsion, we cannot evaluate this hypothesis.</p>
<p>Alternatively, an over-estimation of head roll could in theory be related to the effect of priors in head roll estimation. If for some reason the prior belief of the head angle is that head roll is large, then Bayesian estimation would predict a posterior in head roll estimation that is biased toward larger than actual angles. However, a rationale for such a bias is unclear and would be contrary to priors expecting no head tilt such as reported in the subjective visual verticality perception literature (Dyde et al.,
<xref ref-type="bibr" rid="B17">2006</xref>
).</p>
<p>The second effect of head roll was a change in movement variability. Non-zero head roll angles produced reaches with higher variability compared to reaches during upright head position. This occurred despite the fact that the quality of the sensory input from the eyes and arm did not change. We took this as evidence for head roll influencing the sensory-motor reference frame transformation. Since we assume head roll to have signal-dependent noise (see below), different head roll angles will result in different amounts of noise in the transformation.</p>
<p>Third and most importantly, head roll changed the multi-sensory weights both at the visual and proprioceptive processing stages. This finding was validated independently by fitting Sober and Sabes’ (
<xref ref-type="bibr" rid="B52">2003</xref>
) original model and our new full Bayesian reference frame transformation model to the data. This is evidence that head roll variability changes for different head roll angles and that this signal-dependent noise enters the reference frame transformation and adds to the transformed signal, thus making it less reliable. Therefore, the context of body geometry influences multi-sensory integration through stochastic processes in the involved reference frame transformations.</p>
<p>Signal-dependent head roll noise could arise from multiple sources. Indeed, head orientation can be derived from vestibular signals as well as muscle spindles in the neck. The vestibular system is an essential component for determining head position sense; specifically the otolith organs (utricle and saccule) respond to static head positions in relation to gravitational axes (Fernandez et al.,
<xref ref-type="bibr" rid="B23">1972</xref>
; Sadeghi et al.,
<xref ref-type="bibr" rid="B50">2007</xref>
). We suggest that the noise from the otoliths varies for different head roll orientations; such signal-dependent noise has previously been found in the eye movement system for extraretinal eye position signals (Gellman and Fletcher,
<xref ref-type="bibr" rid="B25">1992</xref>
; Li and Matin,
<xref ref-type="bibr" rid="B41">1992</xref>
). In addition, muscle spindles are found to be the most important component in determining joint position sense (Goodwin et al.,
<xref ref-type="bibr" rid="B27">1972</xref>
; Scott and Loeb,
<xref ref-type="bibr" rid="B51">1994</xref>
), with additional input from cutaneous and joint receptors (Clark and Burgess,
<xref ref-type="bibr" rid="B11">1975</xref>
; Gandevia and McCloskey,
<xref ref-type="bibr" rid="B24">1976</xref>
; Armstrong et al.,
<xref ref-type="bibr" rid="B2">2008</xref>
). Muscles found in the cervical section of the spine contain high densities of muscle spindles, enabling a relatively accurate representation of head position (Armstrong et al.,
<xref ref-type="bibr" rid="B2">2008</xref>
). In essence, as the head moves away from an upright position, more noise should be associated with the signal due to an increase in muscle spindle firing (Burke et al.,
<xref ref-type="bibr" rid="B9">1976</xref>
; Edin and Vallbo,
<xref ref-type="bibr" rid="B18">1990</xref>
; Scott and Loeb,
<xref ref-type="bibr" rid="B51">1994</xref>
; Cordo et al.,
<xref ref-type="bibr" rid="B14">2002</xref>
). However, due to the complex neck muscle arrangement, a detailed biomechanical model of the neck (Lee and Terzoloulos,
<xref ref-type="bibr" rid="B40">2006</xref>
) would be needed to corroborate this claim.</p>
</sec>
<sec>
<title>Model discussion</title>
<p>We have shown that noise affects the way reference frame transformations are performed in that transformed signals have increased variability. A similar observation has previously been made for eye movements (Li and Matin,
<xref ref-type="bibr" rid="B41">1992</xref>
; Gellman and Fletcher,
<xref ref-type="bibr" rid="B25">1992</xref>
) and visually guided reaching (Blohm and Crawford,
<xref ref-type="bibr" rid="B4">2007</xref>
). This validates a previous suggestion that any transformation of signals in the brain has a cost of added noise (Sober and Sabes,
<xref ref-type="bibr" rid="B52">2003</xref>
,
<xref ref-type="bibr" rid="B53">2005</xref>
). Therefore, the optimal way for the brain to process information would be to minimize the number of serial computational (or transformational) stages. The latter point might be the reason why multi-sensory comparisons could occur fewer times but in parallel at different stages in the processing hierarchy and in different coordinate systems (Körding and Tenenbaum,
<xref ref-type="bibr" rid="B35">2007</xref>
).</p>
<p>It has been suggested that in cases of virtual reality experiments, the visual cursor used to represent the hand could be considered as a tool attached to the hand (Körding and Tenenbaum,
<xref ref-type="bibr" rid="B35">2007</xref>
). As a consequence, there is additional uncertainty as to the tool length. This uncertainty adds to the overall uncertainty of the visual signals. We have not modeled this separately, as tool-specific uncertainty would simply add to the actual visual uncertainty (the variances add up). However, the estimated location of the cursor tool itself could be biased toward the hand; an effect that would influence the multi-sensory integration weights but that we cannot discriminate from our data.</p>
<p>In our model, multi-sensory integration occurred in specific reference frames, i.e. in visual and proprioceptive coordinates. Underlying this multiple comparison hypothesis is the belief that signals can only be combined if they are represented in the same reference frame (Lacquaniti and Caminiti,
<xref ref-type="bibr" rid="B36">1998</xref>
; Cohen and Andersen,
<xref ref-type="bibr" rid="B13">2002</xref>
; Engel et al.,
<xref ref-type="bibr" rid="B19">2002</xref>
; Buneo and Andersen,
<xref ref-type="bibr" rid="B7">2006</xref>
; McGuire and Sabes,
<xref ref-type="bibr" rid="B44">2009</xref>
). However, this claim has never been explicitly verified and this may not be the way neurons in the brain actually carry out multi-sensory integration. The brain could directly combine different signals across reference frames in largely parallel neural ensembles (Denève et al.,
<xref ref-type="bibr" rid="B16">2001</xref>
; Blohm et al.,
<xref ref-type="bibr" rid="B5">2009</xref>
), for example using gain modulation mechanisms (Andersen and Mountcastle,
<xref ref-type="bibr" rid="B1">1983</xref>
; Chang et al.,
<xref ref-type="bibr" rid="B12">2009</xref>
). Regardless of the way the brain integrates information, the behavioral output would likely look very similar. A combination of computational and electro-physiological studies would be required to distinguish these alternatives.</p>
<p>Our model is far from being complete. In transforming the statistical properties of the sensory signals through the different processing steps of movement planning, we only computed first-order approximations and hypothesized that all distributions remained Gaussian. This is of course a gross over-simplification; however, no statistical framework for arbitrary transformations of probability density functions exists. In addition, we only included relevant 2D motor planning computations. In the real world, this model would need to be expanded into 3D with all the added complexity (Blohm and Crawford,
<xref ref-type="bibr" rid="B4">2007</xref>
), i.e., non-commutative rotations, offset between rotation axes, non-linear sensory mappings and 3D behavioral constraints (such as Listing's law).</p>
</sec>
<sec>
<title>Implications</title>
<p>Our findings have implications for behavioral, perceptual, electrophysiological and brain imaging experiments. First, we have shown behaviorally, that body geometry signals can change the multi-sensory weightings in reach planning. Therefore, we also expect other contextual variables to have potential influences, such as gaze orientation, task/object value, or attention (Sober and Sabes,
<xref ref-type="bibr" rid="B53">2005</xref>
). Second, we have shown contextual influences on multi-sensory integration for action planning, but the question remains whether this is a generalized principle in the brain that would also influence perception.</p>
<p>Finally, our findings have implications for electrophysiological and brain imaging studies. Indeed, when identifying the function of brain areas, gain-like modulations in brain activity are often taken as an indicator for reference frame transformations. However, as previously noted (Denève et al.,
<xref ref-type="bibr" rid="B16">2001</xref>
), such modulations could also theoretically perform all kinds of other different functions involving the processing of different signals, such as attention, target selection or multi-sensory integration. Since all sensory and extra-sensory signals involved in these processes can be characterized by statistical distributions, computations involving these variables will evidently look like probabilistic population codes (Ma et al.,
<xref ref-type="bibr" rid="B42">2006</xref>
) – the suggested computational neuronal substrate of multi-sensory integration. Therefore, the only way to determine if a brain area is involved in multi-sensory integration is to generate sensory conflict and analyze the brain activity resulting from this situation in conjunction with behavioral performance (Nadler et al.,
<xref ref-type="bibr" rid="B46">2008</xref>
).</p>
</sec>
</sec>
<sec>
<title>Conclusions</title>
<p>In examining the effects of head roll on multi-sensory integration, we found that the brain incorporates contextual information about head position during a reaching task. We developed a new statistical model of reach planning combining reference frame transformations and multi-sensory integration to show that noisy reference frame transformations can alter the sensory reliability. This is evidence that the brain has online knowledge about the reliability of sensory and extra-sensory signals and includes this information into signal weighting, to ensure statistically optimal behavior.</p>
</sec>
<sec>
<title>Conflict of Interest Statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</body>
<back>
<ack>
<p>This work was supported by NSERC (Canada), CFI (Canada), the Botterell Fund (Queen's University, Kingston, ON, Canada) and ORF (Canada).</p>
</ack>
<app-group>
<app id="A1">
<title>Appendix</title>
<sec>
<title>Bayesian multi-sensory integration model</title>
<p>In the following sections, we describe the mathematical details of our model. We assume that each sensory variable has an estimate μ with associated Gaussian noise σ
<sup>2</sup>
. Joint angles will be denoted by θ whereas Euclidean variables are x. Vectors
<bold>x</bold>
are bold, matrices A are capitalized.</p>
</sec>
<sec>
<title>Forward/inverse kinematics</title>
<p>Figure
<xref ref-type="fig" rid="F3">3</xref>
B shows the arrangement of the body in the experimental setup with the hand at the IHP location. Since in our case the forearm was approximately parallel to the work surface (right panel of Figure
<xref ref-type="fig" rid="F3">3</xref>
B), we can fully characterize the spatial arm position
<bold>x</bold>
as a function of two joint angles
<bold>θ</bold>
, i.e., deviation from straight-ahead (θ
<sub>1</sub>
) and upper arm elevation (θ
<sub>2</sub>
):</p>
<disp-formula id="E1">
<label>(1)</label>
<mml:math id="M1">
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mo></mml:mo>
<mml:mi>sin</mml:mi>
<mml:mo></mml:mo>
<mml:msub>
<mml:mtext>θ</mml:mtext>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo></mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mi>sin</mml:mi>
<mml:mo></mml:mo>
<mml:msub>
<mml:mtext>θ</mml:mtext>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="E2">
<label>(2)</label>
<mml:math id="M2">
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mi>cos</mml:mi>
<mml:mo></mml:mo>
<mml:msub>
<mml:mtext>θ</mml:mtext>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo></mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mi>sin</mml:mi>
<mml:mo></mml:mo>
<mml:msub>
<mml:mtext>θ</mml:mtext>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<p>where L
<sub>1/2</sub>
are the upper arm and forearm lengths respectively. In order to compute the
<italic>inverse kinematic</italic>
transformation of the noise covariance matrix, we used a first-order Taylor expansion of
<bold>x</bold>
(
<bold>θ</bold>
) around current joint angles
<bold>θ</bold>
<sup>0</sup>
, i.e.,
$x_k(\boldsymbol{\theta}) \approx x_k(\boldsymbol{\theta}^0) + \sum_i \left.\left(\partial x_k / \partial\theta_i\right)\right|_{\boldsymbol{\theta}^0}\left(\theta_i - \theta_i^0\right)$. <bold>x</bold>
can then be written as a linear combination of
<bold>θ</bold>
, i.e.
<bold>
<italic>x</italic>
</bold>
 = A
<bold>θ</bold>
+ 
<bold>
<italic>b</italic>
</bold>
, with</p>
<disp-formula id="E3">
<label>(3)</label>
<mml:math id="M4">
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mo>=</mml:mo>
<mml:mi>A</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msup>
<mml:mtext mathvariant="bold">θ</mml:mtext>
<mml:mn>0</mml:mn>
</mml:msup>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo>=</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mi>cos</mml:mi>
<mml:mo></mml:mo>
<mml:msubsup>
<mml:mtext>θ</mml:mtext>
<mml:mn>1</mml:mn>
<mml:mn>0</mml:mn>
</mml:msubsup>
<mml:mo></mml:mo>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mi>sin</mml:mi>
<mml:mo></mml:mo>
<mml:msubsup>
<mml:mtext>θ</mml:mtext>
<mml:mn>2</mml:mn>
<mml:mn>0</mml:mn>
</mml:msubsup>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:mo></mml:mo>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mi>sin</mml:mi>
<mml:mo></mml:mo>
<mml:msubsup>
<mml:mtext>θ</mml:mtext>
<mml:mn>1</mml:mn>
<mml:mn>0</mml:mn>
</mml:msubsup>
<mml:mi>cos</mml:mi>
<mml:mo></mml:mo>
<mml:msubsup>
<mml:mtext>θ</mml:mtext>
<mml:mn>2</mml:mn>
<mml:mn>0</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mi>sin</mml:mi>
<mml:mo></mml:mo>
<mml:msubsup>
<mml:mtext>θ</mml:mtext>
<mml:mn>1</mml:mn>
<mml:mn>0</mml:mn>
</mml:msubsup>
<mml:mo></mml:mo>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mi>sin</mml:mi>
<mml:mo></mml:mo>
<mml:msubsup>
<mml:mtext>θ</mml:mtext>
<mml:mn>2</mml:mn>
<mml:mn>0</mml:mn>
</mml:msubsup>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mi>cos</mml:mi>
<mml:mo></mml:mo>
<mml:msubsup>
<mml:mtext>θ</mml:mtext>
<mml:mn>1</mml:mn>
<mml:mn>0</mml:mn>
</mml:msubsup>
<mml:mi>cos</mml:mi>
<mml:mo></mml:mo>
<mml:msubsup>
<mml:mtext>θ</mml:mtext>
<mml:mn>2</mml:mn>
<mml:mn>0</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<p>This allows us to write the covariance matrix ∑ of
<bold>x</bold>
as a function of the covariance matrix of
<bold>θ</bold>
, as:</p>
<disp-formula id="E4">
<label>(4)</label>
<mml:math id="M5">
<mml:mrow>
<mml:msub>
<mml:mo></mml:mo>
<mml:mi>x</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mi>A</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msup>
<mml:mtext>θ</mml:mtext>
<mml:mn>0</mml:mn>
</mml:msup>
<mml:mo stretchy="false">)</mml:mo>
<mml:msub>
<mml:mo></mml:mo>
<mml:mtext>θ</mml:mtext>
</mml:msub>
<mml:mi>A</mml:mi>
<mml:msup>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:msup>
<mml:mtext mathvariant="bold">θ</mml:mtext>
<mml:mn>0</mml:mn>
</mml:msup>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mtext>T</mml:mtext>
</mml:msup>
</mml:mrow>
</mml:math>
</disp-formula>
<p>The same approach can be used to compute the
<italic>forward kinematics</italic>
with</p>
<disp-formula id="E5">
<label>(5)</label>
<mml:math id="M6">
<mml:mrow>
<mml:msub>
<mml:mtext>θ</mml:mtext>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>tan</mml:mi>
<mml:mo></mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="E6">
<label>(6)</label>
<mml:math id="M7">
<mml:mrow>
<mml:msub>
<mml:mtext>θ</mml:mtext>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>sin</mml:mi>
<mml:mo></mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:msqrt>
<mml:mrow>
<mml:msubsup>
<mml:mi>x</mml:mi>
<mml:mn>1</mml:mn>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>+</mml:mo>
<mml:msubsup>
<mml:mi>x</mml:mi>
<mml:mn>2</mml:mn>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:msqrt>
<mml:mo></mml:mo>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<p>and</p>
<disp-formula id="E7">
<label>(7)</label>
<mml:math id="M8">
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mo>=</mml:mo>
<mml:mi>A</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msup>
<mml:mi>x</mml:mi>
<mml:mn>0</mml:mn>
</mml:msup>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo>=</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mi>x</mml:mi>
<mml:mn>2</mml:mn>
<mml:mn>0</mml:mn>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mi>x</mml:mi>
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:msup>
<mml:mn>0</mml:mn>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:msubsup>
<mml:mo>+</mml:mo>
<mml:msubsup>
<mml:mi>x</mml:mi>
<mml:mn>2</mml:mn>
<mml:mrow>
<mml:msup>
<mml:mn>0</mml:mn>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mi>x</mml:mi>
<mml:mn>1</mml:mn>
<mml:mn>0</mml:mn>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mi>x</mml:mi>
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:msup>
<mml:mn>0</mml:mn>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:msubsup>
<mml:mo>+</mml:mo>
<mml:msubsup>
<mml:mi>x</mml:mi>
<mml:mn>2</mml:mn>
<mml:mrow>
<mml:msup>
<mml:mn>0</mml:mn>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mi>x</mml:mi>
<mml:mn>1</mml:mn>
<mml:mn>0</mml:mn>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo></mml:mo>
<mml:msqrt>
<mml:mrow>
<mml:msubsup>
<mml:mi>x</mml:mi>
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:msup>
<mml:mn>0</mml:mn>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:msubsup>
<mml:mo>+</mml:mo>
<mml:msubsup>
<mml:mi>x</mml:mi>
<mml:mn>2</mml:mn>
<mml:mrow>
<mml:msup>
<mml:mn>0</mml:mn>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
</mml:mfrac>
<mml:mo></mml:mo>
<mml:mi>c</mml:mi>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mi>x</mml:mi>
<mml:mn>2</mml:mn>
<mml:mn>0</mml:mn>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo></mml:mo>
<mml:msqrt>
<mml:mrow>
<mml:msubsup>
<mml:mi>x</mml:mi>
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:msup>
<mml:mn>0</mml:mn>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:msubsup>
<mml:mo>+</mml:mo>
<mml:msubsup>
<mml:mi>x</mml:mi>
<mml:mn>2</mml:mn>
<mml:mrow>
<mml:msup>
<mml:mn>0</mml:mn>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
</mml:mfrac>
<mml:mo></mml:mo>
<mml:mi>c</mml:mi>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<p>with</p>
<disp-formula id="E8">
<label>(8)</label>
<mml:math id="M9">
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mo>=</mml:mo>
<mml:msqrt>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo></mml:mo>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:msqrt>
<mml:mrow>
<mml:msubsup>
<mml:mi>x</mml:mi>
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:msup>
<mml:mn>0</mml:mn>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:msubsup>
<mml:mo>+</mml:mo>
<mml:msubsup>
<mml:mi>x</mml:mi>
<mml:mn>2</mml:mn>
<mml:mrow>
<mml:msup>
<mml:mn>0</mml:mn>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:msqrt>
<mml:mo></mml:mo>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
</mml:math>
</disp-formula>
<p>Then,</p>
<disp-formula id="E9">
<label>(9)</label>
<mml:math id="M10">
<mml:mrow>
<mml:msub>
<mml:mo></mml:mo>
<mml:mtext>θ</mml:mtext>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mtext>A</mml:mtext>
<mml:mo stretchy="false">(</mml:mo>
<mml:msup>
<mml:mi>x</mml:mi>
<mml:mn>0</mml:mn>
</mml:msup>
<mml:mo stretchy="false">)</mml:mo>
<mml:msub>
<mml:mo></mml:mo>
<mml:mi>X</mml:mi>
</mml:msub>
<mml:mtext>A</mml:mtext>
<mml:msup>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:msup>
<mml:mi>x</mml:mi>
<mml:mn>0</mml:mn>
</mml:msup>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mtext>T</mml:mtext>
</mml:msup>
</mml:mrow>
</mml:math>
</disp-formula>
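<p>A minimal numerical sketch of Eqs 1–4 (the propagation of joint-angle noise into Euclidean hand-position noise) is given below; the limb lengths, joint angles, and joint-angle variances are arbitrary example values, not the values used in the experiment:</p>
<preformat>
import numpy as np

L1, L2 = 0.30, 0.33  # upper arm and forearm lengths (m), example values

def hand_position(theta):
    """Hand position x from joint angles theta = (theta1, theta2); Eqs (1)-(2)."""
    t1, t2 = theta
    r = L1 * np.sin(t2) + L2
    return np.array([-np.sin(t1) * r, np.cos(t1) * r])

def jacobian_x_wrt_theta(theta):
    """A(theta0) of Eq (3): first-order sensitivity of x to the joint angles."""
    t1, t2 = theta
    r = L1 * np.sin(t2) + L2
    return np.array([[-np.cos(t1) * r, -L1 * np.sin(t1) * np.cos(t2)],
                     [-np.sin(t1) * r,  L1 * np.cos(t1) * np.cos(t2)]])

def joint_to_euclidean_cov(theta0, cov_theta):
    """Eq (4): propagate joint-angle covariance into hand-position covariance."""
    A = jacobian_x_wrt_theta(theta0)
    return A @ cov_theta @ A.T

theta0 = np.radians([10.0, 60.0])                              # example joint configuration
cov_theta = np.diag([np.radians(2.0)**2, np.radians(3.0)**2])  # example joint-angle variances
print(hand_position(theta0))
print(joint_to_euclidean_cov(theta0, cov_theta))
</preformat>
<p>The opposite direction (Eqs 5–9) follows the same pattern, with the Jacobian of Eq (7) evaluated at the current hand position.</p>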
</sec>
<sec>
<title>Reference frame transformation</title>
<p>In our case of head roll movements, the required shoulder-centered-to-retinal coordinate transformation (T) simply consists of a rotation of the angle θ
<sub>H</sub>
 = βH, where β is a gain factor and H is the estimate of the head roll angle. Euclidean position in visual coordinates (
<bold>x</bold>
<sub>V</sub>
) can thus be obtained from Euclidean position in proprioceptive coordinates (
<bold>x</bold>
<sub>P</sub>
) using
<bold>x</bold>
<sub>V</sub>
 = T
<bold>x</bold>
<sub>P</sub>
with</p>
<disp-formula id="E10">
<label>(10)</label>
<mml:math id="M11">
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo>=</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mi>cos</mml:mi>
<mml:mo></mml:mo>
<mml:msub>
<mml:mtext>θ</mml:mtext>
<mml:mi>H</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:mi>sin</mml:mi>
<mml:mo></mml:mo>
<mml:msub>
<mml:mtext>θ</mml:mtext>
<mml:mi>H</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mi>sin</mml:mi>
<mml:mo></mml:mo>
<mml:msub>
<mml:mtext>θ</mml:mtext>
<mml:mi>H</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:mi>cos</mml:mi>
<mml:mo></mml:mo>
<mml:msub>
<mml:mtext>θ</mml:mtext>
<mml:mi>H</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<p>Since head roll (H) and thus θ
<sub>H</sub>
are noisy variables, the transformation T introduces new noise on top of rotating the proprioceptive (P) covariance matrix into visual coordinates (V). We designed this new noise to be composed of a constant component
<inline-formula>
<mml:math id="M12">
<mml:mrow>
<mml:msubsup>
<mml:mo>σ</mml:mo>
<mml:mo>Τ</mml:mo>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>
simulating the fact that all transformations have a cost (Sober and Sabes,
<xref ref-type="bibr" rid="B52">2003</xref>
) and a head orientation signal-dependent component Σ
<sub>H</sub>
.</p>
<disp-formula id="E11">
<label>(11)</label>
<mml:math id="M13">
<mml:mrow>
<mml:msub>
<mml:mo></mml:mo>
<mml:mtext>V</mml:mtext>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mtext>T</mml:mtext>
<mml:msub>
<mml:mo></mml:mo>
<mml:mtext>P</mml:mtext>
</mml:msub>
<mml:msup>
<mml:mtext>T</mml:mtext>
<mml:mtext>T</mml:mtext>
</mml:msup>
<mml:mo>+</mml:mo>
<mml:msubsup>
<mml:mo>σ</mml:mo>
<mml:mtext>T</mml:mtext>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mi>I</mml:mi>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mo></mml:mo>
<mml:mtext>H</mml:mtext>
</mml:msub>
</mml:mrow>
</mml:math>
</disp-formula>
<p>From random matrix theory we know that any matrix can be decomposed into a constant and variable component, such that A = A
<sub>0</sub>
 + E, where A
<sub>0</sub>
has 0 variance and E 0 mean. Then, perturbation theory tells us that any linear transformation of a noisy variable
<bold>x</bold>
 = 
<bold>x</bold>
<sub>0</sub>
 + 
<bold>e</bold>
can be written
<bold>y</bold>
 = A
<bold>x</bold>
 = (A
<sub>0</sub>
 + E)(
<bold>x</bold>
<sub>0</sub>
 + 
<bold>e</bold>
) = A
<sub>0</sub>
<bold>x</bold>
<sub>0</sub>
 + A
<sub>0</sub>
<bold>e</bold>
+ E
<bold>x</bold>
<sub>0</sub>
 + E
<bold>e</bold>
. The covariance of
<bold>y</bold>
can then be approximated by the covariance of A
<sub>0</sub>
<bold>e</bold>
+ E
<bold>x</bold>
<sub>0</sub>
, since the covariance of E
<bold>e</bold>
is negligible and A
<sub>0</sub>
<bold>x</bold>
<sub>0</sub>
has 0 covariance. Thus
<inline-formula>
<mml:math id="M14">
<mml:mrow>
<mml:msub>
<mml:mo>Σ</mml:mo>
<mml:mtext>y</mml:mtext>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mtext>A</mml:mtext>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:msub>
<mml:mo>Σ</mml:mo>
<mml:mtext>x</mml:mtext>
</mml:msub>
<mml:msubsup>
<mml:mtext>A</mml:mtext>
<mml:mn>0</mml:mn>
<mml:mtext>T</mml:mtext>
</mml:msubsup>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mo>Σ</mml:mo>
<mml:mtext>E</mml:mtext>
</mml:msub>
<mml:mo>.</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>
In our case, the matrix Σ<sub>E</sub> represents the variability resulting from the fact that the angle of the reference frame transformation is itself variable. This adds variability in the direction orthogonal to <bold>y</bold>. Representing <bold>y</bold> in polar coordinates, $\mathbf{y} = r\,(\cos\theta_y,\ \sin\theta_y)^{\mathrm{T}}$, results in:</p>
<disp-formula id="E12">
<label>(12)</label>
<mml:math id="M16">
<mml:mtable columnalign="left">
<mml:mtr>
<mml:mtd>
<mml:msub>
<mml:mo></mml:mo>
<mml:mrow>
<mml:mtext>H</mml:mtext>
<mml:mo>,</mml:mo>
<mml:mtext>ij</mml:mtext>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mi>r</mml:mi>
<mml:mo>|</mml:mo>
<mml:mi>H</mml:mi>
<mml:mo>|</mml:mo>
<mml:msubsup>
<mml:mo>σ</mml:mo>
<mml:mi>H</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mo></mml:mo>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mo></mml:mo>
<mml:msub>
<mml:mtext>θ</mml:mtext>
<mml:mi>y</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mo></mml:mo>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mi>j</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mo></mml:mo>
<mml:msub>
<mml:mtext>θ</mml:mtext>
<mml:mi>y</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mtext></mml:mtext>
<mml:mo>=</mml:mo>
<mml:mi>r</mml:mi>
<mml:mo>|</mml:mo>
<mml:mi>H</mml:mi>
<mml:mo>|</mml:mo>
<mml:msubsup>
<mml:mo>σ</mml:mo>
<mml:mi>H</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi>sin</mml:mi>
<mml:mo></mml:mo>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:msub>
<mml:mtext>θ</mml:mtext>
<mml:mi>y</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:mi>cos</mml:mi>
<mml:mo></mml:mo>
<mml:msub>
<mml:mtext>θ</mml:mtext>
<mml:mi>y</mml:mi>
</mml:msub>
<mml:mi>sin</mml:mi>
<mml:mo></mml:mo>
<mml:msub>
<mml:mtext>θ</mml:mtext>
<mml:mi>y</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mi>cos</mml:mi>
<mml:mo></mml:mo>
<mml:msub>
<mml:mtext>θ</mml:mtext>
<mml:mi>y</mml:mi>
</mml:msub>
<mml:mi>sin</mml:mi>
<mml:mo></mml:mo>
<mml:msub>
<mml:mtext>θ</mml:mtext>
<mml:mi>y</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi>cos</mml:mi>
<mml:mo></mml:mo>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:msub>
<mml:mtext>θ</mml:mtext>
<mml:mi>y</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
<p>Note that, as expected, this term introduces errors perpendicular to the rotated vector. The reason for this is that variability in the rotation leads to noise only in the rotational direction around the transformed vector
<bold>y</bold>
.</p>
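<p>A minimal sketch of this transformation step (Eqs 10–12) follows; the gain β, the noise constants, and the scale factor r below are placeholder values, not the fitted model parameters:</p>
<preformat>
import numpy as np

def rotation(theta_h):
    """T of Eq (10): rotation by the (scaled) head roll angle."""
    c, s = np.cos(theta_h), np.sin(theta_h)
    return np.array([[c, s], [-s, c]])

def head_roll_noise(y, head_roll, var_h, r=1.0):
    """Signal-dependent term Sigma_H of Eq (12), evaluated at the rotated vector y."""
    th = np.arctan2(y[1], y[0])
    outer = np.array([[np.sin(th)**2,           np.cos(th) * np.sin(th)],
                      [np.cos(th) * np.sin(th), np.cos(th)**2]])
    return r * abs(head_roll) * var_h * outer

def transform_to_visual(x_p, cov_p, head_roll, beta=1.0, var_t=1e-4, var_h=1e-3):
    """Eq (11): rotate a proprioceptive estimate into visual coordinates and add
    a fixed transformation cost plus head-roll-dependent noise."""
    T = rotation(beta * head_roll)
    x_v = T @ x_p
    cov_v = T @ cov_p @ T.T + var_t * np.eye(2) + head_roll_noise(x_v, head_roll, var_h)
    return x_v, cov_v

x_p = np.array([0.05, 0.35])      # hand position in proprioceptive coordinates (m), example
cov_p = np.diag([1e-4, 1e-4])
for roll in np.radians([0.0, 30.0]):
    _, cov_v = transform_to_visual(x_p, cov_p, roll)
    print(np.round(cov_v, 6))
</preformat>
<p>With the head upright, the transformed covariance only carries the rotated proprioceptive noise plus the fixed transformation cost, whereas a 30° head roll adds the signal-dependent term of Eq (12) on top of it.</p>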
<p>The inverse transformation and associated covariance matrix can simply be computed by replacing the head roll angle H by –H.</p>
</sec>
<sec>
<title>Multi-sensory integration</title>
<p>At the heart of the model is the multi-sensory integration step that combines proprioceptive and visual sensory information. In our model (Figure
<xref ref-type="fig" rid="F3">3</xref>
A), this integration occurs twice: once in visual coordinates, as part of computing the visual desired movement vector, and once in proprioceptive coordinates, where it is required to transform the desired movement vector into a change in joint angles when determining the motor command. From basic multivariate Gaussian statistics, the mean
<bold>μ</bold>
and covariance Σ of the combined IHP estimate from vision (V) and proprioception (P) are given by:</p>
<disp-formula id="E13">
<label>(13)</label>
<mml:math id="M17">
<mml:mrow>
<mml:mo></mml:mo>
<mml:mo>=</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msubsup>
<mml:mo></mml:mo>
<mml:mtext>P</mml:mtext>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo>+</mml:mo>
<mml:msubsup>
<mml:mo></mml:mo>
<mml:mtext>V</mml:mtext>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="E14">
<label>(14)</label>
<mml:math id="M18">
<mml:mrow>
<mml:mtext mathvariant="bold">μ</mml:mtext>
<mml:mo>=</mml:mo>
<mml:mo></mml:mo>
<mml:mo></mml:mo>
<mml:msubsup>
<mml:mo></mml:mo>
<mml:mtext>P</mml:mtext>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:msub>
<mml:mo mathvariant="bold">μ</mml:mo>
<mml:mtext>P</mml:mtext>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mo></mml:mo>
<mml:mo></mml:mo>
<mml:msubsup>
<mml:mo></mml:mo>
<mml:mtext>V</mml:mtext>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:msub>
<mml:mo mathvariant="bold">μ</mml:mo>
<mml:mtext>V</mml:mtext>
</mml:msub>
</mml:mrow>
</mml:math>
</disp-formula>
<p>As mentioned above, this calculation is carried out twice, once in proprioceptive and once in visual coordinates. In visual coordinates, the sensory Euclidean visual information is combined with the transformed (forward kinematics and reference frame transformation) proprioceptive information (Euclidean). In proprioceptive coordinates, the sensory proprioceptive joint angles are combined with the visual information transformed into joint coordinates (inverse reference frame transformation and inverse kinematics).</p>
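<p>A minimal sketch of this integration step (Python for illustration; the original computations were done in Matlab) combines the two estimates by inverse-covariance weighting:</p>
<preformat>
import numpy as np

def fuse(mu_P, cov_P, mu_V, cov_V):
    """Reliability-weighted combination of proprioceptive (P) and visual (V)
    hand-position estimates (Eqs. 13 and 14)."""
    prec_P = np.linalg.inv(cov_P)                 # precision = inverse covariance
    prec_V = np.linalg.inv(cov_V)
    cov = np.linalg.inv(prec_P + prec_V)          # Eq. 13
    mu = cov @ (prec_P @ mu_P + prec_V @ mu_V)    # Eq. 14
    return mu, cov
</preformat>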
<p>To recover the weight matrix of the visual IHP estimate, we used $\Sigma\,\Sigma_\mathrm{V}^{-1}=\alpha$ and $\Sigma\,\Sigma_\mathrm{P}^{-1}=1-\alpha$; subtracting the second relation from the first gives $\alpha = \tfrac{1}{2}\left(\Sigma\,\Sigma_\mathrm{V}^{-1}-\Sigma\,\Sigma_\mathrm{P}^{-1}+I\right)$, where <italic>I</italic> is the identity matrix.</p>
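<p>Under the same conventions, the visual weight matrix can be computed directly from the two covariances (illustrative sketch):</p>
<preformat>
import numpy as np

def visual_weight(cov_P, cov_V):
    """Weight matrix applied to the visual IHP estimate, alpha = Sigma Sigma_V^(-1);
    with Sigma from Eq. 13 this equals 1/2 (Sigma Sigma_V^(-1) - Sigma Sigma_P^(-1) + I)."""
    cov = np.linalg.inv(np.linalg.inv(cov_P) + np.linalg.inv(cov_V))
    return cov @ np.linalg.inv(cov_V)
</preformat>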
</sec>
<sec>
<title>Final motor command</title>
<p>Once the IHP estimate from the previous step has been subtracted from the target location (
<inline-formula>
<mml:math id="M22">
<mml:mrow>
<mml:mtext>Δx</mml:mtext>
<mml:mo>=</mml:mo>
<mml:mtext>tar</mml:mtext>
<mml:mo>-</mml:mo>
<mml:mtext mathvariant="bold">μ,</mml:mtext>
<mml:msubsup>
<mml:mo>σ</mml:mo>
<mml:mrow>
<mml:mtext>Δ</mml:mtext>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>=</mml:mo>
<mml:msubsup>
<mml:mo>σ</mml:mo>
<mml:mtext>μ</mml:mtext>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>+</mml:mo>
<mml:msubsup>
<mml:mo>σ</mml:mo>
<mml:mrow>
<mml:mtext>tar</mml:mtext>
</mml:mrow>
<mml:mtext>2</mml:mtext>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>
), the resulting desired movement vector Δ
<bold>x</bold>
needs to be transformed into a motor command
<inline-formula>
<mml:math id="M23">
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>˙</mml:mo>
</mml:mover>
</mml:math>
</inline-formula>
. Here, we used a previously described velocity command model (Sober and Sabes,
<xref ref-type="bibr" rid="B52">2003</xref>
;
<xref ref-type="bibr" rid="B53">2005</xref>
) to perform this step as follows:</p>
<disp-formula id="E15">
<label>(15)</label>
<mml:math id="M24">
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>˙</mml:mo>
</mml:mover>
<mml:mo>=</mml:mo>
<mml:mi>J</mml:mi>
<mml:mtext>(</mml:mtext>
<mml:mo mathvariant="bold">θ</mml:mo>
<mml:mtext>)</mml:mtext>
<mml:msup>
<mml:mi>J</mml:mi>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mtext>1</mml:mtext>
</mml:mrow>
</mml:msup>
<mml:mtext>(</mml:mtext>
<mml:mover accent="true">
<mml:mtext mathvariant="bold">θ</mml:mtext>
<mml:mo stretchy="true">^</mml:mo>
</mml:mover>
<mml:mtext>)</mml:mtext>
<mml:mo>Δ</mml:mo>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:math>
</disp-formula>
<p>where <italic>J</italic> is the Jacobian of the system, $\boldsymbol{\theta}$ is the actual joint configuration and $\hat{\boldsymbol{\theta}}$ is the estimated joint configuration from the multi-sensory integration step in proprioceptive coordinates. The Jacobian matrix is defined as $J_{ik}(\boldsymbol{\theta}) = \partial x_i / \partial \theta_k$. In our case, the Jacobian and its inverse are:</p>
<disp-formula id="E16">
<label>(16)</label>
<mml:math id="M27">
<mml:mrow>
<mml:mi>J</mml:mi>
<mml:mtext>(</mml:mtext>
<mml:mo mathvariant="bold">θ</mml:mo>
<mml:mtext>)</mml:mtext>
<mml:mo>=</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mi>cos</mml:mi>
<mml:mo></mml:mo>
<mml:msub>
<mml:mtext>θ</mml:mtext>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo></mml:mo>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mi>sin</mml:mi>
<mml:mo></mml:mo>
<mml:msub>
<mml:mtext>θ</mml:mtext>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:mo></mml:mo>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mi>sin</mml:mi>
<mml:mo></mml:mo>
<mml:msub>
<mml:mtext>θ</mml:mtext>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mi>cos</mml:mi>
<mml:mo></mml:mo>
<mml:msub>
<mml:mtext>θ</mml:mtext>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mi>sin</mml:mi>
<mml:mo></mml:mo>
<mml:msub>
<mml:mtext>θ</mml:mtext>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo></mml:mo>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mi>sin</mml:mi>
<mml:mo></mml:mo>
<mml:msub>
<mml:mtext>θ</mml:mtext>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mi>cos</mml:mi>
<mml:mo></mml:mo>
<mml:msub>
<mml:mtext>θ</mml:mtext>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mi>cos</mml:mi>
<mml:mo></mml:mo>
<mml:msub>
<mml:mtext>θ</mml:mtext>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="E17">
<label>(17)</label>
<mml:math id="M28">
<mml:mrow>
<mml:msup>
<mml:mi>J</mml:mi>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mtext>(</mml:mtext>
<mml:mo mathvariant="bold">θ</mml:mo>
<mml:mtext>)</mml:mtext>
<mml:mo>=</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mi>cos</mml:mi>
<mml:mo></mml:mo>
<mml:msub>
<mml:mtext>θ</mml:mtext>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mi>sin</mml:mi>
<mml:mo></mml:mo>
<mml:msub>
<mml:mtext>θ</mml:mtext>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mi>sin</mml:mi>
<mml:mo></mml:mo>
<mml:msub>
<mml:mtext>θ</mml:mtext>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mi>sin</mml:mi>
<mml:mo></mml:mo>
<mml:msub>
<mml:mtext>θ</mml:mtext>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mi>sin</mml:mi>
<mml:mo></mml:mo>
<mml:msub>
<mml:mtext>θ</mml:mtext>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mi>cos</mml:mi>
<mml:mo></mml:mo>
<mml:msub>
<mml:mtext>θ</mml:mtext>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mi>cos</mml:mi>
<mml:mo></mml:mo>
<mml:msub>
<mml:mtext>θ</mml:mtext>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mi>cos</mml:mi>
<mml:mo></mml:mo>
<mml:msub>
<mml:mtext>θ</mml:mtext>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>.</mml:mo>
</mml:mrow>
</mml:math>
</disp-formula>
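<p>For concreteness, a short sketch of the two-link Jacobian of Eqs. 16 and 17 and the velocity command of Eq. 15 (Python, illustrative only; limb lengths as in the Model parameters section):</p>
<preformat>
import numpy as np

L1, L2 = 0.30, 0.45   # upper and lower arm lengths (m)

def jacobian(theta):
    t1, t2 = theta
    a = L1 * np.sin(t2) + L2
    b = L1 * np.cos(t2)
    return np.array([[-np.cos(t1) * a, -np.sin(t1) * b],
                     [-np.sin(t1) * a,  np.cos(t1) * b]])     # Eq. 16

def jacobian_inv(theta):
    t1, t2 = theta
    a = L1 * np.sin(t2) + L2
    b = L1 * np.cos(t2)
    return np.array([[-np.cos(t1) / a, -np.sin(t1) / a],
                     [-np.sin(t1) / b,  np.cos(t1) / b]])     # Eq. 17

def velocity_command(theta, theta_hat, dx):
    """Eq. 15: actual joint angles theta, estimated joint angles theta_hat."""
    return jacobian(theta) @ jacobian_inv(theta_hat) @ dx

# sanity check: jacobian(t) @ jacobian_inv(t) equals the identity when theta_hat == theta
</preformat>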
<p>To compute the covariance of the motor command, we need to propagate the variances through Eq. <xref ref-type="disp-formula" rid="E15">15</xref>. To do so, we first re-write Eq. <xref ref-type="disp-formula" rid="E15">15</xref> as $\dot{\mathbf{x}} = J(\boldsymbol{\theta})\,\Delta\boldsymbol{\theta}$ with $\Delta\boldsymbol{\theta} = J^{-1}(\hat{\boldsymbol{\theta}})\,\Delta\mathbf{x}$. Since $J(\boldsymbol{\theta})$ is a constant transformation matrix, the covariance matrix of the final motor command can be written as:</p>
<disp-formula id="E18">
<label>(18)</label>
<mml:math id="M31">
<mml:mrow>
<mml:msub>
<mml:mo></mml:mo>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>˙</mml:mo>
</mml:mover>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mtext></mml:mtext>
<mml:mi>J</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:mo mathvariant="bold">θ</mml:mo>
<mml:mo stretchy="false">)</mml:mo>
<mml:msub>
<mml:mo></mml:mo>
<mml:mrow>
<mml:mo>Δ</mml:mo>
<mml:mtext mathvariant="bold">θ</mml:mtext>
</mml:mrow>
</mml:msub>
<mml:mi>J</mml:mi>
<mml:msup>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mtext mathvariant="bold">θ</mml:mtext>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mtext>T</mml:mtext>
</mml:msup>
<mml:mo>.</mml:mo>
</mml:mrow>
</mml:math>
</disp-formula>
<p>It remains to calculate the covariance matrix $\Sigma_{\Delta\boldsymbol{\theta}}$ of the motor command expressed in joint angles. Since $J^{-1}(\hat{\boldsymbol{\theta}})$ depends on a noisy estimate of the joint angles in proprioceptive coordinates, we again have to apply random matrix theory to approximate the noise induced by $J^{-1}(\hat{\boldsymbol{\theta}})$, as follows:</p>
<disp-formula id="E19">
<label>(19)</label>
<mml:math id="M34">
<mml:mrow>
<mml:msub>
<mml:mo></mml:mo>
<mml:mrow>
<mml:mo>Δ</mml:mo>
<mml:mtext mathvariant="bold">θ</mml:mtext>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msup>
<mml:mi>J</mml:mi>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mo stretchy="false">(</mml:mo>
<mml:mover accent="true">
<mml:mtext mathvariant="bold">θ</mml:mtext>
<mml:mo stretchy="true">^</mml:mo>
</mml:mover>
<mml:mo stretchy="false">)</mml:mo>
<mml:msub>
<mml:mo></mml:mo>
<mml:mrow>
<mml:mtext>Δ</mml:mtext>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msup>
<mml:mi>J</mml:mi>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msup>
<mml:msup>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mover accent="true">
<mml:mtext mathvariant="bold">θ</mml:mtext>
<mml:mo stretchy="true">^</mml:mo>
</mml:mover>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mtext>T</mml:mtext>
</mml:msup>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mo></mml:mo>
<mml:mrow>
<mml:msup>
<mml:mi>J</mml:mi>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</disp-formula>
<p>The covariance matrix $\Sigma_{J^{-1}}$ associated with the noisy inverse Jacobian is computed similarly to Eq. <xref ref-type="disp-formula" rid="E12">12</xref>, as follows (using a multivariate Taylor expansion):</p>
<disp-formula id="E20">
<label>(20)</label>
<mml:math id="M36">
<mml:mrow>
<mml:msub>
<mml:mo></mml:mo>
<mml:mrow>
<mml:msup>
<mml:mi>J</mml:mi>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mo>,</mml:mo>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mo></mml:mo>
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>l</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mtext>Δ</mml:mtext>
<mml:msub>
<mml:mtext>θ</mml:mtext>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mo></mml:mo>
<mml:msub>
<mml:mover accent="true">
<mml:mtext>θ</mml:mtext>
<mml:mo>^</mml:mo>
</mml:mover>
<mml:mi>k</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mtext>Δ</mml:mtext>
<mml:msub>
<mml:mtext>θ</mml:mtext>
<mml:mi>j</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mo></mml:mo>
<mml:msub>
<mml:mover accent="true">
<mml:mtext>θ</mml:mtext>
<mml:mo>^</mml:mo>
</mml:mover>
<mml:mi>l</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:msubsup>
<mml:mo>σ</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mover accent="true">
<mml:mtext>θ</mml:mtext>
<mml:mo>^</mml:mo>
</mml:mover>
<mml:mi>k</mml:mi>
</mml:msub>
<mml:msub>
<mml:mover accent="true">
<mml:mtext>θ</mml:mtext>
<mml:mo>^</mml:mo>
</mml:mover>
<mml:mi>l</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:math>
</disp-formula>
<p>with</p>
<disp-formula id="E21">
<mml:math id="M37">
<mml:mtable columnalign="left">
<mml:mtr>
<mml:mtd>
<mml:mfrac>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mtext>Δ</mml:mtext>
<mml:msub>
<mml:mtext>θ</mml:mtext>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mo></mml:mo>
<mml:msubsup>
<mml:mover accent="true">
<mml:mtext>θ</mml:mtext>
<mml:mo>^</mml:mo>
</mml:mover>
<mml:mtext>1</mml:mtext>
<mml:mo></mml:mo>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mo>=</mml:mo>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mi>a</mml:mi>
</mml:mfrac>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mi>sin</mml:mi>
<mml:mo></mml:mo>
<mml:msubsup>
<mml:mover accent="true">
<mml:mtext>θ</mml:mtext>
<mml:mo>^</mml:mo>
</mml:mover>
<mml:mtext>1</mml:mtext>
<mml:mo></mml:mo>
</mml:msubsup>
<mml:mo></mml:mo>
<mml:mtext>Δ</mml:mtext>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mi>cos</mml:mi>
<mml:mo></mml:mo>
<mml:msubsup>
<mml:mover accent="true">
<mml:mtext>θ</mml:mtext>
<mml:mo>^</mml:mo>
</mml:mover>
<mml:mtext>1</mml:mtext>
<mml:mo></mml:mo>
</mml:msubsup>
<mml:mo></mml:mo>
<mml:mtext>Δ</mml:mtext>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mfrac>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mtext>Δ</mml:mtext>
<mml:msub>
<mml:mtext>θ</mml:mtext>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mo></mml:mo>
<mml:msubsup>
<mml:mover accent="true">
<mml:mtext>θ</mml:mtext>
<mml:mo>^</mml:mo>
</mml:mover>
<mml:mtext>2</mml:mtext>
<mml:mo></mml:mo>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mo>=</mml:mo>
<mml:mo></mml:mo>
<mml:mfrac>
<mml:mi>b</mml:mi>
<mml:mrow>
<mml:msup>
<mml:mi>a</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:mfrac>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>cos</mml:mi>
<mml:mo></mml:mo>
<mml:msubsup>
<mml:mover accent="true">
<mml:mtext>θ</mml:mtext>
<mml:mo>^</mml:mo>
</mml:mover>
<mml:mtext>1</mml:mtext>
<mml:mo></mml:mo>
</mml:msubsup>
<mml:mo></mml:mo>
<mml:mtext>Δ</mml:mtext>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mi>sin</mml:mi>
<mml:mo></mml:mo>
<mml:msubsup>
<mml:mover accent="true">
<mml:mtext>θ</mml:mtext>
<mml:mo>^</mml:mo>
</mml:mover>
<mml:mtext>1</mml:mtext>
<mml:mo></mml:mo>
</mml:msubsup>
<mml:mo></mml:mo>
<mml:mtext>Δ</mml:mtext>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mfrac>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mtext>Δ</mml:mtext>
<mml:msub>
<mml:mtext>θ</mml:mtext>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mo></mml:mo>
<mml:msubsup>
<mml:mover accent="true">
<mml:mtext>θ</mml:mtext>
<mml:mo>^</mml:mo>
</mml:mover>
<mml:mtext>1</mml:mtext>
<mml:mo></mml:mo>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mo>=</mml:mo>
<mml:mo></mml:mo>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mi>b</mml:mi>
</mml:mfrac>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>cos</mml:mi>
<mml:mo></mml:mo>
<mml:msubsup>
<mml:mover accent="true">
<mml:mtext>θ</mml:mtext>
<mml:mo>^</mml:mo>
</mml:mover>
<mml:mtext>1</mml:mtext>
<mml:mo></mml:mo>
</mml:msubsup>
<mml:mo></mml:mo>
<mml:mtext>Δ</mml:mtext>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mi>sin</mml:mi>
<mml:mo></mml:mo>
<mml:msubsup>
<mml:mover accent="true">
<mml:mtext>θ</mml:mtext>
<mml:mo>^</mml:mo>
</mml:mover>
<mml:mtext>1</mml:mtext>
<mml:mo></mml:mo>
</mml:msubsup>
<mml:mo></mml:mo>
<mml:mtext>Δ</mml:mtext>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mfrac>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mtext>Δ</mml:mtext>
<mml:msub>
<mml:mtext>θ</mml:mtext>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mo></mml:mo>
<mml:msubsup>
<mml:mover accent="true">
<mml:mtext>θ</mml:mtext>
<mml:mo>^</mml:mo>
</mml:mover>
<mml:mtext>2</mml:mtext>
<mml:mo></mml:mo>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mo>=</mml:mo>
<mml:mo></mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mi>sin</mml:mi>
<mml:mo></mml:mo>
<mml:msubsup>
<mml:mover accent="true">
<mml:mtext>θ</mml:mtext>
<mml:mo>^</mml:mo>
</mml:mover>
<mml:mtext>2</mml:mtext>
<mml:mo></mml:mo>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:msup>
<mml:mi>b</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:mfrac>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mi>sin</mml:mi>
<mml:mo></mml:mo>
<mml:msubsup>
<mml:mover accent="true">
<mml:mtext>θ</mml:mtext>
<mml:mo>^</mml:mo>
</mml:mover>
<mml:mtext>1</mml:mtext>
<mml:mo></mml:mo>
</mml:msubsup>
<mml:mo></mml:mo>
<mml:mtext>Δ</mml:mtext>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mi>cos</mml:mi>
<mml:mo></mml:mo>
<mml:msubsup>
<mml:mover accent="true">
<mml:mtext>θ</mml:mtext>
<mml:mo>^</mml:mo>
</mml:mover>
<mml:mtext>1</mml:mtext>
<mml:mo></mml:mo>
</mml:msubsup>
<mml:mo></mml:mo>
<mml:mtext>Δ</mml:mtext>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
<p>and using $a = L_1\sin\hat{\theta}'_2 + L_2$ and $b = L_1\cos\hat{\theta}'_2$, with $\hat{\theta}'_i$ being the predicted arm configuration after execution of the motor plan $\Delta\mathbf{x}$ (Eqs. <xref ref-type="disp-formula" rid="E5">5</xref> and <xref ref-type="disp-formula" rid="E6">6</xref>). Note that the $\sigma^2_{\hat{\theta}_k\hat{\theta}_l}$ are the elements of the covariance matrix of <bold>θ</bold> (from Eq. <xref ref-type="disp-formula" rid="E14">14</xref>).</p>
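<p>The covariance propagation of Eqs. 18-20 can be sketched as follows (Python, illustrative; it reuses the jacobian and jacobian_inv helpers shown after Eq. 17 and replaces the analytical derivatives of Eq. 21 by finite differences):</p>
<preformat>
import numpy as np

def motor_command_cov(theta, theta_hat, cov_theta_hat, cov_dx, dx, eps=1e-6):
    """Covariance of the motor command (Eqs. 18-20).
    All angle and position arguments are length-2 numpy arrays."""
    Jinv = jacobian_inv(theta_hat)
    cov_dtheta = Jinv @ cov_dx @ Jinv.T                 # first term of Eq. 19
    G = np.zeros((2, 2))                                # G[i, k] = d(dtheta_i)/d(theta_hat_k)
    for k in range(2):
        d = np.zeros(2)
        d[k] = eps
        G[:, k] = (jacobian_inv(theta_hat + d) @ dx - Jinv @ dx) / eps
    cov_dtheta += G @ cov_theta_hat @ G.T               # Sigma_J^-1 term, Eqs. 19-20
    J = jacobian(theta)
    return J @ cov_dtheta @ J.T                         # Eq. 18
</preformat>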
</sec>
<sec>
<title>Movement direction</title>
<p>We were only interested in the initial movement direction, as the model does not capture movement execution dynamics. Therefore, we transformed the final motor command
<inline-formula>
<mml:math id="M42">
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>˙</mml:mo>
</mml:mover>
</mml:math>
</inline-formula>
from Cartesian into polar coordinates. To transform both the mean and the covariance matrix into polar coordinates, we used the following formulae:</p>
<disp-formula id="E22">
<label>(21)</label>
<mml:math id="M43">
<mml:mrow>
<mml:mi>r</mml:mi>
<mml:mo>=</mml:mo>
<mml:msqrt>
<mml:mrow>
<mml:msubsup>
<mml:mi>x</mml:mi>
<mml:mn>1</mml:mn>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>+</mml:mo>
<mml:msubsup>
<mml:mi>x</mml:mi>
<mml:mn>2</mml:mn>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="E23">
<label>(22)</label>
<mml:math id="M44">
<mml:mrow>
<mml:mi>tan</mml:mi>
<mml:mo></mml:mo>
<mml:mtext> ϕ</mml:mtext>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:math>
</disp-formula>
<p>To obtain the variance of movement direction towards different targets, we rotated the covariance matrix by the angle of movement direction. For the maximum likelihood estimation (MLE) procedure described below, we then only used Σ
<sub>(
<bold>r</bold>
,φ),22</sub>
, i.e., the variance orthogonal to the movement angle, and transformed it into angular units.</p>
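<p>A minimal sketch of this conversion (Python, illustrative; the small-angle division by r^2 used here to convert to angular units is an assumption of this sketch):</p>
<preformat>
import numpy as np

def direction_variance(xdot_mean, xdot_cov):
    """Initial movement direction (Eqs. 21 and 22) and the variance orthogonal to it."""
    r = np.hypot(xdot_mean[0], xdot_mean[1])
    phi = np.arctan2(xdot_mean[1], xdot_mean[0])
    R = np.array([[ np.cos(phi), np.sin(phi)],
                  [-np.sin(phi), np.cos(phi)]])   # rotate into the movement-aligned frame
    cov_rot = R @ xdot_cov @ R.T
    var_orth = cov_rot[1, 1]                      # Sigma_(r,phi),22
    return phi, var_orth / r**2                   # small-angle conversion to angular units (assumption)
</preformat>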
</sec>
<sec>
<title>Model fitting: maximum likelihood estimation</title>
<p>To estimate the model parameters from the data, we used a standard maximum likelihood estimation procedure. To do so, we calculated the negative log-likelihood (
<italic>L</italic>
) of the data under the model, given the set of fitting parameters ρ:</p>
<disp-formula id="E24">
<label>(23)</label>
<mml:math id="M45">
<mml:mrow>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mtext>ρ</mml:mtext>
</mml:msub>
<mml:mo stretchy="false">(</mml:mo>
<mml:mtext>μ</mml:mtext>
<mml:mo>,</mml:mo>
<mml:mtext> </mml:mtext>
<mml:msup>
<mml:mo>σ</mml:mo>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mo>|</mml:mo>
<mml:mi>y</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo>=</mml:mo>
<mml:mo></mml:mo>
<mml:mfrac>
<mml:mi>n</mml:mi>
<mml:mn>2</mml:mn>
</mml:mfrac>
<mml:mi>ln</mml:mi>
<mml:mo></mml:mo>
<mml:mo stretchy="false">(</mml:mo>
<mml:mn>2</mml:mn>
<mml:mtext>π</mml:mtext>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo></mml:mo>
<mml:mfrac>
<mml:mi>n</mml:mi>
<mml:mn>2</mml:mn>
</mml:mfrac>
<mml:mi>ln</mml:mi>
<mml:mo></mml:mo>
<mml:mo stretchy="false">(</mml:mo>
<mml:msup>
<mml:mo>σ</mml:mo>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo></mml:mo>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:msup>
<mml:mo>σ</mml:mo>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:mfrac>
<mml:mstyle displaystyle="true">
<mml:msub>
<mml:mo></mml:mo>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo></mml:mo>
<mml:mtext>μ</mml:mtext>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:mstyle>
</mml:mrow>
</mml:math>
</disp-formula>
<p>where (μ, σ
<sup>2</sup>
) are the mean and variance resulting from the model given the parameter set ρ,
<italic>n</italic>
is the number of data points and
<bold>y</bold>
contains the data measured from the experiment. We can then search for the maximum likelihood estimate by minimizing
<italic>L</italic>
<sub>ρ</sub>
over the parameter space, as:</p>
<disp-formula id="E25">
<label>(24)</label>
<mml:math id="M46">
<mml:mrow>
<mml:mi>arg</mml:mi>
<mml:mo></mml:mo>
<mml:mtext></mml:mtext>
<mml:msub>
<mml:mrow>
<mml:mi>min</mml:mi>
<mml:mo></mml:mo>
</mml:mrow>
<mml:mtext>ρ</mml:mtext>
</mml:msub>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mtext>ρ</mml:mtext>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mtext>μ</mml:mtext>
<mml:mo>,</mml:mo>
<mml:msup>
<mml:mo>σ</mml:mo>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mo>|</mml:mo>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>.</mml:mo>
</mml:mrow>
</mml:math>
</disp-formula>
<p>These computations were carried out in Matlab R2007a (The Mathworks, Natick, MA, USA) using the fmincon.m function (for Eq. <xref ref-type="disp-formula" rid="E25">24</xref>).</p>
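<p>For illustration, an equivalent fit in Python (the study used Matlab's fmincon.m; the function and variable names below are placeholders):</p>
<preformat>
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(mu, sigma2, y):
    """Gaussian negative log-likelihood of Eq. 23."""
    y = np.asarray(y, dtype=float)
    n = y.size
    return (0.5 * n * np.log(2 * np.pi) + 0.5 * n * np.log(sigma2)
            + np.sum((y - mu) ** 2) / (2 * sigma2))

def fit_parameters(rho0, data, model_predictions):
    """Minimize the summed negative log-likelihood over the parameters rho (Eq. 24).
    model_predictions(rho, condition) must return the model's (mu, sigma2); data maps
    each condition to the measured initial movement directions (placeholder interface)."""
    def cost(rho):
        return sum(neg_log_likelihood(*model_predictions(rho, c), y)
                   for c, y in data.items())
    return minimize(cost, rho0, method="Nelder-Mead")
</preformat>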
<p>To fit Sober and Sabes’ (
<xref ref-type="bibr" rid="B52">2003</xref>
) original model to our data, we used a standard non-linear least-squares regression method. The model equations were the same as for the full model, but without considering variances or reference frame transformations. Visual and proprioceptive information were simply combined using scalar weights, as in Sober and Sabes (
<xref ref-type="bibr" rid="B52">2003</xref>
,
<xref ref-type="bibr" rid="B53">2005</xref>
).</p>
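<p>The scalar-weight combination used for this comparison reduces to a single line (illustrative sketch):</p>
<preformat>
def fuse_scalar(x_P, x_V, alpha):
    """Scalar weighting of vision and proprioception, as in Sober and Sabes (2003):
    one scalar weight per integration stage instead of covariance-based weight matrices."""
    return alpha * x_V + (1.0 - alpha) * x_P
</preformat>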
</sec>
<sec>
<title>Model parameters</title>
<p>Upper arm and lower arm (including fist) lengths were held constant at <italic>L</italic><sub>1</sub> = 30 cm and <italic>L</italic><sub>2</sub> = 45 cm, respectively. The shoulder was assumed to be located 30 cm behind and 25 cm to the right of the target. Forward kinematics (Eqs. 5 and 6) for the center target location directly yields IHP joint angles of θ<sub>1</sub> = 42.5° and θ<sub>2</sub> = −8.3° for the deviation from straight-ahead and the upper arm elevation, respectively. IHPs and target positions were taken from the experimental data.</p>
<p>There were five parameters in the model that were identified from the data, i.e. the variances of both proprioceptive (
<inline-formula>
<mml:math id="M47">
<mml:mrow>
<mml:msubsup>
<mml:mo>σ</mml:mo>
<mml:mtext>P</mml:mtext>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>
) joint angles (same for both) and horizontal visual (
<inline-formula>
<mml:math id="M48">
<mml:mrow>
<mml:msubsup>
<mml:mo>σ</mml:mo>
<mml:mtext>V</mml:mtext>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>
) IHP, the variance associated with the head roll angle (
<inline-formula>
<mml:math id="M49">
<mml:mrow>
<mml:msubsup>
<mml:mo>σ</mml:mo>
<mml:mtext>H</mml:mtext>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>
), a fixed reference frame transformation cost ( Σ
<sub>H</sub>
) and the head rotation gain for the reference frame transformation (β). The variance of target position (
<inline-formula>
<mml:math id="M50">
<mml:mrow>
<mml:msubsup>
<mml:mo>σ</mml:mo>
<mml:mrow>
<mml:mtext>tar</mml:mtext>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>
) was fixed. To account for the fact that visual distance estimation is less reliable than visual angular position estimation, we set the distance variability to
<inline-formula>
<mml:math id="M51">
<mml:mrow>
<mml:mn>2.5</mml:mn>
<mml:mo>×</mml:mo>
<mml:msubsup>
<mml:mo>σ</mml:mo>
<mml:mtext>v</mml:mtext>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>
(evaluated from McIntyre et al.,
<xref ref-type="bibr" rid="B43">1997</xref>
; Ren et al.,
<xref ref-type="bibr" rid="B48">2006</xref>
,
<xref ref-type="bibr" rid="B47">2007</xref>
).</p>
<p>The best-fit model parameters are reported in Table
<xref ref-type="table" rid="T1">1</xref>
. They were obtained through bootstrapping analysis (
<italic>N</italic>
 = 100). We used a minimal number of model parameters to describe our data. In particular, we did not fit independent variances for the two joint angles, as our data were not compelling enough to distinguish their individual effects.</p>
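<p>The bootstrapping step can be sketched as follows (Python, illustrative; fit_model stands in for the maximum likelihood fitting routine described above):</p>
<preformat>
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_fit(trials, fit_model, n_boot=100):
    """Resample trials with replacement, refit the model on each resample and report
    the mean and SD of each parameter across resamples (as in Table 1)."""
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(len(trials), size=len(trials))
        estimates.append(fit_model([trials[i] for i in idx]))
    estimates = np.asarray(estimates)
    return estimates.mean(axis=0), estimates.std(axis=0)
</preformat>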
<table-wrap id="T1" position="anchor">
<label>Table 1</label>
<caption>
<p>
<bold>Model parameter fits obtained through bootstrapping analysis (
<italic>N</italic>
 = 100): means ± SD (see
<xref ref-type="app" rid="A1">Appendix 1</xref>
for details)</bold>
.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" rowspan="1" colspan="1">Parameter</th>
<th align="left" rowspan="1" colspan="1">Meaning</th>
<th align="left" rowspan="1" colspan="1">Values</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<mml:math id="M52">
<mml:mrow>
<mml:msubsup>
<mml:mo>σ</mml:mo>
<mml:mtext>p</mml:mtext>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">Proprioceptive variance</td>
<td align="left" rowspan="1" colspan="1">6.44 · 10
<sup>−6</sup>
 ± 0.75 · 10
<sup>−6</sup>
 (rad
<sup>2</sup>
)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<mml:math id="M53">
<mml:mrow>
<mml:msubsup>
<mml:mo>σ</mml:mo>
<mml:mtext>V</mml:mtext>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">Visual variance</td>
<td align="left" rowspan="1" colspan="1">0.347 ± 0.019 (mm
<sup>2</sup>
)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<mml:math id="M54">
<mml:mrow>
<mml:msubsup>
<mml:mo>σ</mml:mo>
<mml:mtext>H</mml:mtext>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">Head-roll-dependent variance</td>
<td align="left" rowspan="1" colspan="1">2.46 · 10
<sup>−3</sup>
 ± 1.04 · 10
<sup>−3</sup>
 (rad
<sup>2</sup>
/deg)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Σ
<sub>H</sub>
</td>
<td align="left" rowspan="1" colspan="1">Constant transformation noise</td>
<td align="left" rowspan="1" colspan="1">0.297 ± 0.031 (mm
<sup>2</sup>
)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">β</td>
<td align="left" rowspan="1" colspan="1">Head roll compensation gain</td>
<td align="left" rowspan="1" colspan="1">1.041 ± 0.009 (.)</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
</app>
</app-group>
<ref-list>
<title>References</title>
<ref id="B1">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Andersen</surname>
<given-names>R. A.</given-names>
</name>
<name>
<surname>Mountcastle</surname>
<given-names>V. B.</given-names>
</name>
</person-group>
(
<year>1983</year>
).
<article-title>The influence of the angle of gaze upon the excitability of the light-sensitive neurons of the posterior parietal cortex</article-title>
.
<source>J. Neurosci.</source>
<volume>3</volume>
,
<fpage>532</fpage>
<lpage>548</lpage>
<pub-id pub-id-type="pmid">6827308</pub-id>
</mixed-citation>
</ref>
<ref id="B2">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Armstrong</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>McNair</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Williams</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Head and neck position sense</article-title>
.
<source>Sports Med.</source>
<volume>38</volume>
,
<fpage>101</fpage>
<lpage>117</lpage>
<pub-id pub-id-type="doi">10.2165/00007256-200838020-00002</pub-id>
<pub-id pub-id-type="pmid">18201114</pub-id>
</mixed-citation>
</ref>
<ref id="B3">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Atkins</surname>
<given-names>J. E.</given-names>
</name>
<name>
<surname>Fiser</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Jacobs</surname>
<given-names>R. A.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>Experience-dependent visual cue integration based on consistencies between visual and haptic precepts</article-title>
.
<source>Vision Res.</source>
<volume>41</volume>
,
<fpage>449</fpage>
<lpage>461</lpage>
<pub-id pub-id-type="doi">10.1016/S0042-6989(00)00254-6</pub-id>
<pub-id pub-id-type="pmid">11166048</pub-id>
</mixed-citation>
</ref>
<ref id="B4">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Blohm</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Crawford</surname>
<given-names>J. D.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Computations for geometrically accurate visually guided reaching in 3-D space</article-title>
.
<source>J. Vis.</source>
<volume>7</volume>
,
<fpage>1</fpage>
<lpage>22</lpage>
<pub-id pub-id-type="doi">10.1167/7.5.4</pub-id>
</mixed-citation>
</ref>
<ref id="B5">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Blohm</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Keith</surname>
<given-names>G. P.</given-names>
</name>
<name>
<surname>Crawford</surname>
<given-names>J. D.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Decoding the cortical transformations for visually guided reaching in 3D space</article-title>
.
<source>Cereb. Cortex</source>
<volume>19</volume>
,
<fpage>1372</fpage>
<lpage>1393</lpage>
<pub-id pub-id-type="doi">10.1093/cercor/bhn177</pub-id>
<pub-id pub-id-type="pmid">18842662</pub-id>
</mixed-citation>
</ref>
<ref id="B6">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bockisch</surname>
<given-names>C. J.</given-names>
</name>
<name>
<surname>Haslwanter</surname>
<given-names>T.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>3D eye position during static roll and pitch in humans</article-title>
.
<source>Vision Res.</source>
<volume>41</volume>
,
<fpage>2127</fpage>
<lpage>2137</lpage>
<pub-id pub-id-type="doi">10.1016/S0042-6989(01)00094-3</pub-id>
<pub-id pub-id-type="pmid">11403796</pub-id>
</mixed-citation>
</ref>
<ref id="B7">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Buneo</surname>
<given-names>C. A.</given-names>
</name>
<name>
<surname>Andersen</surname>
<given-names>R. A.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>The posterior parietal cortex: sensorimotor interface for the planning and online control of visually guided movements</article-title>
.
<source>Neuropsychologia</source>
<volume>44</volume>
,
<fpage>2594</fpage>
<lpage>2606</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2005.10.011</pub-id>
<pub-id pub-id-type="pmid">16300804</pub-id>
</mixed-citation>
</ref>
<ref id="B8">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Buneo</surname>
<given-names>C. A.</given-names>
</name>
<name>
<surname>Jarvis</surname>
<given-names>M. R.</given-names>
</name>
<name>
<surname>Batista</surname>
<given-names>A. P.</given-names>
</name>
<name>
<surname>Andersen</surname>
<given-names>R. A.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>Direct visuomotor transformations for reaching</article-title>
.
<source>Nature</source>
<volume>416</volume>
,
<fpage>632</fpage>
<lpage>636</lpage>
<pub-id pub-id-type="doi">10.1038/416632a</pub-id>
<pub-id pub-id-type="pmid">11948351</pub-id>
</mixed-citation>
</ref>
<ref id="B9">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Burke</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Hagbarth</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Lofstedt</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Wallin</surname>
<given-names>B. G.</given-names>
</name>
</person-group>
(
<year>1976</year>
).
<article-title>The responses of human muscle spindle endings to vibration during isometric contraction</article-title>
.
<source>J. Physiol.</source>
<volume>261</volume>
,
<fpage>695</fpage>
<lpage>711</lpage>
<pub-id pub-id-type="pmid">135841</pub-id>
</mixed-citation>
</ref>
<ref id="B10">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Burr</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Banks</surname>
<given-names>M. S.</given-names>
</name>
<name>
<surname>Morrone</surname>
<given-names>M. C.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Auditory dominance over vision in the perception of interval duration</article-title>
.
<source>Exp. Brain Res.</source>
<volume>198</volume>
,
<fpage>49</fpage>
<lpage>52</lpage>
<pub-id pub-id-type="doi">10.1007/s00221-009-1933-z</pub-id>
<pub-id pub-id-type="pmid">19597804</pub-id>
</mixed-citation>
</ref>
<ref id="B11">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Clark</surname>
<given-names>F. J.</given-names>
</name>
<name>
<surname>Burgess</surname>
<given-names>P. R.</given-names>
</name>
</person-group>
(
<year>1975</year>
).
<article-title>Slowly adapting receptors in cat knee joint: Can they signal joint angle?</article-title>
<source>J. Neurophysiol.</source>
<volume>38</volume>
,
<fpage>1448</fpage>
<lpage>1463</lpage>
<pub-id pub-id-type="pmid">1221082</pub-id>
</mixed-citation>
</ref>
<ref id="B12">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chang</surname>
<given-names>S. W. C.</given-names>
</name>
<name>
<surname>Papadimitriou</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Snyder</surname>
<given-names>L. H.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Using a compound gain field to compute a reach plan</article-title>
.
<source>Neuron</source>
<volume>64</volume>
,
<fpage>744</fpage>
<lpage>755</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuron.2009.11.005</pub-id>
<pub-id pub-id-type="pmid">20005829</pub-id>
</mixed-citation>
</ref>
<ref id="B13">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cohen</surname>
<given-names>Y. E.</given-names>
</name>
<name>
<surname>Andersen</surname>
<given-names>R. A.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>A common reference frame for movement plans in the posterior parietal cortex</article-title>
.
<source>Nat. Rev. Neurosci.</source>
<volume>3</volume>
,
<fpage>553</fpage>
<lpage>562</lpage>
<pub-id pub-id-type="doi">10.1038/nrn873</pub-id>
<pub-id pub-id-type="pmid">12094211</pub-id>
</mixed-citation>
</ref>
<ref id="B14">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cordo</surname>
<given-names>P. J.</given-names>
</name>
<name>
<surname>Flores-Vieira</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Verschueren</surname>
<given-names>S. M. P.</given-names>
</name>
<name>
<surname>Inglis</surname>
<given-names>J. T.</given-names>
</name>
<name>
<surname>Gurfinkel</surname>
<given-names>V.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>Positions sensitivity of human muscle spindles: Single afferent and population representations</article-title>
.
<source>J. Neurophysiol.</source>
<volume>87</volume>
,
<fpage>1186</fpage>
<lpage>1195</lpage>
<pub-id pub-id-type="pmid">11877492</pub-id>
</mixed-citation>
</ref>
<ref id="B15">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Collewijn</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Van der Steen</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Ferman</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Jansen</surname>
<given-names>T. C.</given-names>
</name>
</person-group>
(
<year>1985</year>
).
<article-title>Human ocular counterroll: Assessment of static and dynamic properties from electromagnetic scleral coil recordings</article-title>
.
<source>Exp. Brain Res.</source>
<volume>59</volume>
,
<fpage>185</fpage>
<lpage>196</lpage>
<pub-id pub-id-type="doi">10.1007/BF00237678</pub-id>
<pub-id pub-id-type="pmid">4018196</pub-id>
</mixed-citation>
</ref>
<ref id="B16">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Denève</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Latham</surname>
<given-names>P.E.</given-names>
</name>
<name>
<surname>Pouget</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>Efficient computation and cue integration with noisy population codes</article-title>
.
<source>Nat. Neurosci.</source>
<volume>4</volume>
,
<fpage>826</fpage>
<lpage>831</lpage>
<pub-id pub-id-type="doi">10.1038/90541</pub-id>
<pub-id pub-id-type="pmid">11477429</pub-id>
</mixed-citation>
</ref>
<ref id="B17">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dyde</surname>
<given-names>R. T.</given-names>
</name>
<name>
<surname>Jenkin</surname>
<given-names>M. R.</given-names>
</name>
<name>
<surname>Harris</surname>
<given-names>L. R.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>The subjective visual vertical and the perceptual upright</article-title>
.
<source>Exp. Brain Res.</source>
<volume>173</volume>
,
<fpage>612</fpage>
<lpage>622</lpage>
<pub-id pub-id-type="doi">10.1007/s00221-006-0405-y</pub-id>
<pub-id pub-id-type="pmid">16550392</pub-id>
</mixed-citation>
</ref>
<ref id="B18">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Edin</surname>
<given-names>B. B.</given-names>
</name>
<name>
<surname>Vallbo</surname>
<given-names>A. B.</given-names>
</name>
</person-group>
(
<year>1990</year>
).
<article-title>Muscle afferent responses to isometric contractions and relaxations in humans</article-title>
.
<source>J. Neurophysiol.</source>
<volume>63</volume>
,
<fpage>1307</fpage>
<lpage>1313</lpage>
<pub-id pub-id-type="pmid">2358878</pub-id>
</mixed-citation>
</ref>
<ref id="B19">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Engel</surname>
<given-names>K. C.</given-names>
</name>
<name>
<surname>Flanders</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Soechting</surname>
<given-names>J. F.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>Oculocentric frames of reference for limb movement</article-title>
.
<source>Arch. Ital. Biol.</source>
<volume>140</volume>
,
<fpage>211</fpage>
<lpage>219</lpage>
<pub-id pub-id-type="pmid">12173524</pub-id>
</mixed-citation>
</ref>
<ref id="B20">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ernst</surname>
<given-names>M. O.</given-names>
</name>
<name>
<surname>Banks</surname>
<given-names>M. S.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>Humans integrate visual and haptic information in a statistically optimal fashion</article-title>
.
<source>Nature</source>
<volume>415</volume>
,
<fpage>429</fpage>
<lpage>433</lpage>
<pub-id pub-id-type="doi">10.1038/415429a</pub-id>
<pub-id pub-id-type="pmid">11807554</pub-id>
</mixed-citation>
</ref>
<ref id="B21">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ernst</surname>
<given-names>M. O.</given-names>
</name>
<name>
<surname>Bulthoff</surname>
<given-names>H. H.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Merging the senses into a robust percept</article-title>
.
<source>Trends Cogn. Sci</source>
.
<volume>8</volume>
,
<fpage>162</fpage>
<lpage>169</lpage>
<pub-id pub-id-type="doi">10.1016/j.tics.2004.02.002</pub-id>
<pub-id pub-id-type="pmid">15050512</pub-id>
</mixed-citation>
</ref>
<ref id="B22">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Faisal</surname>
<given-names>A. A.</given-names>
</name>
<name>
<surname>Selen</surname>
<given-names>L. P.</given-names>
</name>
<name>
<surname>Wolpert</surname>
<given-names>D. M.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Noise in the nervous system</article-title>
.
<source>Nat. Rev. Neurosci.</source>
<volume>9</volume>
,
<fpage>292</fpage>
<lpage>303</lpage>
<pub-id pub-id-type="doi">10.1038/nrn2258</pub-id>
<pub-id pub-id-type="pmid">18319728</pub-id>
</mixed-citation>
</ref>
<ref id="B23">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fernandez</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Goldberg</surname>
<given-names>J. M.</given-names>
</name>
<name>
<surname>Abend</surname>
<given-names>W. K.</given-names>
</name>
</person-group>
(
<year>1972</year>
).
<article-title>Response to static tilts of peripheral neurons innervating otolith organs of the squirrel monkey</article-title>
.
<source>J. Neurophysiol.</source>
<volume>35</volume>
,
<fpage>978</fpage>
<lpage>987</lpage>
<pub-id pub-id-type="pmid">4631840</pub-id>
</mixed-citation>
</ref>
<ref id="B24">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gandevia</surname>
<given-names>S. C.</given-names>
</name>
<name>
<surname>McCloskey</surname>
<given-names>D. I.</given-names>
</name>
</person-group>
(
<year>1976</year>
).
<article-title>Joint sense, muscle sense and their combination as position sense, measured at the distal interphalangeal joint of the middle finger</article-title>
.
<source>J. Physiol.</source>
<volume>260</volume>
,
<fpage>387</fpage>
<lpage>407</lpage>
<pub-id pub-id-type="pmid">978533</pub-id>
</mixed-citation>
</ref>
<ref id="B25">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gellman</surname>
<given-names>R. S.</given-names>
</name>
<name>
<surname>Fletcher</surname>
<given-names>W. A.</given-names>
</name>
</person-group>
(
<year>1992</year>
).
<article-title>Eye position signals in human saccadic processing</article-title>
.
<source>Exp. Brain Res.</source>
<volume>89</volume>
,
<fpage>425</fpage>
<lpage>434</lpage>
<pub-id pub-id-type="doi">10.1007/BF00228258</pub-id>
<pub-id pub-id-type="pmid">1623984</pub-id>
</mixed-citation>
</ref>
<ref id="B26">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Ghahramani</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Wolpert</surname>
<given-names>D. M.</given-names>
</name>
<name>
<surname>Jordan</surname>
<given-names>M. I.</given-names>
</name>
</person-group>
(
<year>1997</year>
).
<article-title>“Computational models of sensorimotor integration,”</article-title>
in
<source>Self-organization, Computational Maps and Motor Control</source>
, eds
<person-group person-group-type="editor">
<name>
<surname>Morasso</surname>
<given-names>P. G.</given-names>
</name>
<name>
<surname>Sanguineti</surname>
<given-names>V.</given-names>
</name>
</person-group>
(
<publisher-loc>Amsterdam</publisher-loc>
:
<publisher-name>North Holland</publisher-name>
),
<fpage>117</fpage>
<lpage>147</lpage>
</mixed-citation>
</ref>
<ref id="B27">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Goodwin</surname>
<given-names>G. M.</given-names>
</name>
<name>
<surname>McCloskey</surname>
<given-names>D. I.</given-names>
</name>
<name>
<surname>Matthews</surname>
<given-names>P. B. C.</given-names>
</name>
</person-group>
(
<year>1972</year>
).
<article-title>The contribution of muscle afferents to kinesthesis shown by vibration induced illusions of movement and by the effect of paralysing joint afferents</article-title>
.
<source>Brain</source>
<volume>95</volume>
,
<fpage>705</fpage>
<lpage>748</lpage>
<pub-id pub-id-type="doi">10.1093/brain/95.4.705</pub-id>
<pub-id pub-id-type="pmid">4265060</pub-id>
</mixed-citation>
</ref>
<ref id="B28">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Green</surname>
<given-names>A. M.</given-names>
</name>
<name>
<surname>Angelaki</surname>
<given-names>D. E.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Multisensory integration: resolving sensory ambiguities to build novel representations</article-title>
.
<source>Curr. Opin. Neurobiol.</source>
<volume>20</volume>
,
<fpage>353</fpage>
<lpage>360</lpage>
<pub-id pub-id-type="doi">10.1016/j.conb.2010.04.009</pub-id>
<pub-id pub-id-type="pmid">20471245</pub-id>
</mixed-citation>
</ref>
<ref id="B29">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hagura</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Takei</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Hirose</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Aramaki</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Matsumura</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Sadato</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Naito</surname>
<given-names>E.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Activity in the posterior parietal cortex mediates visual dominance over kinesthesia</article-title>
.
<source>J. Neurosci.</source>
<volume>27</volume>
,
<fpage>7047</fpage>
<lpage>7053</lpage>
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.0970-07.2007</pub-id>
<pub-id pub-id-type="pmid">17596454</pub-id>
</mixed-citation>
</ref>
<ref id="B30">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Haslwanter</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Straumann</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Hess</surname>
<given-names>B. J.</given-names>
</name>
<name>
<surname>Henn</surname>
<given-names>V.</given-names>
</name>
</person-group>
(
<year>1992</year>
).
<article-title>Static roll and pitch in the monkey: shift and rotation of listing's plane</article-title>
.
<source>Vision Res.</source>
<volume>32</volume>
,
<fpage>1341</fpage>
<lpage>1348</lpage>
<pub-id pub-id-type="doi">10.1016/0042-6989(92)90226-9</pub-id>
<pub-id pub-id-type="pmid">1455706</pub-id>
</mixed-citation>
</ref>
<ref id="B31">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jordan</surname>
<given-names>M. I.</given-names>
</name>
<name>
<surname>Flash</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Arnon</surname>
<given-names>Y.</given-names>
</name>
</person-group>
(
<year>1994</year>
).
<article-title>A model of the learning of arm trajectories from spatial deviations</article-title>
.
<source>J. Cogn. Neurosci.</source>
<volume>6</volume>
,
<fpage>359</fpage>
<lpage>376</lpage>
<pub-id pub-id-type="doi">10.1162/jocn.1994.6.4.359</pub-id>
</mixed-citation>
</ref>
<ref id="B32">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jordan</surname>
<given-names>M. I.</given-names>
</name>
<name>
<surname>Rumelhart</surname>
<given-names>D. E.</given-names>
</name>
</person-group>
(
<year>1992</year>
).
<article-title>Forward models: Supervised learning with a distal teacher</article-title>
.
<source>Cogn. Sci.</source>
<volume>16</volume>
,
<fpage>307</fpage>
<lpage>354</lpage>
<pub-id pub-id-type="doi">10.1207/s15516709cog1603_1</pub-id>
</mixed-citation>
</ref>
<ref id="B33">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kersten</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Mamassian</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Yuille</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Object perception as Bayesian inference</article-title>
.
<source>Annu. Rev. Psychol.</source>
<volume>55</volume>
,
<fpage>271</fpage>
<lpage>304</lpage>
<pub-id pub-id-type="doi">10.1146/annurev.psych.55.090902.142005</pub-id>
<pub-id pub-id-type="pmid">14744217</pub-id>
</mixed-citation>
</ref>
<ref id="B34">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Knill</surname>
<given-names>D. C.</given-names>
</name>
<name>
<surname>Pouget</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>The Bayesian brain: the role of uncertainty in neural coding and computation</article-title>
.
<source>Trends Neurosci.</source>
<volume>27</volume>
,
<fpage>712</fpage>
<lpage>719</lpage>
<pub-id pub-id-type="doi">10.1016/j.tins.2004.10.007</pub-id>
<pub-id pub-id-type="pmid">15541511</pub-id>
</mixed-citation>
</ref>
<ref id="B35">
<mixed-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Körding</surname>
<given-names>K. P.</given-names>
</name>
<name>
<surname>Tenenbaum</surname>
<given-names>J. B.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>“Causal inference in sensorimotor integration</article-title>
.
<conf-name>NIPS 2006 conference proceedings,”</conf-name>
in
<source>Advances in Neural Information Processing Systems</source>
, Vol.
<volume>1</volume>
, eds.
<person-group person-group-type="editor">
<name>
<surname>Schölkopf</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Platt</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Hoffman</surname>
<given-names>T.</given-names>
</name>
</person-group>
(
<publisher-loc>Cambridge, MA</publisher-loc>
:
<publisher-name>MIT Press</publisher-name>
),
<fpage>641</fpage>
<lpage>647</lpage>
</mixed-citation>
</ref>
<ref id="B36">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lacquaniti</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Caminiti</surname>
<given-names>R.</given-names>
</name>
</person-group>
(
<year>1998</year>
).
<article-title>Visuo-motor transformations for arm reaching</article-title>
.
<source>Eur. J. Neurosci.</source>
<volume>10</volume>
,
<fpage>195</fpage>
<lpage>203</lpage>
<pub-id pub-id-type="doi">10.1046/j.1460-9568.1998.00040.x</pub-id>
<pub-id pub-id-type="pmid">9753127</pub-id>
</mixed-citation>
</ref>
<ref id="B37">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Landy</surname>
<given-names>M. S.</given-names>
</name>
<name>
<surname>Kojima</surname>
<given-names>H.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>Ideal cue combination for localizing texture-defined edges</article-title>
.
<source>J. Opt. Soc. Am. A.</source>
<volume>18</volume>
,
<fpage>2307</fpage>
<lpage>2320</lpage>
<pub-id pub-id-type="doi">10.1364/JOSAA.18.002307</pub-id>
</mixed-citation>
</ref>
<ref id="B38">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Landy</surname>
<given-names>M. S.</given-names>
</name>
<name>
<surname>Maloney</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Johnston</surname>
<given-names>E. B.</given-names>
</name>
<name>
<surname>Young</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>1995</year>
).
<article-title>Measurement and modeling of depth cue combination: in defense of weak fusion</article-title>
.
<source>Vision Res.</source>
<volume>35</volume>
,
<fpage>389</fpage>
<lpage>412</lpage>
<pub-id pub-id-type="doi">10.1016/0042-6989(94)00176-M</pub-id>
<pub-id pub-id-type="pmid">7892735</pub-id>
</mixed-citation>
</ref>
<ref id="B39">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lechner-Steinleitner</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>1978</year>
).
<article-title>Interaction of labyrinthine and somatoreceptor inputs as determinants of the subjective vertical</article-title>
.
<source>Psychol. Res.</source>
<volume>40</volume>
,
<fpage>65</fpage>
<lpage>76</lpage>
<pub-id pub-id-type="doi">10.1007/BF00308464</pub-id>
<pub-id pub-id-type="pmid">635075</pub-id>
</mixed-citation>
</ref>
<ref id="B40">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lee</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Terzopoulos</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Heads up!: Biomechanical modeling and neuromuscular control of the neck</article-title>
.
<source>ACM Trans. Graph.</source>
<volume>25</volume>
,
<fpage>1188</fpage>
<lpage>1198</lpage>
<pub-id pub-id-type="doi">10.1145/1141911.1142013</pub-id>
</mixed-citation>
</ref>
<ref id="B41">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Li</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Matin</surname>
<given-names>L.</given-names>
</name>
</person-group>
(
<year>1992</year>
).
<article-title>Visual direction is corrected by a hybrid extraretinal eye position signal</article-title>
.
<source>Ann. N.Y. Acad. Sci.</source>
<volume>656</volume>
,
<fpage>865</fpage>
<lpage>867</lpage>
<pub-id pub-id-type="doi">10.1111/j.1749-6632.1992.tb25277.x</pub-id>
<pub-id pub-id-type="pmid">1599203</pub-id>
</mixed-citation>
</ref>
<ref id="B42">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ma</surname>
<given-names>W. J.</given-names>
</name>
<name>
<surname>Beck</surname>
<given-names>J. M.</given-names>
</name>
<name>
<surname>Latham</surname>
<given-names>P. E.</given-names>
</name>
<name>
<surname>Pouget</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Bayesian inference with probabilistic population codes</article-title>
.
<source>Nat. Neurosci.</source>
<volume>9</volume>
,
<fpage>1432</fpage>
<lpage>1438</lpage>
<pub-id pub-id-type="doi">10.1038/nn1790</pub-id>
<pub-id pub-id-type="pmid">17057707</pub-id>
</mixed-citation>
</ref>
<ref id="B43">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>McIntyre</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Stratta</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Lacquaniti</surname>
<given-names>F.</given-names>
</name>
</person-group>
(
<year>1997</year>
).
<article-title>Viewer-centered frame of reference for pointing to memorized targets in three-dimensional space</article-title>
.
<source>J. Neurophysiol.</source>
<volume>78</volume>
,
<fpage>1601</fpage>
<lpage>1618</lpage>
<pub-id pub-id-type="pmid">9310446</pub-id>
</mixed-citation>
</ref>
<ref id="B44">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>McGuire</surname>
<given-names>L. M.</given-names>
</name>
<name>
<surname>Sabes</surname>
<given-names>P. N.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Sensory transformations and the use of multiple reference frames for reach planning</article-title>
.
<source>Nat. Neurosci.</source>
<volume>12</volume>
,
<fpage>1056</fpage>
<lpage>1061</lpage>
<pub-id pub-id-type="doi">10.1038/nn.2357</pub-id>
<pub-id pub-id-type="pmid">19597495</pub-id>
</mixed-citation>
</ref>
<ref id="B45">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mon-Williams</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Wann</surname>
<given-names>J. P.</given-names>
</name>
<name>
<surname>Jenkinson</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Rushton</surname>
<given-names>K.</given-names>
</name>
</person-group>
(
<year>1997</year>
).
<article-title>Synaesthesia in the normal limb</article-title>
.
<source>Proc. Biol. Sci.</source>
<volume>264</volume>
,
<fpage>1007</fpage>
<lpage>1010</lpage>
<pub-id pub-id-type="doi">10.1098/rspb.1997.0139</pub-id>
<pub-id pub-id-type="pmid">9263468</pub-id>
</mixed-citation>
</ref>
<ref id="B46">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nadler</surname>
<given-names>J. W.</given-names>
</name>
<name>
<surname>Angelaki</surname>
<given-names>D. E.</given-names>
</name>
<name>
<surname>DeAngelis</surname>
<given-names>G. C.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>A neural representation of depth from motion parallax in macaque visual cortex</article-title>
.
<source>Nature</source>
<volume>452</volume>
,
<fpage>642</fpage>
<lpage>645</lpage>
<pub-id pub-id-type="doi">10.1038/nature06814</pub-id>
<pub-id pub-id-type="pmid">18344979</pub-id>
</mixed-citation>
</ref>
<ref id="B47">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ren</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Blohm</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Crawford</surname>
<given-names>J. D.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Comparing limb proprioception and oculomotor signals during hand-guided saccades</article-title>
.
<source>Exp. Brain Res.</source>
<volume>182</volume>
,
<fpage>189</fpage>
<lpage>198</lpage>
<pub-id pub-id-type="doi">10.1007/s00221-007-0981-5</pub-id>
<pub-id pub-id-type="pmid">17551720</pub-id>
</mixed-citation>
</ref>
<ref id="B48">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ren</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Khan</surname>
<given-names>A. Z.</given-names>
</name>
<name>
<surname>Blohm</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Henriques</surname>
<given-names>D. Y. P.</given-names>
</name>
<name>
<surname>Sergio</surname>
<given-names>L. E.</given-names>
</name>
<name>
<surname>Crawford</surname>
<given-names>J. D.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Proprioceptive guidance of saccades in eye-hand coordination</article-title>
.
<source>J. Neurophysiol.</source>
<volume>96</volume>
,
<fpage>1464</fpage>
<lpage>1477</lpage>
<pub-id pub-id-type="doi">10.1152/jn.01012.2005</pub-id>
</mixed-citation>
</ref>
<ref id="B49">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rossetti</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Desmurget</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Prablanc</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>1995</year>
).
<article-title>Vectorial coding of movement: vision, proprioception, or both?</article-title>
<source>J. Neurophysiol.</source>
<volume>74</volume>
,
<fpage>457</fpage>
<lpage>463</lpage>
<pub-id pub-id-type="pmid">7472347</pub-id>
</mixed-citation>
</ref>
<ref id="B50">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sadeghi</surname>
<given-names>S. G.</given-names>
</name>
<name>
<surname>Chacron</surname>
<given-names>M. J.</given-names>
</name>
<name>
<surname>Taylor</surname>
<given-names>M. C.</given-names>
</name>
<name>
<surname>Cullen</surname>
<given-names>K. E.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Neural variability, detection thresholds, and information transmission in the vestibular system</article-title>
.
<source>J. Neurosci.</source>
<volume>27</volume>
,
<fpage>771</fpage>
<lpage>781</lpage>
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.4690-06.2007</pub-id>
<pub-id pub-id-type="pmid">17251416</pub-id>
</mixed-citation>
</ref>
<ref id="B51">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Scott</surname>
<given-names>S. H.</given-names>
</name>
<name>
<surname>Loeb</surname>
<given-names>G. E.</given-names>
</name>
</person-group>
(
<year>1994</year>
).
<article-title>The computation of position-sense from spindles in mono- and multi-articular muscles</article-title>
.
<source>J. Neurosci.</source>
<volume>14</volume>
,
<fpage>7529</fpage>
<lpage>7540</lpage>
<pub-id pub-id-type="pmid">7996193</pub-id>
</mixed-citation>
</ref>
<ref id="B52">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sober</surname>
<given-names>S. J.</given-names>
</name>
<name>
<surname>Sabes</surname>
<given-names>P. N.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Multisensory integration during motor planning</article-title>
.
<source>J. Neurosci.</source>
<volume>23</volume>
,
<fpage>6982</fpage>
<lpage>6992</lpage>
<pub-id pub-id-type="pmid">12904459</pub-id>
</mixed-citation>
</ref>
<ref id="B53">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sober</surname>
<given-names>S. J.</given-names>
</name>
<name>
<surname>Sabes</surname>
<given-names>P. N.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Flexible strategies for sensory integration during motor planning</article-title>
.
<source>Nat. Neurosci.</source>
<volume>8</volume>
,
<fpage>490</fpage>
<lpage>497</lpage>
<pub-id pub-id-type="pmid">15793578</pub-id>
</mixed-citation>
</ref>
<ref id="B54">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Stein</surname>
<given-names>B. E.</given-names>
</name>
<name>
<surname>Meredith</surname>
<given-names>M. A.</given-names>
</name>
</person-group>
(
<year>1993</year>
).
<source>The Merging of the Senses.</source>
<publisher-loc>Cambridge, MA</publisher-loc>
:
<publisher-name>MIT Press</publisher-name>
</mixed-citation>
</ref>
<ref id="B55">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Stein</surname>
<given-names>B. E.</given-names>
</name>
<name>
<surname>Stanford</surname>
<given-names>T. R.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Multisensory integration: current issues from the perspective of the single neuron</article-title>
.
<source>Nat. Rev. Neurosci.</source>
<volume>9</volume>
,
<fpage>255</fpage>
<lpage>266</lpage>
<pub-id pub-id-type="doi">10.1038/nrn2331</pub-id>
<pub-id pub-id-type="pmid">18354398</pub-id>
</mixed-citation>
</ref>
<ref id="B56">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tarnutzer</surname>
<given-names>A. A.</given-names>
</name>
<name>
<surname>Bockisch</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Straumann</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Olasagasti</surname>
<given-names>I.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Gravity dependence of subjective visual vertical variability</article-title>
.
<source>J. Neurophysiol.</source>
<volume>102</volume>
,
<fpage>1657</fpage>
<lpage>1671</lpage>
<pub-id pub-id-type="doi">10.1152/jn.00007.2008</pub-id>
<pub-id pub-id-type="pmid">19571203</pub-id>
</mixed-citation>
</ref>
<ref id="B57">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Van Beers</surname>
<given-names>R. J.</given-names>
</name>
<name>
<surname>Sittig</surname>
<given-names>A. C.</given-names>
</name>
<name>
<surname>Denier Van Der Gon</surname>
<given-names>J. J.</given-names>
</name>
</person-group>
(
<year>1999</year>
).
<article-title>Integration of proprioception and visual position-information: an experimentally supported model</article-title>
.
<source>J. Neurophysiol.</source>
<volume>81</volume>
,
<fpage>1355</fpage>
<lpage>1364</lpage>
<pub-id pub-id-type="pmid">10085361</pub-id>
</mixed-citation>
</ref>
<ref id="B58">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Van Beers</surname>
<given-names>R. J.</given-names>
</name>
<name>
<surname>Wolpert</surname>
<given-names>D. M.</given-names>
</name>
<name>
<surname>Haggard</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>When feeling is more important than seeing in sensorimotor adaptation</article-title>
.
<source>Curr. Biol.</source>
<volume>12</volume>
,
<fpage>834</fpage>
<lpage>837</lpage>
<pub-id pub-id-type="doi">10.1016/S0960-9822(02)00836-9</pub-id>
</mixed-citation>
</ref>
<ref id="B59">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Van Beuzekom</surname>
<given-names>A. D.</given-names>
</name>
<name>
<surname>Van Gisbergen</surname>
<given-names>J. A. M.</given-names>
</name>
</person-group>
(
<year>2000</year>
).
<article-title>Properties of the internal representation of gravity inferred from spatial-direction and body-tilt estimates</article-title>
.
<source>J. Neurophysiol.</source>
<volume>84</volume>
,
<fpage>11</fpage>
<lpage>27</lpage>
<pub-id pub-id-type="pmid">10899179</pub-id>
</mixed-citation>
</ref>
<ref id="B60">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wade</surname>
<given-names>S. W.</given-names>
</name>
<name>
<surname>Curthoys</surname>
<given-names>I. S.</given-names>
</name>
</person-group>
(
<year>1997</year>
).
<article-title>The effect of ocular torsional position on perception of the roll-tilt of visual stimuli</article-title>
.
<source>Vision Res.</source>
<volume>37</volume>
,
<fpage>1071</fpage>
<lpage>1078</lpage>
<pub-id pub-id-type="doi">10.1016/S0042-6989(96)00252-0</pub-id>
<pub-id pub-id-type="pmid">9196725</pub-id>
</mixed-citation>
</ref>
</ref-list>
</back>
</pmc>
<affiliations>
<list>
<country>
<li>Canada</li>
</country>
</list>
<tree>
<country name="Canada">
<noRegion>
<name sortKey="Burns, Jessica Katherine" sort="Burns, Jessica Katherine" uniqKey="Burns J" first="Jessica Katherine" last="Burns">Jessica Katherine Burns</name>
</noRegion>
<name sortKey="Blohm, Gunnar" sort="Blohm, Gunnar" uniqKey="Blohm G" first="Gunnar" last="Blohm">Gunnar Blohm</name>
<name sortKey="Blohm, Gunnar" sort="Blohm, Gunnar" uniqKey="Blohm G" first="Gunnar" last="Blohm">Gunnar Blohm</name>
<name sortKey="Burns, Jessica Katherine" sort="Burns, Jessica Katherine" uniqKey="Burns J" first="Jessica Katherine" last="Burns">Jessica Katherine Burns</name>
</country>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Ncbi/Merge
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 001817 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd -nk 001817 | SxmlIndent | more

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Ncbi
   |étape=   Merge
   |type=    RBID
   |clé=     PMC:3002464
   |texte=   Multi-Sensory Weights Depend on Contextual Noise in Reference Frame Transformations
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/RBID.i   -Sk "pubmed:21165177" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 
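
The same pipeline can be parameterized by PubMed identifier. The helper below is only a sketch: it reuses exactly the commands and flags shown above, assumes $EXPLOR_AREA is set, and the function name gen_wiki_from_pmid is purely illustrative.

# Sketch: generate wiki pages for a given PubMed id (21165177 is this record's id).
gen_wiki_from_pmid() {
    pmid="$1"
    HfdIndexSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/RBID.i -Sk "pubmed:$pmid" \
        | HfdSelect -Kh $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd \
        | NlmPubMed2Wicri -a HapticV1
}

gen_wiki_from_pmid 21165177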

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024