Exploration server on haptic devices

Warning: this site is under development!
Warning: this site is generated by computational means from raw corpora.
The information it contains has therefore not been validated.

Inferring Visuomotor Priors for Sensorimotor Learning

Internal identifier: 001B11 (Pmc/Checkpoint); previous: 001B10; next: 001B12

Inferring Visuomotor Priors for Sensorimotor Learning

Authors: Edward J. A. Turnham; Daniel A. Braun; Daniel M. Wolpert

Source:

RBID: PMC:3068921

Abstract

Sensorimotor learning has been shown to depend on both prior expectations and sensory evidence in a way that is consistent with Bayesian integration. Thus, prior beliefs play a key role during the learning process, especially when only ambiguous sensory information is available. Here we develop a novel technique to estimate the covariance structure of the prior over visuomotor transformations – the mapping between actual and visual location of the hand – during a learning task. Subjects performed reaching movements under multiple visuomotor transformations in which they received visual feedback of their hand position only at the end of the movement. After experiencing a particular transformation for one reach, subjects have insufficient information to determine the exact transformation, and so their second reach reflects a combination of their prior over visuomotor transformations and the sensory evidence from the first reach. We developed a Bayesian observer model in order to infer the covariance structure of the subjects' prior, which was found to give high probability to parameter settings consistent with visuomotor rotations. Therefore, although the set of visuomotor transformations experienced had little structure, the subjects had a strong tendency to interpret ambiguous sensory evidence as arising from rotation-like transformations. We then exposed the same subjects to a highly structured set of visuomotor transformations, designed to be very different from the set of visuomotor rotations. During this exposure the prior was found to have changed significantly to have a covariance structure that no longer favored rotation-like transformations. In summary, we have developed a technique which can estimate the full covariance structure of a prior in a sensorimotor task and have shown that the prior over visuomotor transformations favors a rotation-like structure. Moreover, through experience of a novel task structure, participants can appropriately alter the covariance structure of their prior.


URL:
DOI: 10.1371/journal.pcbi.1001112
PubMed: 21483475
PubMed Central: 3068921


Affiliations:


Links toward previous steps (curation, corpus...)


Links to Exploration step

PMC:3068921

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Inferring Visuomotor Priors for Sensorimotor Learning</title>
<author>
<name sortKey="Turnham, Edward J A" sort="Turnham, Edward J A" uniqKey="Turnham E" first="Edward J. A." last="Turnham">Edward J. A. Turnham</name>
<affiliation>
<nlm:aff id="aff1"></nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Braun, Daniel A" sort="Braun, Daniel A" uniqKey="Braun D" first="Daniel A." last="Braun">Daniel A. Braun</name>
<affiliation>
<nlm:aff id="aff1"></nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Wolpert, Daniel M" sort="Wolpert, Daniel M" uniqKey="Wolpert D" first="Daniel M." last="Wolpert">Daniel M. Wolpert</name>
<affiliation>
<nlm:aff id="aff1"></nlm:aff>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">21483475</idno>
<idno type="pmc">3068921</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3068921</idno>
<idno type="RBID">PMC:3068921</idno>
<idno type="doi">10.1371/journal.pcbi.1001112</idno>
<date when="2011">2011</date>
<idno type="wicri:Area/Pmc/Corpus">002177</idno>
<idno type="wicri:Area/Pmc/Curation">002177</idno>
<idno type="wicri:Area/Pmc/Checkpoint">001B11</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Inferring Visuomotor Priors for Sensorimotor Learning</title>
<author>
<name sortKey="Turnham, Edward J A" sort="Turnham, Edward J A" uniqKey="Turnham E" first="Edward J. A." last="Turnham">Edward J. A. Turnham</name>
<affiliation>
<nlm:aff id="aff1"></nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Braun, Daniel A" sort="Braun, Daniel A" uniqKey="Braun D" first="Daniel A." last="Braun">Daniel A. Braun</name>
<affiliation>
<nlm:aff id="aff1"></nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Wolpert, Daniel M" sort="Wolpert, Daniel M" uniqKey="Wolpert D" first="Daniel M." last="Wolpert">Daniel M. Wolpert</name>
<affiliation>
<nlm:aff id="aff1"></nlm:aff>
</affiliation>
</author>
</analytic>
<series>
<title level="j">PLoS Computational Biology</title>
<idno type="ISSN">1553-734X</idno>
<idno type="eISSN">1553-7358</idno>
<imprint>
<date when="2011">2011</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>Sensorimotor learning has been shown to depend on both prior expectations and sensory evidence in a way that is consistent with Bayesian integration. Thus, prior beliefs play a key role during the learning process, especially when only ambiguous sensory information is available. Here we develop a novel technique to estimate the covariance structure of the prior over visuomotor transformations – the mapping between actual and visual location of the hand – during a learning task. Subjects performed reaching movements under multiple visuomotor transformations in which they received visual feedback of their hand position only at the end of the movement. After experiencing a particular transformation for one reach, subjects have insufficient information to determine the exact transformation, and so their second reach reflects a combination of their prior over visuomotor transformations and the sensory evidence from the first reach. We developed a Bayesian observer model in order to infer the covariance structure of the subjects' prior, which was found to give high probability to parameter settings consistent with visuomotor rotations. Therefore, although the set of visuomotor transformations experienced had little structure, the subjects had a strong tendency to interpret ambiguous sensory evidence as arising from rotation-like transformations. We then exposed the same subjects to a highly structured set of visuomotor transformations, designed to be very different from the set of visuomotor rotations. During this exposure the prior was found to have changed significantly to have a covariance structure that no longer favored rotation-like transformations. In summary, we have developed a technique which can estimate the full covariance structure of a prior in a sensorimotor task and have shown that the prior over visuomotor transformations favors a rotation-like structure. Moreover, through experience of a novel task structure, participants can appropriately alter the covariance structure of their prior.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Faisal, Aa" uniqKey="Faisal A">AA Faisal</name>
</author>
<author>
<name sortKey="Selen, Lpj" uniqKey="Selen L">LPJ Selen</name>
</author>
<author>
<name sortKey="Wolpert, Dm" uniqKey="Wolpert D">DM Wolpert</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Glimcher, Pw" uniqKey="Glimcher P">PW Glimcher</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Helmholtz, H" uniqKey="Helmholtz H">H Helmholtz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Doya, K" uniqKey="Doya K">K Doya</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Knill, Dc" uniqKey="Knill D">DC Knill</name>
</author>
<author>
<name sortKey="Pouget, A" uniqKey="Pouget A">A Pouget</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Weiss, Y" uniqKey="Weiss Y">Y Weiss</name>
</author>
<author>
<name sortKey="Simoncelli, Ep" uniqKey="Simoncelli E">EP Simoncelli</name>
</author>
<author>
<name sortKey="Adelson, Eh" uniqKey="Adelson E">EH Adelson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stocker, Aa" uniqKey="Stocker A">AA Stocker</name>
</author>
<author>
<name sortKey="Simoncelli, Ep" uniqKey="Simoncelli E">EP Simoncelli</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Adams, Wj" uniqKey="Adams W">WJ Adams</name>
</author>
<author>
<name sortKey="Graf, Ew" uniqKey="Graf E">EW Graf</name>
</author>
<author>
<name sortKey="Ernst, Mo" uniqKey="Ernst M">MO Ernst</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Langer, Ms" uniqKey="Langer M">MS Langer</name>
</author>
<author>
<name sortKey="Bulthoff, Hh" uniqKey="Bulthoff H">HH Bülthoff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Howe, Cq" uniqKey="Howe C">CQ Howe</name>
</author>
<author>
<name sortKey="Purves, D" uniqKey="Purves D">D Purves</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Flanagan, Jr" uniqKey="Flanagan J">JR Flanagan</name>
</author>
<author>
<name sortKey="Bowman, Mc" uniqKey="Bowman M">MC Bowman</name>
</author>
<author>
<name sortKey="Johansson, Rs" uniqKey="Johansson R">RS Johansson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brayanov, Jb" uniqKey="Brayanov J">JB Brayanov</name>
</author>
<author>
<name sortKey="Smith, Ma" uniqKey="Smith M">MA Smith</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Flanagan, Jr" uniqKey="Flanagan J">JR Flanagan</name>
</author>
<author>
<name sortKey="Beltzner, Ma" uniqKey="Beltzner M">MA Beltzner</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kemp, C" uniqKey="Kemp C">C Kemp</name>
</author>
<author>
<name sortKey="Tenenbaum, Jb" uniqKey="Tenenbaum J">JB Tenenbaum</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tenenbaum, Jb" uniqKey="Tenenbaum J">JB Tenenbaum</name>
</author>
<author>
<name sortKey="Griffiths, Tl" uniqKey="Griffiths T">TL Griffiths</name>
</author>
<author>
<name sortKey="Kemp, C" uniqKey="Kemp C">C Kemp</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Acuna, D" uniqKey="Acuna D">D Acuna</name>
</author>
<author>
<name sortKey="Schrater, Pr" uniqKey="Schrater P">PR Schrater</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Griffiths, Tl" uniqKey="Griffiths T">TL Griffiths</name>
</author>
<author>
<name sortKey="Kalish, Ml" uniqKey="Kalish M">ML Kalish</name>
</author>
<author>
<name sortKey="Lewandowsky, S" uniqKey="Lewandowsky S">S Lewandowsky</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sanborn, A" uniqKey="Sanborn A">A Sanborn</name>
</author>
<author>
<name sortKey="Griffiths, T" uniqKey="Griffiths T">T Griffiths</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kording, Kp" uniqKey="Kording K">KP Körding</name>
</author>
<author>
<name sortKey="Wolpert, Dm" uniqKey="Wolpert D">DM Wolpert</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kording, Kp" uniqKey="Kording K">KP Körding</name>
</author>
<author>
<name sortKey="Wolpert, Dm" uniqKey="Wolpert D">DM Wolpert</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Beers, Rj" uniqKey="Van Beers R">RJ van Beers</name>
</author>
<author>
<name sortKey="Sittig, Ac" uniqKey="Sittig A">AC Sittig</name>
</author>
<author>
<name sortKey="Van Der Gon, Jjd" uniqKey="Van Der Gon J">JJD van der Gon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ernst, Mo" uniqKey="Ernst M">MO Ernst</name>
</author>
<author>
<name sortKey="Banks, Ms" uniqKey="Banks M">MS Banks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Girshick, Ar" uniqKey="Girshick A">AR Girshick</name>
</author>
<author>
<name sortKey="Banks, Ms" uniqKey="Banks M">MS Banks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Todorov, E" uniqKey="Todorov E">E Todorov</name>
</author>
<author>
<name sortKey="Jordan, Mi" uniqKey="Jordan M">MI Jordan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Todorov, E" uniqKey="Todorov E">E Todorov</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Scott, Sh" uniqKey="Scott S">SH Scott</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Diedrichsen, J" uniqKey="Diedrichsen J">J Diedrichsen</name>
</author>
<author>
<name sortKey="Shadmehr, R" uniqKey="Shadmehr R">R Shadmehr</name>
</author>
<author>
<name sortKey="Ivry, Rb" uniqKey="Ivry R">RB Ivry</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Diedrichsen, J" uniqKey="Diedrichsen J">J Diedrichsen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Braun, Da" uniqKey="Braun D">DA Braun</name>
</author>
<author>
<name sortKey="Ortega, Pa" uniqKey="Ortega P">PA Ortega</name>
</author>
<author>
<name sortKey="Wolpert, Dm" uniqKey="Wolpert D">DM Wolpert</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Izawa, J" uniqKey="Izawa J">J Izawa</name>
</author>
<author>
<name sortKey="Rane, T" uniqKey="Rane T">T Rane</name>
</author>
<author>
<name sortKey="Donchin, O" uniqKey="Donchin O">O Donchin</name>
</author>
<author>
<name sortKey="Shadmehr, R" uniqKey="Shadmehr R">R Shadmehr</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chen Harris, H" uniqKey="Chen Harris H">H Chen-Harris</name>
</author>
<author>
<name sortKey="Joiner, Wm" uniqKey="Joiner W">WM Joiner</name>
</author>
<author>
<name sortKey="Ethier, V" uniqKey="Ethier V">V Ethier</name>
</author>
<author>
<name sortKey="Zee, Ds" uniqKey="Zee D">DS Zee</name>
</author>
<author>
<name sortKey="Shadmehr, R" uniqKey="Shadmehr R">R Shadmehr</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Braun, Da" uniqKey="Braun D">DA Braun</name>
</author>
<author>
<name sortKey="Aertsen, A" uniqKey="Aertsen A">A Aertsen</name>
</author>
<author>
<name sortKey="Wolpert, Dm" uniqKey="Wolpert D">DM Wolpert</name>
</author>
<author>
<name sortKey="Mehring, C" uniqKey="Mehring C">C Mehring</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nagengast, Aj" uniqKey="Nagengast A">AJ Nagengast</name>
</author>
<author>
<name sortKey="Braun, Da" uniqKey="Braun D">DA Braun</name>
</author>
<author>
<name sortKey="Wolpert, Dm" uniqKey="Wolpert D">DM Wolpert</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zemel, Rs" uniqKey="Zemel R">RS Zemel</name>
</author>
<author>
<name sortKey="Dayan, P" uniqKey="Dayan P">P Dayan</name>
</author>
<author>
<name sortKey="Pouget, A" uniqKey="Pouget A">A Pouget</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Beck, Jm" uniqKey="Beck J">JM Beck</name>
</author>
<author>
<name sortKey="Ma, Wj" uniqKey="Ma W">WJ Ma</name>
</author>
<author>
<name sortKey="Kiani, R" uniqKey="Kiani R">R Kiani</name>
</author>
<author>
<name sortKey="Hanks, T" uniqKey="Hanks T">T Hanks</name>
</author>
<author>
<name sortKey="Churchland, Ak" uniqKey="Churchland A">AK Churchland</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ma, Wj" uniqKey="Ma W">WJ Ma</name>
</author>
<author>
<name sortKey="Beck, Jm" uniqKey="Beck J">JM Beck</name>
</author>
<author>
<name sortKey="Latham, Pe" uniqKey="Latham P">PE Latham</name>
</author>
<author>
<name sortKey="Pouget, A" uniqKey="Pouget A">A Pouget</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bedford, Fl" uniqKey="Bedford F">FL Bedford</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bedford, Fl" uniqKey="Bedford F">FL Bedford</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Baily, Js" uniqKey="Baily J">JS Baily</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Welch, Rb" uniqKey="Welch R">RB Welch</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vetter, P" uniqKey="Vetter P">P Vetter</name>
</author>
<author>
<name sortKey="Goodbody, Sj" uniqKey="Goodbody S">SJ Goodbody</name>
</author>
<author>
<name sortKey="Wolpert, Dm" uniqKey="Wolpert D">DM Wolpert</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wigmore, V" uniqKey="Wigmore V">V Wigmore</name>
</author>
<author>
<name sortKey="Tong, C" uniqKey="Tong C">C Tong</name>
</author>
<author>
<name sortKey="Flanagan, Jr" uniqKey="Flanagan J">JR Flanagan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Miall, Rc" uniqKey="Miall R">RC Miall</name>
</author>
<author>
<name sortKey="Jenkinson, N" uniqKey="Jenkinson N">N Jenkinson</name>
</author>
<author>
<name sortKey="Kulkarni, K" uniqKey="Kulkarni K">K Kulkarni</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Krakauer, Jw" uniqKey="Krakauer J">JW Krakauer</name>
</author>
<author>
<name sortKey="Ghez, C" uniqKey="Ghez C">C Ghez</name>
</author>
<author>
<name sortKey="Ghilardi, Mf" uniqKey="Ghilardi M">MF Ghilardi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brashers Krug, T" uniqKey="Brashers Krug T">T Brashers-Krug</name>
</author>
<author>
<name sortKey="Shadmehr, R" uniqKey="Shadmehr R">R Shadmehr</name>
</author>
<author>
<name sortKey="Bizzi, E" uniqKey="Bizzi E">E Bizzi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Shadmehr, R" uniqKey="Shadmehr R">R Shadmehr</name>
</author>
<author>
<name sortKey="Brashers Krug, T" uniqKey="Brashers Krug T">T Brashers-Krug</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Weiner, Mj" uniqKey="Weiner M">MJ Weiner</name>
</author>
<author>
<name sortKey="Hallett, M" uniqKey="Hallett M">M Hallett</name>
</author>
<author>
<name sortKey="Funkenstein, Hh" uniqKey="Funkenstein H">HH Funkenstein</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Redding, Gm" uniqKey="Redding G">GM Redding</name>
</author>
<author>
<name sortKey="Wallace, B" uniqKey="Wallace B">B Wallace</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Redding, Gm" uniqKey="Redding G">GM Redding</name>
</author>
<author>
<name sortKey="Wallace, B" uniqKey="Wallace B">B Wallace</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Redding, Gm" uniqKey="Redding G">GM Redding</name>
</author>
<author>
<name sortKey="Wallace, B" uniqKey="Wallace B">B Wallace</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ghahramani, Z" uniqKey="Ghahramani Z">Z Ghahramani</name>
</author>
<author>
<name sortKey="Wolpert, Dm" uniqKey="Wolpert D">DM Wolpert</name>
</author>
<author>
<name sortKey="Jordan, Mi" uniqKey="Jordan M">MI Jordan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Krakauer, Jw" uniqKey="Krakauer J">JW Krakauer</name>
</author>
<author>
<name sortKey="Pine, Zm" uniqKey="Pine Z">ZM Pine</name>
</author>
<author>
<name sortKey="Ghilardi, Mf" uniqKey="Ghilardi M">MF Ghilardi</name>
</author>
<author>
<name sortKey="Ghez, C" uniqKey="Ghez C">C Ghez</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Braun, Da" uniqKey="Braun D">DA Braun</name>
</author>
<author>
<name sortKey="Aertsen, A" uniqKey="Aertsen A">A Aertsen</name>
</author>
<author>
<name sortKey="Wolpert, Dm" uniqKey="Wolpert D">DM Wolpert</name>
</author>
<author>
<name sortKey="Mehring, C" uniqKey="Mehring C">C Mehring</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Carpenter, Rh" uniqKey="Carpenter R">RH Carpenter</name>
</author>
<author>
<name sortKey="Williams, Ml" uniqKey="Williams M">ML Williams</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Flanagan, Jr" uniqKey="Flanagan J">JR Flanagan</name>
</author>
<author>
<name sortKey="Bittner, Jp" uniqKey="Bittner J">JP Bittner</name>
</author>
<author>
<name sortKey="Johansson, Rs" uniqKey="Johansson R">RS Johansson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Simani, Mc" uniqKey="Simani M">MC Simani</name>
</author>
<author>
<name sortKey="Mcguire, Lmm" uniqKey="Mcguire L">LMM McGuire</name>
</author>
<author>
<name sortKey="Sabes, Pn" uniqKey="Sabes P">PN Sabes</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Howard, Is" uniqKey="Howard I">IS Howard</name>
</author>
<author>
<name sortKey="Ingram, Jn" uniqKey="Ingram J">JN Ingram</name>
</author>
<author>
<name sortKey="Wolpert, Dm" uniqKey="Wolpert D">DM Wolpert</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bishop, Cm" uniqKey="Bishop C">CM Bishop</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">PLoS Comput Biol</journal-id>
<journal-id journal-id-type="publisher-id">plos</journal-id>
<journal-id journal-id-type="pmc">ploscomp</journal-id>
<journal-title-group>
<journal-title>PLoS Computational Biology</journal-title>
</journal-title-group>
<issn pub-type="ppub">1553-734X</issn>
<issn pub-type="epub">1553-7358</issn>
<publisher>
<publisher-name>Public Library of Science</publisher-name>
<publisher-loc>San Francisco, USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">21483475</article-id>
<article-id pub-id-type="pmc">3068921</article-id>
<article-id pub-id-type="publisher-id">10-PLCB-RA-2593R3</article-id>
<article-id pub-id-type="doi">10.1371/journal.pcbi.1001112</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Research Article</subject>
</subj-group>
<subj-group subj-group-type="Discipline">
<subject>Computational Biology/Systems Biology</subject>
<subject>Neuroscience/Behavioral Neuroscience</subject>
<subject>Neuroscience/Cognitive Neuroscience</subject>
<subject>Neuroscience/Motor Systems</subject>
<subject>Neuroscience/Experimental Psychology</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Inferring Visuomotor Priors for Sensorimotor Learning</article-title>
<alt-title alt-title-type="running-head">Priors over Visuomotor Transformations</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Turnham</surname>
<given-names>Edward J. A.</given-names>
</name>
<xref ref-type="aff" rid="aff1"></xref>
<xref ref-type="corresp" rid="cor1">
<sup>*</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Braun</surname>
<given-names>Daniel A.</given-names>
</name>
<xref ref-type="aff" rid="aff1"></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Wolpert</surname>
<given-names>Daniel M.</given-names>
</name>
<xref ref-type="aff" rid="aff1"></xref>
</contrib>
</contrib-group>
<aff id="aff1">
<addr-line>Computational and Biological Learning Laboratory, Department of Engineering, University of Cambridge, Cambridge, United Kingdom</addr-line>
</aff>
<contrib-group>
<contrib contrib-type="editor">
<name>
<surname>Körding</surname>
<given-names>Konrad P.</given-names>
</name>
<role>Editor</role>
<xref ref-type="aff" rid="edit1"></xref>
</contrib>
</contrib-group>
<aff id="edit1">Northwestern University, United States of America</aff>
<author-notes>
<corresp id="cor1">* E-mail:
<email>ejat3@cam.ac.uk</email>
</corresp>
<fn fn-type="con">
<p>Conceived and designed the experiments: EJAT DAB DMW. Performed the experiments: EJAT. Analyzed the data: EJAT. Wrote the paper: EJAT DAB DMW.</p>
</fn>
</author-notes>
<pub-date pub-type="collection">
<month>3</month>
<year>2011</year>
</pub-date>
<pmc-comment> Fake ppub added to accomodate plos workflow change from 03/2008 and 03/2009 </pmc-comment>
<pub-date pub-type="ppub">
<month>3</month>
<year>2011</year>
</pub-date>
<pub-date pub-type="epub">
<day>31</day>
<month>3</month>
<year>2011</year>
</pub-date>
<volume>7</volume>
<issue>3</issue>
<elocation-id>e1001112</elocation-id>
<history>
<date date-type="received">
<day>24</day>
<month>7</month>
<year>2010</year>
</date>
<date date-type="accepted">
<day>24</day>
<month>2</month>
<year>2011</year>
</date>
</history>
<permissions>
<copyright-statement>Turnham et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.</copyright-statement>
</permissions>
<abstract>
<p>Sensorimotor learning has been shown to depend on both prior expectations and sensory evidence in a way that is consistent with Bayesian integration. Thus, prior beliefs play a key role during the learning process, especially when only ambiguous sensory information is available. Here we develop a novel technique to estimate the covariance structure of the prior over visuomotor transformations – the mapping between actual and visual location of the hand – during a learning task. Subjects performed reaching movements under multiple visuomotor transformations in which they received visual feedback of their hand position only at the end of the movement. After experiencing a particular transformation for one reach, subjects have insufficient information to determine the exact transformation, and so their second reach reflects a combination of their prior over visuomotor transformations and the sensory evidence from the first reach. We developed a Bayesian observer model in order to infer the covariance structure of the subjects' prior, which was found to give high probability to parameter settings consistent with visuomotor rotations. Therefore, although the set of visuomotor transformations experienced had little structure, the subjects had a strong tendency to interpret ambiguous sensory evidence as arising from rotation-like transformations. We then exposed the same subjects to a highly structured set of visuomotor transformations, designed to be very different from the set of visuomotor rotations. During this exposure the prior was found to have changed significantly to have a covariance structure that no longer favored rotation-like transformations. In summary, we have developed a technique which can estimate the full covariance structure of a prior in a sensorimotor task and have shown that the prior over visuomotor transformations favors a rotation-like structure. Moreover, through experience of a novel task structure, participants can appropriately alter the covariance structure of their prior.</p>
</abstract>
<abstract abstract-type="summary">
<title>Author Summary</title>
<p>When learning a new skill, such as riding a bicycle, we can adjust the commands we send to our muscles based on two sources of information. First, we can use sensory inputs to inform us how the bike is behaving. Second, we can use prior knowledge about the properties of bikes and how they behave in general. This prior knowledge is represented as a probability distribution over the properties of bikes. These two sources of information can then be combined by a process known as Bayes rule to identify optimally the properties of a particular bike. Here, we develop a novel technique to identify the probability distribution of a prior in a visuomotor learning task in which the visual location of the hand is transformed from the actual hand location, similar to when using a computer mouse. We show that subjects have a prior that tends to interpret ambiguous information about the task as arising from a visuomotor rotation but that experience of a particular set of visuomotor transformations can alter the prior.</p>
</abstract>
<counts>
<page-count count="13"></page-count>
</counts>
</article-meta>
</front>
<body>
<sec id="s1">
<title>Introduction</title>
<p>Uncertainty poses a fundamental problem for perception, action and decision-making. Despite our sensory inputs providing only a partial and noisy view of the world, and our motor outputs being corrupted by significant amounts of noise, we are able to both perceive and act on the world in what appears to be an efficient manner
<xref ref-type="bibr" rid="pcbi.1001112-Faisal1">[1]</xref>
,
<xref ref-type="bibr" rid="pcbi.1001112-Glimcher1">[2]</xref>
. The investigation of the computational principles that might underlie this capability has long been of interest to neuroscientists, behavioral economists and experimental psychologists. Helmholtz
<xref ref-type="bibr" rid="pcbi.1001112-Helmholtz1">[3]</xref>
was one of the first to propose that the brain might operate as an ‘inference machine’ by extracting perceptual information from uncertain sensory data through probabilistic estimation. This computational framework has now gained considerable experimental support and has recently led to the formulation of the ‘Bayesian brain’ hypothesis
<xref ref-type="bibr" rid="pcbi.1001112-Doya1">[4]</xref>
,
<xref ref-type="bibr" rid="pcbi.1001112-Knill1">[5]</xref>
. According to this hypothesis, the nervous system employs probabilistic internal models representing Bayesian probabilities about different states of the world that are updated in accordance with Bayesian statistics whenever new evidence is incorporated. Crucially, this update depends on two components: a
<italic>prior</italic>
that represents a statistical distribution over different possible states of the world, and the incoming
<italic>evidence</italic>
about the current state that is provided through noisy sensory data.</p>
<p>In the Bayesian framework the prior can have a strong impact on the update, with particular priors leading to inductive biases when confronted with insufficient information. Many perceptual biases have been explained as the influence of priors learned from the statistics of the real world, such as the prior for lower speed when interpreting visual motion
<xref ref-type="bibr" rid="pcbi.1001112-Weiss1">[6]</xref>
,
<xref ref-type="bibr" rid="pcbi.1001112-Stocker1">[7]</xref>
, the prior for lights to shine from above when interpreting object shape
<xref ref-type="bibr" rid="pcbi.1001112-Adams1">[8]</xref>
,
<xref ref-type="bibr" rid="pcbi.1001112-Langer1">[9]</xref>
and the prior that near-vertical visual stimuli are longer than horizontal stimuli
<xref ref-type="bibr" rid="pcbi.1001112-Howe1">[10]</xref>
. However, there are some phenomena such as the size-weight illusion – the smaller of two objects of equal weight feels heavier – that appear to act in the direction opposite to that expected from straightforward integration of the prior with sensory evidence
<xref ref-type="bibr" rid="pcbi.1001112-Flanagan1">[11]</xref>
,
<xref ref-type="bibr" rid="pcbi.1001112-Brayanov1">[12]</xref>
. Interestingly, despite the perceptual system thinking the smaller object is heavier, the motor system is not fooled as, after experience with the two objects, people generate identical forces when lifting them
<xref ref-type="bibr" rid="pcbi.1001112-Flanagan2">[13]</xref>
. Many cognitive biases can also be explained, not as errors in reasoning, but as the appropriate application of prior information
<xref ref-type="bibr" rid="pcbi.1001112-Kemp1">[14]</xref>
<xref ref-type="bibr" rid="pcbi.1001112-Acuna1">[16]</xref>
, and the Bayesian approach has been particularly successful in explaining human performance in cognitive tasks
<xref ref-type="bibr" rid="pcbi.1001112-Griffiths1">[17]</xref>
,
<xref ref-type="bibr" rid="pcbi.1001112-Sanborn1">[18]</xref>
.</p>
<p>In sensorimotor tasks, a number of studies have shown that when a participant is exposed to a task which has a fixed statistical distribution they incorporate this into their prior and combine it with new evidence in a way that is consistent with Bayesian estimation
<xref ref-type="bibr" rid="pcbi.1001112-Knill1">[5]</xref>
,
<xref ref-type="bibr" rid="pcbi.1001112-Krding1">[19]</xref>
,
<xref ref-type="bibr" rid="pcbi.1001112-Krding2">[20]</xref>
. Similarly, when several sources of evidence with different degrees of uncertainty have to be combined, for example a visual and a haptic cue, humans integrate the two sources of evidence by giving preference to the more reliable cue in quantitative agreement with Bayesian statistics
<xref ref-type="bibr" rid="pcbi.1001112-vanBeers1">[21]</xref>
<xref ref-type="bibr" rid="pcbi.1001112-Girshick1">[23]</xref>
. Moreover, computational models of motor control, such as optimal feedback control
<xref ref-type="bibr" rid="pcbi.1001112-Todorov1">[24]</xref>
<xref ref-type="bibr" rid="pcbi.1001112-Diedrichsen1">[27]</xref>
, are based on both Bayesian estimation and utility theory and have accounted for numerous phenomena in movement neuroscience such as variability patterns
<xref ref-type="bibr" rid="pcbi.1001112-Todorov1">[24]</xref>
, bimanual movement control
<xref ref-type="bibr" rid="pcbi.1001112-Diedrichsen2">[28]</xref>
,
<xref ref-type="bibr" rid="pcbi.1001112-Braun1">[29]</xref>
, task adaptation
<xref ref-type="bibr" rid="pcbi.1001112-Izawa1">[30]</xref>
<xref ref-type="bibr" rid="pcbi.1001112-Braun2">[32]</xref>
and object manipulation
<xref ref-type="bibr" rid="pcbi.1001112-Nagengast1">[33]</xref>
. There have also been several proposals for how such Bayesian processing may be implemented in neural circuits
<xref ref-type="bibr" rid="pcbi.1001112-Zemel1">[34]</xref>
<xref ref-type="bibr" rid="pcbi.1001112-Ma1">[36]</xref>
.</p>
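
The reliability-weighted cue combination described above has a simple closed form for Gaussian cues. A minimal sketch, assuming independent Gaussian noise on each cue and using made-up example values:

import numpy as np

def fuse_cues(mu_a, var_a, mu_b, var_b):
    """Precision-weighted (Bayesian) fusion of two independent Gaussian cues."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)   # weight given to cue A grows with its reliability
    mu = w_a * mu_a + (1.0 - w_a) * mu_b                # fused estimate
    var = 1.0 / (1.0 / var_a + 1.0 / var_b)             # fused variance is smaller than either cue's
    return mu, var

# Hypothetical example: a reliable visual cue and a noisier haptic cue of hand position (cm)
print(fuse_cues(mu_a=10.0, var_a=1.0, mu_b=12.0, var_b=4.0))   # estimate is pulled towards the more reliable visual cue
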
<p>If one uses Bayesian estimation in an attempt to learn the parameters of a new motor task, the prior over the parameters will impact on the estimates. While previously priors have been either imposed on a motor task or assumed, there has been no paradigm that allows the natural prior distribution to be assessed in sensorimotor tasks. Here we develop a technique capable of estimating the prior over tasks.</p>
<p>We examine visuomotor transformations, in which a discrepancy is introduced between the hand's actual and visual locations, and estimate the prior over visuomotor transformations. Importantly, we are not simply trying to estimate the mean of the prior but its full covariance structure. Subjects made reaching movements which alternated between batches in which feedback of the hand's position was either veridical or had a visuomotor transformation applied to it. By exposing participants to a large range of visuomotor transformations we are able to fit a Bayesian observer model to estimate the prior. Our model assumes that at the start of each transformation batch a prior is used to instantiate the belief over visuomotor transformations and this is used to update the posterior after each trial of a transformation batch. The prior to which the belief is reset at the start of a transformation trial may change with experience. For our model we estimate the average prior used over an experimental session by assuming it is fixed within a session, as we expect the prior to only change slowly in response to the statistics of experience.</p>
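
As a one-dimensional caricature of this reset-and-update scheme (scalar Gaussian belief, conjugate updates, illustrative variable names only):

import numpy as np

def session_estimates(batches, prior_mean, prior_var, noise_var):
    """Belief is re-initialised to the session-wide prior at the start of every
    transformation batch, then updated after each trial's observation."""
    estimates = []
    for observations in batches:              # one list of noisy observations per batch
        mean, var = prior_mean, prior_var     # reset to the prior (washed-out state)
        for y in observations:                # standard conjugate Gaussian update
            new_var = 1.0 / (1.0 / var + 1.0 / noise_var)
            mean = new_var * (mean / var + y / noise_var)
            var = new_var
            estimates.append(mean)            # per-trial point estimate
    return estimates

print(session_estimates([[2.0, 2.2], [-1.0]], prior_mean=0.0, prior_var=1.0, noise_var=0.5))
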
<p>Our approach allows us to study the inductive biases of visuomotor learning in a quantitative manner within a Bayesian framework and to estimate the prior distribution over transformations. Having estimated the prior in one experimental session, we examine whether extensive training in two further sessions with a particular distribution of visuomotor transformations could alter the participants' prior.</p>
</sec>
<sec id="s2">
<title>Results</title>
<p>Subjects made reaching movements to targets presented in the horizontal plane, with feedback of the hand position projected into the plane of movement by a virtual-reality projection system only at the end of each reach (terminal feedback). Reaches were from a starting circle,
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e001.jpg" mimetype="image"></inline-graphic>
</inline-formula>
in front of the subject's chest, to a target randomly chosen from within a rectangle centred 11 cm from the starting circle (
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e002.jpg" mimetype="image"></inline-graphic>
</inline-formula>
in front of the chest). Subjects made reaching movements in batches which were alternately veridical and transformed (
<xref ref-type="fig" rid="pcbi-1001112-g001">Figure 1</xref>
top, see
<xref ref-type="sec" rid="s4">Methods</xref>
for full details). In a veridical batch, the cursor was always aligned with the hand. In a transformation batch, subjects experienced a visuomotor transformation that remained constant throughout the batch and in which the terminal-feedback cursor position (
<bold>v</bold>
) was a linear transformation (specified by transformation matrix
<bold>T</bold>
) of the final hand position (
<bold>h</bold>
) relative to the (constant) starting point of the reaches:
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e003.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. In component form, this can be written as
<disp-formula>
<graphic xlink:href="pcbi.1001112.e004"></graphic>
</disp-formula>
</p>
<fig id="pcbi-1001112-g001" position="float">
<object-id pub-id-type="doi">10.1371/journal.pcbi.1001112.g001</object-id>
<label>Figure 1</label>
<caption>
<title>The experimental design.</title>
<p>Each session alternated between veridical and transformed batches of trials. Each subject participated in three sessions, the first using an uncorrelated distribution of transformations, and the second and third using a correlated distribution. The joint distributions of
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e005.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e006.jpg" mimetype="image"></inline-graphic>
</inline-formula>
are plotted.</p>
</caption>
<graphic xlink:href="pcbi.1001112.g001"></graphic>
</fig>
<p>where we define the
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e007.jpg" mimetype="image"></inline-graphic>
</inline-formula>
coordinates as (left-right, backward-forwards) relative to the subject. Each transformed batch used a different transformation. The number of transformations experienced was at least 108 for each subject in each of three experimental sessions (mean 147 transforms,
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e008.jpg" mimetype="image"></inline-graphic>
</inline-formula>
; see
<xref ref-type="table" rid="pcbi-1001112-t001">Table 1</xref>
). Transformation batches contained at least three trials (mean length: 4.9 trials,
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e009.jpg" mimetype="image"></inline-graphic>
</inline-formula>
) and generally continued until a target had been hit (achieved on 91% of batches). Veridical batches always continued until a target had been hit (mean length: 1.4 trials,
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e010.jpg" mimetype="image"></inline-graphic>
</inline-formula>
). The purpose of the veridical batches was to wash out short-term learning. Transformed trials were distinguished from veridical trials by the color of the targets, so that the onset of a new transformation was clear to the subjects. A session contained on average 921 trials (
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e011.jpg" mimetype="image"></inline-graphic>
</inline-formula>
) and lasted 82 minutes (
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e012.jpg" mimetype="image"></inline-graphic>
</inline-formula>
). Subjects performed three experimental sessions on different days. The transformations used in Session 1 were drawn from an ‘uncorrelated’ distribution so as to minimize pairwise correlations between elements of the transformation matrix. The transformations used in Sessions 2 & 3 were drawn from a ‘correlated’ distribution to examine whether this would change subjects' priors (see
<xref ref-type="fig" rid="pcbi-1001112-g001">Figure 1</xref>
bottom).</p>
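
For concreteness, the transformation above can be written as a 2x2 matrix acting on the hand displacement from the start circle; a minimal sketch (element names chosen here for illustration, not the paper's notation):

import numpy as np

def apply_transform(T, h):
    """Terminal-feedback cursor position v = T @ h, with h the final hand position
    relative to the start circle.  In components:
        v_x = T_xx * h_x + T_xy * h_y
        v_y = T_yx * h_x + T_yy * h_y
    """
    return np.asarray(T) @ np.asarray(h)

identity = np.eye(2)                                   # veridical feedback: cursor on the hand
theta = np.deg2rad(20)                                 # an example rotation-like transformation
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
print(apply_transform(rotation, [0.0, 0.11]))          # reach 11 cm straight ahead (metres)
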
<table-wrap id="pcbi-1001112-t001" position="float">
<object-id pub-id-type="doi">10.1371/journal.pcbi.1001112.t001</object-id>
<label>Table 1</label>
<caption>
<title>The experimental subjects.</title>
</caption>
<alternatives>
<graphic id="pcbi-1001112-t001-1" xlink:href="pcbi.1001112.t001"></graphic>
<table frame="hsides" rules="groups">
<colgroup span="1">
<col align="left" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
</colgroup>
<thead>
<tr>
<td align="left" rowspan="1" colspan="1"></td>
<td colspan="3" align="left" rowspan="1">Session 1</td>
<td colspan="3" align="left" rowspan="1">Session 2</td>
<td colspan="2" align="left" rowspan="1">Session 3</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Subject</td>
<td align="left" rowspan="1" colspan="1">Transforms</td>
<td align="left" rowspan="1" colspan="1">Trials</td>
<td align="left" rowspan="1" colspan="1">Delay</td>
<td align="left" rowspan="1" colspan="1">Transforms</td>
<td align="left" rowspan="1" colspan="1">Trials</td>
<td align="left" rowspan="1" colspan="1">Delay</td>
<td align="left" rowspan="1" colspan="1">Transforms</td>
<td align="left" rowspan="1" colspan="1">Trials</td>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">1</td>
<td align="left" rowspan="1" colspan="1">120</td>
<td align="left" rowspan="1" colspan="1">745</td>
<td align="left" rowspan="1" colspan="1">3</td>
<td align="left" rowspan="1" colspan="1">118</td>
<td align="left" rowspan="1" colspan="1">786</td>
<td align="left" rowspan="1" colspan="1">9</td>
<td align="left" rowspan="1" colspan="1">120</td>
<td align="left" rowspan="1" colspan="1">850</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">2</td>
<td align="left" rowspan="1" colspan="1">150</td>
<td align="left" rowspan="1" colspan="1">947</td>
<td align="left" rowspan="1" colspan="1">3</td>
<td align="left" rowspan="1" colspan="1">150</td>
<td align="left" rowspan="1" colspan="1">830</td>
<td align="left" rowspan="1" colspan="1">8</td>
<td align="left" rowspan="1" colspan="1">200</td>
<td align="left" rowspan="1" colspan="1">1102</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">3</td>
<td align="left" rowspan="1" colspan="1">144</td>
<td align="left" rowspan="1" colspan="1">827</td>
<td align="left" rowspan="1" colspan="1">4</td>
<td align="left" rowspan="1" colspan="1">150</td>
<td align="left" rowspan="1" colspan="1">860</td>
<td align="left" rowspan="1" colspan="1">8</td>
<td align="left" rowspan="1" colspan="1">180</td>
<td align="left" rowspan="1" colspan="1">977</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">4</td>
<td align="left" rowspan="1" colspan="1">133</td>
<td align="left" rowspan="1" colspan="1">944</td>
<td align="left" rowspan="1" colspan="1">3</td>
<td align="left" rowspan="1" colspan="1">140</td>
<td align="left" rowspan="1" colspan="1">929</td>
<td align="left" rowspan="1" colspan="1">9</td>
<td align="left" rowspan="1" colspan="1">160</td>
<td align="left" rowspan="1" colspan="1">1075</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">5</td>
<td align="left" rowspan="1" colspan="1">150</td>
<td align="left" rowspan="1" colspan="1">871</td>
<td align="left" rowspan="1" colspan="1">5</td>
<td align="left" rowspan="1" colspan="1">150</td>
<td align="left" rowspan="1" colspan="1">838</td>
<td align="left" rowspan="1" colspan="1">8</td>
<td align="left" rowspan="1" colspan="1">206</td>
<td align="left" rowspan="1" colspan="1">1076</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">6</td>
<td align="left" rowspan="1" colspan="1">140</td>
<td align="left" rowspan="1" colspan="1">970</td>
<td align="left" rowspan="1" colspan="1">6</td>
<td align="left" rowspan="1" colspan="1">124</td>
<td align="left" rowspan="1" colspan="1">928</td>
<td align="left" rowspan="1" colspan="1">9</td>
<td align="left" rowspan="1" colspan="1">155</td>
<td align="left" rowspan="1" colspan="1">1117</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">7</td>
<td align="left" rowspan="1" colspan="1">160</td>
<td align="left" rowspan="1" colspan="1">1090</td>
<td align="left" rowspan="1" colspan="1">5</td>
<td align="left" rowspan="1" colspan="1">151</td>
<td align="left" rowspan="1" colspan="1">1035</td>
<td align="left" rowspan="1" colspan="1">7</td>
<td align="left" rowspan="1" colspan="1">144</td>
<td align="left" rowspan="1" colspan="1">955</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">8</td>
<td align="left" rowspan="1" colspan="1">133</td>
<td align="left" rowspan="1" colspan="1">861</td>
<td align="left" rowspan="1" colspan="1">3</td>
<td align="left" rowspan="1" colspan="1">108</td>
<td align="left" rowspan="1" colspan="1">731</td>
<td align="left" rowspan="1" colspan="1">7</td>
<td align="left" rowspan="1" colspan="1">134</td>
<td align="left" rowspan="1" colspan="1">762</td>
</tr>
</tbody>
</table>
</alternatives>
<table-wrap-foot>
<fn id="nt101">
<label></label>
<p>The number of transformations and trials in each experimental session, and the lengths of the delay in days between sessions.</p>
</fn>
</table-wrap-foot>
</table-wrap>
<sec id="s2a">
<title>Initial analysis</title>
<p>
<xref ref-type="fig" rid="pcbi-1001112-g002">Figure 2</xref>
shows the starting location and rectangle in which the targets could appear together with 50 examples of ‘perturbation vectors’ that join the hand position on the first trial of a transformation batch to the displayed cursor position (
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e013.jpg" mimetype="image"></inline-graphic>
</inline-formula>
where
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e014.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is the trial index, in this case 1). On the first trial of each transformation batch, the ‘target-hand vector’ joining the centre of the target
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e015.jpg" mimetype="image"></inline-graphic>
</inline-formula>
to the final position of the hand
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e016.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(the ‘target-hand vector’
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e017.jpg" mimetype="image"></inline-graphic>
</inline-formula>
) was shorter than 3 cm in 90% of cases (
<xref ref-type="fig" rid="pcbi-1001112-g003">Figure 3</xref>
, column A, top panel), suggesting that the preceding veridical batches had washed out most of the learning. Subjects were instructed that on the second and subsequent trials of each transformation batch, they should attempt to compensate for the transformation in order to hit the target with the cursor. Hence on trials 2 and 3, the proportion of final hand positions within 3 cm of the target drops to 43% (middle panel of
<xref ref-type="fig" rid="pcbi-1001112-g003">Figure 3</xref>
, column A) and 36% (bottom panel), respectively. Further analysis suggests that the increase in length of the target-hand vectors on trials 2 and 3 is due to subjects attempting to counter the transformation, rather than just exploring the workspace randomly.
<xref ref-type="fig" rid="pcbi-1001112-g003">Figure 3</xref>
, column B shows that the direction of the target-hand vector tends to be opposite to that of the perturbation vector experienced on the previous trial, while column C shows that the lengths of these two vectors are positively correlated. The ratio of the length of the target-hand vector on the second trial to that of the perturbation vector on the first trial gives a measure of the extent of the adaptation induced by the experience on the first trial, with a value of zero suggesting no adaptation. We regressed this adaptation measure for all subjects and sessions (removing a few outliers – 0.34% – where this measure was greater than 5) against the absolute angular difference between the direction of the first and second targets, in order to test the assumption made later in our modelling that adaptation generalizes across the workspace. If there were a local generalization function with a decay based on target direction we would expect that the greater the angular difference the smaller the adaptation measure. The fit had a slope which was not significantly different from zero (
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e018.jpg" mimetype="image"></inline-graphic>
</inline-formula>
) suggesting global generalization.</p>
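
A minimal sketch of this generalization regression (ordinary least squares via scipy; array names are hypothetical and assume the per-batch adaptation ratios and target-direction differences have already been computed):

import numpy as np
from scipy import stats

def generalization_slope(adapt_ratio, angle_diff_deg):
    """Regress the trial-2 adaptation measure on the absolute angular difference
    between the first and second targets.  Ratios above 5 are discarded as
    outliers (as in the text); a slope indistinguishable from zero is consistent
    with global generalization across the workspace."""
    adapt_ratio = np.asarray(adapt_ratio, dtype=float)
    angle_diff = np.abs(np.asarray(angle_diff_deg, dtype=float))
    keep = adapt_ratio <= 5
    fit = stats.linregress(angle_diff[keep], adapt_ratio[keep])
    return fit.slope, fit.pvalue
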
<fig id="pcbi-1001112-g002" position="float">
<object-id pub-id-type="doi">10.1371/journal.pcbi.1001112.g002</object-id>
<label>Figure 2</label>
<caption>
<title>Target area and example perturbation vectors.</title>
<p>The starting point of the reaches (1 cm radius circle) and the area from which the centres of targets were drawn (
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e019.jpg" mimetype="image"></inline-graphic>
</inline-formula>
cm rectangle: not displayed to the subject) are shown, in addition to ‘perturbation vectors’ from subjects' hand positions to the corresponding cursor positions on the first trials of 50 example transformations from Session 1.</p>
</caption>
<graphic xlink:href="pcbi.1001112.g002"></graphic>
</fig>
<fig id="pcbi-1001112-g003" position="float">
<object-id pub-id-type="doi">10.1371/journal.pcbi.1001112.g003</object-id>
<label>Figure 3</label>
<caption>
<title>Analysis of hand positions across the trials of a transformation batch.</title>
<p>Column
<bold>A</bold>
shows the distribution (across all subjects and sessions) of the ‘target-hand vector’ representing the position of the hand relative to the target,
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e020.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, separately for trials 1, 2 & 3 of a transformation batch. Columns
<bold>B</bold>
and
<bold>C</bold>
show the relation between the target-hand vector and the ‘perturbation vector’ from hand to cursor on the previous trial,
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e021.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. Column B gives the distribution of the angle between the two vectors, and Column C plots the lengths of the vectors against each other. Columns
<bold>D</bold>
and
<bold>E</bold>
make the same comparisons between the target-hand vector and the target-hand vector that would place the cursor on the target,
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e022.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. Column D gives the distribution of the angle between the two vectors, and Column E plots the lengths of the vectors against each other.</p>
</caption>
<graphic xlink:href="pcbi.1001112.g003"></graphic>
</fig>
<p>Compensatory responses tend to be in the correct direction: Column D shows that target-hand vectors on trials 2 and 3 tend to be in the same direction as the target-hand vector that would place the cursor on the target (
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e023.jpg" mimetype="image"></inline-graphic>
</inline-formula>
), and column E shows that the lengths of these two vectors are also positively correlated. This suggests that subjects are adapting within a batch so as to compensate for the induced perturbation.</p>
</sec>
<sec id="s2b">
<title>Bayesian observer model</title>
<p>We fit subjects' performance on the first two trials of each transformed batch using a Bayesian observer model in which we assume subjects attempt to estimate the four parameters (
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e024.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e025.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e026.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, &
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e027.jpg" mimetype="image"></inline-graphic>
</inline-formula>
) of the transformation matrix. We represent the subject's prior as a four-dimensional multivariate Gaussian distribution over these four parameters, centred on the identity transformation (since subjects naturally expect the visual location of the hand to match its actual location). Our inference problem is to determine the
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e028.jpg" mimetype="image"></inline-graphic>
</inline-formula>
covariance matrix of this prior.
<xref ref-type="fig" rid="pcbi-1001112-g004">Figure 4</xref>
includes a schematic of a prior with the four-dimensional distribution shown as six two-dimensional marginalizations with isoprobability ellipses (blue), representing the relation between all possible pairings of the four elements of the transformation matrix.</p>
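
The two-dimensional views in Figure 4 are just marginalizations of this four-dimensional Gaussian. A minimal sketch of how such a prior might be represented and queried (the covariance values below are illustrative, not fitted):

import numpy as np

PARAMS = ["Txx", "Txy", "Tyx", "Tyy"]               # ordering of the four matrix elements
prior_mean = np.array([1.0, 0.0, 0.0, 1.0])         # centred on the identity transformation

# Hypothetical covariance favouring rotation-like structure
# (off-diagonal elements anti-correlated, diagonal elements positively correlated)
prior_cov = np.array([[ 0.02,  0.00,  0.00,  0.01],
                      [ 0.00,  0.05, -0.04,  0.00],
                      [ 0.00, -0.04,  0.05,  0.00],
                      [ 0.01,  0.00,  0.00,  0.02]])

def marginal_2d(cov, i, j):
    """2x2 marginal covariance of a pair of elements (a Gaussian marginal simply
    drops the other rows and columns)."""
    return cov[np.ix_([i, j], [i, j])]

def correlation(cov, i, j):
    return cov[i, j] / np.sqrt(cov[i, i] * cov[j, j])

print(correlation(prior_cov, PARAMS.index("Txy"), PARAMS.index("Tyx")))   # -0.8
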
<fig id="pcbi-1001112-g004" position="float">
<object-id pub-id-type="doi">10.1371/journal.pcbi.1001112.g004</object-id>
<label>Figure 4</label>
<caption>
<title>Schematic of the Bayesian observer model.</title>
<p>The plots show six 2-dimensional views of the 4-dimensional probability space of the
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e029.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e030.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e031.jpg" mimetype="image"></inline-graphic>
</inline-formula>
&
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e032.jpg" mimetype="image"></inline-graphic>
</inline-formula>
parameters of the transformation matrix. The Gaussian prior is shown in blue (marginalised 1 s.d. isoprobability ellipses). On the first trial the evidence the subject receives (for simplicity shown here as noiseless) does not fully specify the transformation uniquely, and the transformations consistent with this evidence are shown in gray. This evidence (as a likelihood) is combined with the prior to give the posterior after the first trial (red ellipses: these are shown calculated from the noisy visual feedback) and the MAP of this posterior is taken as the estimate of the transformation. The cross shows the position of the actual transformation matrix used in generating the first-trial evidence.</p>
</caption>
<graphic xlink:href="pcbi.1001112.g004"></graphic>
</fig>
<p>An optimal observer would integrate this prior with information received on the first trial (hand position and visual feedback of hand position) to generate a posterior over transformations. Even if there were no noise in proprioception or vision, the information from the first trial would not uniquely specify the underlying transformation. For example, for a particular feedback on the first trial the evidence is compatible with many settings of the four parameters (grey lines and planes in
<xref ref-type="fig" rid="pcbi-1001112-g004">Figure 4</xref>
). Therefore, given the inherent ambiguity (and noise in sensory inputs), the estimated transformation depends both on the sensory evidence and prior which together can be used to generate a posterior distribution over the four parameters of the transformation matrix (
<xref ref-type="fig" rid="pcbi-1001112-g004">Figure 4</xref>
, red ellipses). Our Bayesian observer then uses the most probable transformation (the MAP estimate is the centre of the red ellipses in
<xref ref-type="fig" rid="pcbi-1001112-g004">Figure 4</xref>
) to determine where to point on the second trial. Our aim is to infer the prior distribution for each subject in each experimental session by fitting the pointing location on the second trial based on the experience on the first trial. The model assumes the observer starts each transformation batch within a session with the same prior distribution, although this distribution will of course be updated during each batch by combination with evidence. As shown above, these updates are washed out between batches through the interleaved veridical batches.</p>
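<p>As an illustration of this inference step, the following minimal Python sketch implements a single linear-Gaussian update followed by a MAP-based prediction of the second reach. The parameterisation (the transformation matrix vectorised as (a, b, c, d)), the prior covariance, the noise level and all numerical values are illustrative assumptions, not values taken from the study.</p>
<preformat>
import numpy as np

# Illustrative prior over t = (a, b, c, d): identity mean, assumed covariance.
mu0 = np.array([1.0, 0.0, 0.0, 1.0])
Sigma0 = np.diag([0.3, 0.3, 0.3, 0.3]) ** 2      # placeholder variances
sigma_v = 1.0                                     # assumed feedback-noise s.d. (cm)

def design(h):
    """Map a hand position h = (hx, hy) to the 2x4 matrix H with v = H @ t."""
    hx, hy = h
    return np.array([[hx, hy, 0.0, 0.0],
                     [0.0, 0.0, hx, hy]])

def posterior(mu, Sigma, h, v, sigma_v):
    """Linear-Gaussian update of the distribution over t after seeing (h, v)."""
    H = design(h)
    S = H @ Sigma @ H.T + sigma_v**2 * np.eye(2)  # innovation covariance
    K = Sigma @ H.T @ np.linalg.inv(S)            # Kalman-style gain
    return mu + K @ (v - H @ mu), Sigma - K @ H @ Sigma

# First trial: hand ends at h1, the transformed cursor is seen at v1 (made-up values).
h1 = np.array([2.0, 11.0])
v1 = np.array([-1.0, 12.0])
mu1, Sigma1 = posterior(mu0, Sigma0, h1, v1, sigma_v)

# MAP transformation after trial 1 (the centre of the red ellipses in Figure 4).
T_map = mu1.reshape(2, 2)

# Second trial: aim the hand so that the predicted cursor lands on the target.
target2 = np.array([-3.0, 10.0])
h2_predicted = np.linalg.solve(T_map, target2)
print(T_map, h2_predicted)
</preformat>
<p>In the actual fitting procedure the direction of inference is reversed: the prior covariance (Sigma0 above) is the unknown, and it is adjusted so that the predicted second-trial hand positions best match the measured ones.</p>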
</sec>
<sec id="s2c">
<title>Session 1</title>
<p>In Session 1, transformations were sampled so as to minimize pairwise correlations between elements of the transformation matrix. This ‘uncorrelated’ distribution was designed to avoid inducing learning of new correlations. The set of transformations experienced in the first session is shown in the top-left cell of
<xref ref-type="fig" rid="pcbi-1001112-g005">Figure 5</xref>
, viewed in the same six projections of the four-dimensional space used in
<xref ref-type="fig" rid="pcbi-1001112-g004">Figure 4</xref>
. The Gaussian priors fit to each of the eight subjects' data in Session 1 are shown in the middle-left cell of
<xref ref-type="fig" rid="pcbi-1001112-g005">Figure 5</xref>
. For some pairs of elements of the transformation matrix (e.g.
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e033.jpg" mimetype="image"></inline-graphic>
</inline-formula>
) the prior appears to show little correlation whereas for others (e.g.
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e034.jpg" mimetype="image"></inline-graphic>
</inline-formula>
) there appears to be a stronger correlation. To quantify these relations we examined the correlation coefficients between each pair of elements of the transformation matrix across the subjects. First, to examine the consistency of the correlation across subjects we tested the null hypothesis that subjects' correlation coefficients were uniformly distributed between
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e035.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e036.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(Kolmogorov-Smirnov test). We found that only between elements
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e037.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e038.jpg" mimetype="image"></inline-graphic>
</inline-formula>
was the correlation significantly consistent (
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e039.jpg" mimetype="image"></inline-graphic>
</inline-formula>
). In addition we used a t-test to examine whether the correlations across subjects were significantly different from zero (although correlations are strictly speaking not normally distributed). We found that only the
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e040.jpg" mimetype="image"></inline-graphic>
</inline-formula>
correlation was significant (mean
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e041.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e042.jpg" mimetype="image"></inline-graphic>
</inline-formula>
).</p>
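<p>The two group-level tests described above can be sketched as follows, assuming the per-subject correlation coefficients have already been extracted from the fitted priors; the numbers are placeholders, not the experimental values.</p>
<preformat>
import numpy as np
from scipy import stats

# Placeholder per-subject correlation coefficients for one pair of matrix
# elements (eight subjects, as in Session 1).
r = np.array([-0.7, -0.5, -0.8, -0.6, -0.4, -0.9, -0.55, -0.65])

# Consistency across subjects: Kolmogorov-Smirnov test of the null hypothesis
# that the coefficients are uniformly distributed on [-1, 1]
# (loc = -1, scale = 2 parameterises that uniform distribution in scipy).
ks_stat, ks_p = stats.kstest(r, 'uniform', args=(-1, 2))

# Difference from zero: one-sample t-test on the coefficients (as noted in the
# text, correlation coefficients are not strictly normally distributed).
t_stat, t_p = stats.ttest_1samp(r, popmean=0.0)

print(f"KS: D={ks_stat:.2f}, p={ks_p:.3f};  t-test: t={t_stat:.2f}, p={t_p:.3f}")
</preformat>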
<fig id="pcbi-1001112-g005" position="float">
<object-id pub-id-type="doi">10.1371/journal.pcbi.1001112.g005</object-id>
<label>Figure 5</label>
<caption>
<title>Distributions of transformations and prior distributions in Sessions 1 and 2.</title>
<p>Left column: Session 1. Right column: Session 2. Top row: the distributions of transformations in the two sessions. In each case 700 of the experimental transformations are plotted in the six projections of the 4-D space of linear transformations used in
<xref ref-type="fig" rid="pcbi-1001112-g004">Figure 4</xref>
. Middle row: the priors fit to the data of the 8 subjects, plotted in the style used for the priors in
<xref ref-type="fig" rid="pcbi-1001112-g004">Figure 4</xref>
. Each covariance matrix has been scaled so that its largest eigenvalue is unity, in order that all priors can be displayed together without any being too small to see. Bottom row: confidence limits on covariance orientation angles, shown for each pairing of the four elements of the transformation matrix
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e043.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e044.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e045.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e046.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. These confidence limits were obtained by bootstrapping, as explained in
<xref ref-type="sec" rid="s4">Methods</xref>
. For each subject, thick lines show the mean angle across the 1000 or more resampled fits. Thin lines, connected to the mean line by curved arrows, give the 95% confidence limits. Only the range
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e047.jpg" mimetype="image"></inline-graphic>
</inline-formula>
to
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e048.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is labelled, because the data is axial and therefore only exists in a 180
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e049.jpg" mimetype="image"></inline-graphic>
</inline-formula>
range.</p>
</caption>
<graphic xlink:href="pcbi.1001112.g005"></graphic>
</fig>
<p>We also analyzed the orientations of these covariance ellipses. Confidence limits on the orientation angle of the long axis of each ellipse were obtained by bootstrapping. The bottom-left cell of
<xref ref-type="fig" rid="pcbi-1001112-g005">Figure 5</xref>
shows, for each subject, the mean angle (thick line) and the 95% confidence limits (thin lines connected by curved arrows). The
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e050.jpg" mimetype="image"></inline-graphic>
</inline-formula>
confidence limits are exclusively in the negative range for all but two subjects, while for all other pairings of elements confidence limits for most subjects overlap the
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e051.jpg" mimetype="image"></inline-graphic>
</inline-formula>
or
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e052.jpg" mimetype="image"></inline-graphic>
</inline-formula>
points, indicating an absence of correlation. The mean
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e053.jpg" mimetype="image"></inline-graphic>
</inline-formula>
angle
<italic>across</italic>
subjects was
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e054.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(95% confidence limits obtained by bootstrapping of the best fits:
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e055.jpg" mimetype="image"></inline-graphic>
</inline-formula>
to
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e056.jpg" mimetype="image"></inline-graphic>
</inline-formula>
). We also found that the
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e057.jpg" mimetype="image"></inline-graphic>
</inline-formula>
covariance angle was significantly positive (mean across subjects
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e058.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, confidence limits
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e059.jpg" mimetype="image"></inline-graphic>
</inline-formula>
to
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e060.jpg" mimetype="image"></inline-graphic>
</inline-formula>
).</p>
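<p>The bootstrapping of covariance orientation angles can be sketched as below. In the actual analysis each resample re-runs the full model fit; here the fit is reduced, for illustration only, to an empirical covariance of synthetic per-batch (b, c) estimates, and the handling of axial data (angles defined only modulo 180 degrees) uses the standard angle-doubling construction, which is an assumption about the exact procedure.</p>
<preformat>
import numpy as np

def orientation_angle(C):
    """Orientation (degrees, in (-90, 90]) of the long axis of a 2x2 covariance."""
    return 0.5 * np.degrees(np.arctan2(2 * C[0, 1], C[0, 0] - C[1, 1]))

def axial_mean(angles_deg):
    """Mean of axial angles (defined modulo 180 degrees): double, average, halve."""
    d = np.radians(2 * np.asarray(angles_deg))
    return 0.5 * np.degrees(np.arctan2(np.sin(d).mean(), np.cos(d).mean()))

rng = np.random.default_rng(0)

# Placeholder data: per-batch estimates of the (b, c) pair for one subject.
bc = rng.multivariate_normal([0.0, 0.0], [[0.04, -0.03], [-0.03, 0.04]], size=60)

angles = []
for _ in range(1000):
    resample = bc[rng.integers(0, len(bc), len(bc))]
    angles.append(orientation_angle(np.cov(resample, rowvar=False)))
angles = np.array(angles)

# Centre the bootstrap angles on their axial mean before taking percentiles,
# so the 95% limits are not distorted by the 180-degree wrap-around.
centre = axial_mean(angles)
dev = (angles - centre + 90) % 180 - 90
lo, hi = centre + np.percentile(dev, [2.5, 97.5])
print(f"mean angle {centre:.1f} deg, 95% CI [{lo:.1f}, {hi:.1f}] deg")
</preformat>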
</sec>
<sec id="s2d">
<title>Sessions 2 and 3</title>
<p>Each subject participated in Session 2 between three and six days after Session 1, and in Session 3 between seven and nine days after Session 2 (
<xref ref-type="table" rid="pcbi-1001112-t001">Table 1</xref>
). These sessions both used a set of transformations whose distribution was chosen so as to be very different from the subjects' priors measured in Session 1. This allowed us to examine whether we could change subjects' priors through experience. As subjects had priors with a strong negative correlation between elements
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e061.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e062.jpg" mimetype="image"></inline-graphic>
</inline-formula>
of the transformation matrix we used a ‘correlated distribution’ over transformations in which the
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e063.jpg" mimetype="image"></inline-graphic>
</inline-formula>
correlation was set to
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e064.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, with an orientation angle of
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e065.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(
<xref ref-type="fig" rid="pcbi-1001112-g005">Figure 5</xref>
, top-right cell). Importantly, the two distributions used in Session 1 and in Sessions 2 & 3 were designed so that the distribution of evidence (that is the relation between visual and actual hand locations) shown on the first trial of each transformation batch was identical under the two distributions (see
<xref ref-type="sec" rid="s4">Methods</xref>
). Therefore, any changes in behavior on the second trial (which we use to estimate the prior) arose because of changes in the subject's prior. The remainder of the trials within a batch have different statistics between Session 1 and Sessions 2 &amp; 3, so we did not use data beyond trial 2 to estimate the prior, although subjects could use these later trials to update their internal prior.</p>
<p>The priors fit to the data of the five subjects in Session 2 are shown in the middle-right cell of
<xref ref-type="fig" rid="pcbi-1001112-g005">Figure 5</xref>
. We found that in Session 2 the
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e066.jpg" mimetype="image"></inline-graphic>
</inline-formula>
correlations across subjects were now not significantly different from zero (mean correlation coefficient
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e067.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e068.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, t-test) and were not distributed significantly non-uniformly across subjects (
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e069.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, K-S test). Confidence limits (
<xref ref-type="fig" rid="pcbi-1001112-g005">Figure 5</xref>
, bottom-right cell) on the
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e070.jpg" mimetype="image"></inline-graphic>
</inline-formula>
covariance angle now overlapped
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e071.jpg" mimetype="image"></inline-graphic>
</inline-formula>
for all but one subject, again implying the absence of correlation. Confidence limits on the mean
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e072.jpg" mimetype="image"></inline-graphic>
</inline-formula>
covariance angle across subjects overlapped
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e073.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e074.jpg" mimetype="image"></inline-graphic>
</inline-formula>
to
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e075.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, mean
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e076.jpg" mimetype="image"></inline-graphic>
</inline-formula>
). A weak but significant
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e077.jpg" mimetype="image"></inline-graphic>
</inline-formula>
correlation was now found (mean
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e078.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e079.jpg" mimetype="image"></inline-graphic>
</inline-formula>
on t-test and K-S test), and the
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e080.jpg" mimetype="image"></inline-graphic>
</inline-formula>
covariance angle continued to be positive (mean
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e081.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, confidence limits
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e082.jpg" mimetype="image"></inline-graphic>
</inline-formula>
to
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e083.jpg" mimetype="image"></inline-graphic>
</inline-formula>
), although angles were not significant for any individual subject.</p>
<p>In Session 3 (see
<xref ref-type="fig" rid="pcbi-1001112-g006">Figure 6</xref>
, which summarises changes in the
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e084.jpg" mimetype="image"></inline-graphic>
</inline-formula>
relation across sessions) the
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e085.jpg" mimetype="image"></inline-graphic>
</inline-formula>
correlation was still not significant (mean correlation coefficient
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e086.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e087.jpg" mimetype="image"></inline-graphic>
</inline-formula>
on t-test and
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e088.jpg" mimetype="image"></inline-graphic>
</inline-formula>
on K-S test). The covariance angle confidence limits now overlapped zero within all subjects and across subjects (
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e089.jpg" mimetype="image"></inline-graphic>
</inline-formula>
to
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e090.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, mean
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e091.jpg" mimetype="image"></inline-graphic>
</inline-formula>
). A weak but significant
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e092.jpg" mimetype="image"></inline-graphic>
</inline-formula>
correlation was again found (mean
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e093.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e094.jpg" mimetype="image"></inline-graphic>
</inline-formula>
on t-test and
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e095.jpg" mimetype="image"></inline-graphic>
</inline-formula>
on K-S test), and the
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e096.jpg" mimetype="image"></inline-graphic>
</inline-formula>
covariance angle continued to be positive (mean across subjects
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e097.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, confidence limits
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e098.jpg" mimetype="image"></inline-graphic>
</inline-formula>
to
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e099.jpg" mimetype="image"></inline-graphic>
</inline-formula>
), although angles were only significant for three individual subjects.</p>
<fig id="pcbi-1001112-g006" position="float">
<object-id pub-id-type="doi">10.1371/journal.pcbi.1001112.g006</object-id>
<label>Figure 6</label>
<caption>
<title>Evolution of the
<italic>b</italic>
-
<italic>c</italic>
relationship.</title>
<p>The top line shows the best fits in each of the experimental sessions, for each of the eight subjects; the middle line shows means and confidence limits on the covariance orientation angles. The bottom-left graph shows the mean across subjects of the orientation angles from the best fits to each subject's data, with 95% confidence limits on the mean found by bootstrapping.</p>
</caption>
<graphic xlink:href="pcbi.1001112.g006"></graphic>
</fig>
</sec>
<sec id="s2e">
<title>Model comparison</title>
<p>To assess the extent to which our Bayesian observer model explained the data, we compared the magnitudes of its errors in predicting hand positions to the errors made by four other models: (A) the ‘no-adaptation’ model, which assumes the hand hits the centre of the target on all trials; (B) the ‘shift’ model, which is also a Bayesian observer but assumes the transformation is a translation; (C) the ‘rotation & uniform scaling’ model, another Bayesian observer that assumes the transformation is a rotation combined with a scaling; (D) the ‘affine’ model, which is a Bayesian observer more general than the standard model in that it accounts for linear transformations combined with shifts. Comparisons of hand position prediction error were made for each trial of a transformed batch from the 2nd to the 7th, although it should be remembered that trials after the 3rd represent progressively fewer batches, with only 44% of batches lasting to the 4th trial and only 19% lasting to the 7th. The Bayesian observer models integrated information about a transformation from all previous trials of a batch when making a prediction for the next trial. Since the Bayesian observer models were all fit to data from the second trials of each transformed batch (i.e. the standard model used the fits presented above), comparison of prediction errors on the second trials themselves was done using 10-fold cross-validation for these models, in order to avoid over-fitting by complex models.</p>
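<p>The cross-validated comparison on second trials can be sketched as follows. The two models below are simple stand-ins for the observer variants described above, and the data are synthetic; only the 10-fold splitting and the 20 cm error cap (see Figure 7) reflect the analysis described in the text.</p>
<preformat>
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the second-trial data: each row holds the second-trial
# target and the measured second-trial hand position (all values are made up).
targets = rng.uniform(-10, 10, size=(100, 2)) + np.array([0.0, 11.0])
hands = targets + rng.normal(0, 2, size=(100, 2))

def fit_no_adaptation(train_idx):
    """'No-adaptation' stand-in: predict that the hand lands on the target."""
    return lambda i: targets[i]

def fit_constant_offset(train_idx):
    """Toy alternative: predict the target plus the mean training offset."""
    offset = (hands[train_idx] - targets[train_idx]).mean(axis=0)
    return lambda i: targets[i] + offset

def cv_error(fit_fn, n_folds=10, cap=20.0):
    idx = rng.permutation(len(targets))
    folds = np.array_split(idx, n_folds)
    errs = []
    for k in range(n_folds):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        predict = fit_fn(train)
        for i in test:
            e = np.linalg.norm(predict(i) - hands[i])
            errs.append(min(e, cap))      # cap errors at 20 cm, as in Figure 7
    return np.mean(errs)

for name, fn in [("no adaptation", fit_no_adaptation),
                 ("constant offset", fit_constant_offset)]:
    print(name, cv_error(fn))
</preformat>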
<p>To compare the models we focus on trial 3, which is late enough that the subjects have received a considerable amount of information about the transformation (just enough to specify the whole transformation matrix, in noiseless conditions) but early enough that all batches can be included.
<xref ref-type="fig" rid="pcbi-1001112-g007">Figure 7</xref>
shows that on this trial the standard model makes smaller prediction errors for the hand positions (averaged across all sessions) than any other model. The next-best is the affine model (mean error 4.50 cm, versus 4.34 for the linear model). On all other trials, the linear model is also superior to all other models. The failure of the affine model to perform better than the standard model shows that its extra complexity, which allows it to account for shifts, is not necessary. Accounting for shifts made little difference to the linear components of the fits: the correlation coefficients between pairs of elements of the transformation matrix were very similar to those in the linear model fits (median absolute difference across all pairs: 0.11), and the
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e100.jpg" mimetype="image"></inline-graphic>
</inline-formula>
coefficients were again significantly negative in Session 1 (
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e101.jpg" mimetype="image"></inline-graphic>
</inline-formula>
on t-test and Kolmogorov-Smirnov test) and ceased to be significantly different from zero in Sessions 2 and 3. The covariance angles between pairs of elements were also very similar to those in the linear model fits (median absolute difference:
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e102.jpg" mimetype="image"></inline-graphic>
</inline-formula>
), and the
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e103.jpg" mimetype="image"></inline-graphic>
</inline-formula>
angles were significantly negative in Session 1 (95% confidence limits:
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e104.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e105.jpg" mimetype="image"></inline-graphic>
</inline-formula>
) and ceased to be significantly negative in Sessions 2 and 3.</p>
<fig id="pcbi-1001112-g007" position="float">
<object-id pub-id-type="doi">10.1371/journal.pcbi.1001112.g007</object-id>
<label>Figure 7</label>
<caption>
<title>Comparison of standard linear model against other plausible models.</title>
<p>Models are compared on the basis of their mean error, across subjects and sessions, in predicting subjects' hand positions on trials 2–7 of transformation batches. For each trial, all batches that lasted for at least that number of trials are used. Errors are capped at 20 cm before averaging, to reduce the effect of outliers. Trial 2 values are computed using 10-fold cross-validation, and later trial values are computed using fits to all transformation batches.</p>
</caption>
<graphic xlink:href="pcbi.1001112.g007"></graphic>
</fig>
<p>We also varied the origin of the linear transformations that we used in the Bayesian observer model, to see if the coordinate system used by the experimental subjects was based around the starting point of the reaches (small circle in
<xref ref-type="fig" rid="pcbi-1001112-g008">Figure 8</xref>
), or about some other location such as the eyes (cross in
<xref ref-type="fig" rid="pcbi-1001112-g008">Figure 8</xref>
). The shading in
<xref ref-type="fig" rid="pcbi-1001112-g008">Figure 8</xref>
represents the fitting error and shows that using the starting point of the reaches as the origin fits the data considerably better than any other position tested (mean error: 3.49 cm for the starting point, versus 3.61 cm for the next best position). In particular, a repeated-measures ANOVA (using subject number and session as the other two factors) shows that using the starting point as origin gives significantly lower errors than using the eye position (
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e106.jpg" mimetype="image"></inline-graphic>
</inline-formula>
).</p>
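<p>The statistical comparison of candidate origins can be sketched with a repeated-measures ANOVA, as below. The per-subject, per-session error values are placeholders; in the real analysis they come from re-fitting the observer model with hand and cursor positions expressed relative to each candidate origin.</p>
<preformat>
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(2)

# Placeholder mean prediction errors (cm) for two candidate origins.
rows = []
for subject in range(1, 9):
    for session in (1, 2, 3):
        rows.append((subject, session, 'start', rng.normal(3.5, 0.3)))
        rows.append((subject, session, 'eyes', rng.normal(3.9, 0.3)))
df = pd.DataFrame(rows, columns=['subject', 'session', 'origin', 'error'])

# Repeated-measures ANOVA with origin and session as within-subject factors.
res = AnovaRM(df, depvar='error', subject='subject',
              within=['origin', 'session']).fit()
print(res.anova_table)
</preformat>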
<fig id="pcbi-1001112-g008" position="float">
<object-id pub-id-type="doi">10.1371/journal.pcbi.1001112.g008</object-id>
<label>Figure 8</label>
<caption>
<title>Comparison of possible linear transformation origins for the Bayesian observer model.</title>
<p>For each small square the shading denotes the performance of the standard Bayesian observer model when the origin of the linear transformations is set to the centre of that square. Performance is measured using the error between modelled and measured second-trial hand positions, averaged within an experimental session for one subject (after capping all errors at 20 cm) and then averaged across all subjects and all sessions. The small circle shows the start point of the reaches, which is used as the origin in all other modelling. The cross shows the approximate position of the eyes (
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e107.jpg" mimetype="image"></inline-graphic>
</inline-formula>
cm).</p>
</caption>
<graphic xlink:href="pcbi.1001112.g008"></graphic>
</fig>
</sec>
</sec>
<sec id="s3">
<title>Discussion</title>
<p>By exposing participants to numerous linear transformations (
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e108.jpg" mimetype="image"></inline-graphic>
</inline-formula>
transformation matrices) in a virtual-reality reaching task in the horizontal plane we were able to estimate the prior subjects have over visuomotor transformations. After a new transformation had been experienced for a single trial, we fit the prior in a Bayesian observer model so as to best account for the subsequent reach. That is, for the subject the first reach provides a likelihood which together with his prior leads to a posterior over visuomotor transformations, the maximum of which determines his second reach. While the mean of the prior is assumed to be the identity transformation (vision of the hand is expected to be where the hand really is), we found the estimated prior to have a covariance structure with a strong negative correlation between the off-diagonal elements of the transformation matrix. We then exposed the participants in two further sessions to visuomotor transformations from a distribution that had a positive correlation between these off-diagonal elements (hence the opposite correlation structure to the prior), and remeasured the prior. The estimated prior had changed significantly in that there was now no correlation between the off-diagonal elements, demonstrating learning.</p>
<p>Our study has three key novel features. First, we have developed a technique which can, unlike previous paradigms, estimate the full covariance structure of a prior in a sensorimotor task. Second, we have shown that for our task the prior over visuomotor transformations favors rotation-like structures. Third, we have shown that through experience of a novel correlation structure between the task parameters, participants appropriately alter the covariance structure of their prior.</p>
<sec id="s3a">
<title>Measuring the prior</title>
<p>Previous studies have attempted to determine the natural co-ordinate system used for visuomotor transformations. The dominant paradigm has been to expose subjects to a limited alteration in the visuomotor map and examine generalisation to novel locations in the workspace. These studies show that when a single visual location is remapped to a new proprioceptive location, the visuomotor map shows extensive changes throughout the workspace when examined in one-dimensional
<xref ref-type="bibr" rid="pcbi.1001112-Bedford1">[37]</xref>
<xref ref-type="bibr" rid="pcbi.1001112-Welch1">[40]</xref>
and in three-dimensional tasks
<xref ref-type="bibr" rid="pcbi.1001112-Vetter1">[41]</xref>
. These studies are limited in two ways in their ability to examine the prior over visuomotor transformations. First, they only examine how subjects generalize after experiencing one (or a very limited set of) alterations between visual and proprioceptive inputs. As such the results may depend on the particular perturbation chosen. Second, while the generalization to novel locations can provide information about the co-ordinate system used, it provides no information about the covariance structure of the prior. Our paradigm is able to address both these limitations using many novel visual-proprioceptive mappings to estimate the full covariance structure of the prior over visuomotor transformations.</p>
<p>To study this covariance structure in the fitted priors, we analyzed both the correlation coefficients between elements of the transformation matrix – as a measure of the strength of the relationship between elements – and also the orientation of the covariance ellipses of pairs of elements – as a measure of the slope of the relationship. A significant strong negative correlation was seen between the off-diagonal elements of the
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e109.jpg" mimetype="image"></inline-graphic>
</inline-formula>
transformation matrices in the priors found in Session 1. Such a relation is found in a rotation matrix,
<disp-formula>
<graphic xlink:href="pcbi.1001112.e110"></graphic>
</disp-formula>
</p>
<p>as this corresponds to
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e111.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e112.jpg" mimetype="image"></inline-graphic>
</inline-formula>
in our transformation matrix. This similarity suggests a bias for subjects to interpret transformations as conforming to rotation-like structures. The
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e113.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e114.jpg" mimetype="image"></inline-graphic>
</inline-formula>
relations would still exist if a rotation were combined with a uniform scaling. We do not claim that subjects believe the transformations to be only rotations and uniform scalings. If they did, we should have found a
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e115.jpg" mimetype="image"></inline-graphic>
</inline-formula>
relationship between
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e116.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e117.jpg" mimetype="image"></inline-graphic>
</inline-formula>
in the prior and a strong
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e118.jpg" mimetype="image"></inline-graphic>
</inline-formula>
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e119.jpg" mimetype="image"></inline-graphic>
</inline-formula>
relationship, but the
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e120.jpg" mimetype="image"></inline-graphic>
</inline-formula>
covariance angle was around
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e121.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and the
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e122.jpg" mimetype="image"></inline-graphic>
</inline-formula>
correlation was weak. Rather, it seems likely that the subjects believed many of the transformations in Session 1 to be rotations combined with other perturbations.</p>
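<p>For concreteness, the correspondence with a rotation, optionally combined with a uniform scaling s, can be written out explicitly (standard linear algebra, assuming the row-wise labelling of the transformation matrix used in the sketches in this record):</p>
<preformat>
\[
s\,R(\theta) \;=\; s\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}
\;=\; \begin{pmatrix} a & b \\ c & d \end{pmatrix}
\quad\Longrightarrow\quad
a = d = s\cos\theta, \qquad b = -c = -s\sin\theta ,
\]
</preformat>
<p>so rotations (with or without uniform scaling) lie exactly on the lines a = d and b = -c, which is the negative off-diagonal relationship picked out by the Session 1 priors.</p>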
<p>Vetter and colleagues
<xref ref-type="bibr" rid="pcbi.1001112-Vetter1">[41]</xref>
also found an apparent bias for rotations. However, these were rotations about the eyes, whereas the centre of the coordinate system in our model is the starting circle, approximately 30 cm in front of the eyes. We showed that our subjects' data across all sessions is best explained using the starting circle as the origin of transformations, rather than the eyes or any other location (
<xref ref-type="fig" rid="pcbi-1001112-g008">Figure 8</xref>
). The two studies are not contradictory, because our subjects were shown the cursor on top of the start circle at the start and end of every trial, and so would have been likely to learn that it was the origin of the transformations.</p>
<p>Importantly, to measure the prior we ensured that the distribution of transformations in the first session was relatively unstructured in the space of the four elements of the transformation matrix, and in particular the distribution of transformations used had only a very small correlation between the off-diagonal elements. Therefore, it is unlikely (particularly given the adaptation results discussed below) that the prior for rotations came about because of the particular set of transformations used in our paradigm.</p>
<p>Our approach of probing a subject's prior with many transformations would be disrupted if the learning of these transformations interfered with each other. Many studies have shown interference between the learning of similar but opposing visuomotor perturbations
<xref ref-type="bibr" rid="pcbi.1001112-Wigmore1">[42]</xref>
<xref ref-type="bibr" rid="pcbi.1001112-Krakauer1">[44]</xref>
, similar to that found between two dynamic perturbations
<xref ref-type="bibr" rid="pcbi.1001112-BrashersKrug1">[45]</xref>
,
<xref ref-type="bibr" rid="pcbi.1001112-Shadmehr1">[46]</xref>
. However, subjects in those experiments were trained for dozens of trials on each perturbation; learning of individual transformations over just a few trials in our experiment would have been much less resilient to overwriting with new memories. Additionally, the veridical batches between each transformation in our experiment would have washed out any
<italic>perceptual</italic>
or
<italic>non-cognitive</italic>
component of learning
<xref ref-type="bibr" rid="pcbi.1001112-Bedford2">[38]</xref>
,
<xref ref-type="bibr" rid="pcbi.1001112-Weiner1">[47]</xref>
<xref ref-type="bibr" rid="pcbi.1001112-Redding3">[50]</xref>
.</p>
<p>The previous work on visuomotor generalization cited above
<xref ref-type="bibr" rid="pcbi.1001112-Bedford1">[37]</xref>
<xref ref-type="bibr" rid="pcbi.1001112-Baily1">[39]</xref>
,
<xref ref-type="bibr" rid="pcbi.1001112-Vetter1">[41]</xref>
, which found that experiencing single visual-proprioceptive pairs induced remapping throughout the workspace, justifies the assumption made in the analysis of the current study that perturbations experienced at one location will induce adaptive responses throughout the workspace. In addition, our analysis shows that the magnitude of the adaptive response on the second trial does not decrease with the angular deviation of the second target from the first, providing further support for global generalization under terminal feedback. Another reaching study
<xref ref-type="bibr" rid="pcbi.1001112-Ghahramani1">[51]</xref>
found much more limited generalization across locations, but was criticized
<xref ref-type="bibr" rid="pcbi.1001112-Vetter1">[41]</xref>
on the grounds that the starting point of reaches was not controlled, and that subjects were constrained to make unnatural reaching movements at the height of the shoulder. Work with visual feedback of the hand position throughout the reach has found that scalings are generalized throughout the workspace but rotations are learned only locally
<xref ref-type="bibr" rid="pcbi.1001112-Krakauer2">[52]</xref>
. This lack of generalization is clearly at odds with the weight of evidence from terminal-feedback studies. The difference is perhaps due to differing extents of cognitive adaptation under the two feedback conditions.</p>
</sec>
<sec id="s3b">
<title>Altering the prior</title>
<p>Recent studies have shown that when exposed to tasks that follow a structured distribution, subjects can learn this structure and use it to facilitate learning of novel tasks corresponding to the structure
<xref ref-type="bibr" rid="pcbi.1001112-Braun3">[53]</xref>
. In the current study, when participants were exposed to a structured distribution of transformations in Sessions 2 & 3 we found that participants' priors changed to become closer to the novel distribution. The estimated prior's negative correlation between the off-diagonal elements observed in the Session 1 priors was abolished by training on a distribution of transformations in which these off-diagonal elements were set to be equal and therefore perfectly positively correlated. This abolition in the fitted priors is evidenced both by the orientations of the covariance ellipses between the off-diagonal elements, which became clustered around
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e123.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, and by the correlation coefficients for this pair of elements, which also clustered around zero. Importantly, the perturbations on the first reach of each transformed batch in Sessions 2 & 3 were generated identically to those in Session 1 so that we can be sure it is the prior that has changed, as the evidence shown to the subject was identically distributed and only varied in terms of the feedback on the second and subsequent trials.</p>
<p>Previous studies have also demonstrated the ability of people to learn priors over novel sensorimotor tasks. For instance, one study showed that subjects learned a non-zero-mean Gaussian prior over horizontal shifts
<xref ref-type="bibr" rid="pcbi.1001112-Krding1">[19]</xref>
, while reaction-time studies
<xref ref-type="bibr" rid="pcbi.1001112-Carpenter1">[54]</xref>
succeeded in teaching subjects non-uniform prior distributions over potential targets for a saccade. Similarly, other studies have shown that priors, such as the relation between size and weight
<xref ref-type="bibr" rid="pcbi.1001112-Flanagan3">[55]</xref>
and over the direction of light sources in determining shape from shading
<xref ref-type="bibr" rid="pcbi.1001112-Adams1">[8]</xref>
, can be adapted through experience of a training set which differs from the normal prior. In many of these previous studies only the mean of the learned prior was measured, and the priors were generally one-dimensional whereas in the current study we expose subjects to distributions in which there is a novel and multi-dimensional covariance structure. This difference in dimensionality may also explain why a one-dimensional structure of visuomotor rotations
<xref ref-type="bibr" rid="pcbi.1001112-Braun3">[53]</xref>
could perhaps be learned faster than the three-dimensional structure of transformations used in Sessions 2 & 3 in the present study, which was never learned fully. As dimensionality increases, the amount of data required by a subject to specify the structure increases dramatically.</p>
</sec>
<sec id="s3c">
<title>Extensions of the technique</title>
<p>In the current study we have made a number of simplifying assumptions which facilitated our analysis but which we believe in future studies could be relaxed. First, we have analysed the prior within the Cartesian coordinate system in which the prior is over the elements of the set of
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e124.jpg" mimetype="image"></inline-graphic>
</inline-formula>
transformation matrices. We believe this coordinate system to be a reasonable starting point for such research, since the visuomotor generalization studies cited above found visuomotor generalization to be linear
<xref ref-type="bibr" rid="pcbi.1001112-Bedford1">[37]</xref>
,
<xref ref-type="bibr" rid="pcbi.1001112-Bedford2">[38]</xref>
,
<xref ref-type="bibr" rid="pcbi.1001112-Vetter1">[41]</xref>
. In particular, the bias seems to be for rotations
<xref ref-type="bibr" rid="pcbi.1001112-Vetter1">[41]</xref>
rather than shifts in Cartesian space, which are not linear transformations; some studies describe generalization of shifts but as they either only examine a one-dimensional array of targets
<xref ref-type="bibr" rid="pcbi.1001112-Bedford1">[37]</xref>
,
<xref ref-type="bibr" rid="pcbi.1001112-Bedford2">[38]</xref>
or a single generalization target
<xref ref-type="bibr" rid="pcbi.1001112-Simani1">[56]</xref>
their results cannot distinguish between rotations and shifts.</p>
<p>Furthermore, the comparison of different models in this paper (
<xref ref-type="fig" rid="pcbi-1001112-g007">Figure 7</xref>
) shows that our linear-transformations model performs better than a more complex affine-transformations model and than simpler models such as the shift model. This suggests that our linear-transformations model is of the right level of complexity for explaining subjects' performance in this paradigm. That the shift model performed considerably better than the no-adaptation model does not show that subjects believed any transformations to have a shift component, nor that the extra complexity of the affine-transformations model is therefore necessary. Rather, the shift model may have simply managed to approximate linear transformations (such as small rotations) as shifts.</p>
<p>A further simplifying assumption was that the prior takes on a multivariate Gaussian distribution over elements of the transformation matrix. The true prior could be both nonlinear and non-Gaussian in our parameterization and as such our estimation may be an approximation to the true prior. While it may be possible to develop techniques to find a prior which has more complex structure, such as a mixture of Gaussians, such an analysis would require far more data for the extra degrees of freedom incurred by a more complex model.</p>
<p>Another model assumption is that the subject uses the MAP transformation to choose his hand position. Although it is common for Bayesian decision models to use point estimates of parameters when making decisions, different rules that also take into account the observer's uncertainty over the transformation may better model the data.</p>
<p>Our model was purely parametric, with the observer performing inference directly over the parameters of the transformation matrix. In the future it will be interesting to consider hierarchical observer models which would perform inference over
<italic>structures</italic>
of transformations, such as rotations, uniform scaling or shearings, and simultaneously over the parameters within each structure, such as the angle of the rotation. This observer would have a prior over structures and over the parameters within each structure. Nevertheless, our study shows that we can estimate the full covariance structure of a prior in a sensorimotor task, that this prior has similar form across subjects and that it can be altered by novel experience.</p>
</sec>
</sec>
<sec sec-type="methods" id="s4">
<title>Methods</title>
<sec id="s4a">
<title>Experimental methods</title>
<p>All eight subjects were naïve to the purpose of the experiments. Experiments were performed using a vBOT planar robotic manipulandum
<xref ref-type="bibr" rid="pcbi.1001112-Howard1">[57]</xref>
. Subjects used their right hand to grasp the handle, which they could move freely in the horizontal plane. A planar virtual reality projection system was used to overlay images into the plane of movement of the vBOT handle. Subjects were not able to see their arm.</p>
<sec id="s4a1">
<title>Ethics statement</title>
<p>All subjects gave written informed consent in accordance with the requirements of the Psychology Research Ethics Committee of the University of Cambridge.</p>
</sec>
<sec id="s4a2">
<title>First session</title>
<p>In the first session, subjects alternated between making reaching movements under veridical and transformed feedback (see
<xref ref-type="fig" rid="pcbi-1001112-g001">Figure 1</xref>
for a summary of the experimental design). On each trial subjects made a reach from a midline starting circle (1 cm radius,
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e125.jpg" mimetype="image"></inline-graphic>
</inline-formula>
in front of the subject's chest) to a visually presented target. To initiate a trial the hand had to be stationary within the starting circle (speed less than
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e126.jpg" mimetype="image"></inline-graphic>
</inline-formula>
for 800 ms), at which point the visual target (2 cm radius) appeared. The target location was selected pseudorandomly from a
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e127.jpg" mimetype="image"></inline-graphic>
</inline-formula>
rectangle centred 11 cm further in front of the subject's chest than the starting location (see
<xref ref-type="fig" rid="pcbi-1001112-g002">Figure 2</xref>
). In the veridical batches, visual feedback of the final hand location (0.5 cm radius cursor) was displayed for 1 s at the end of the movement (hand speed less than
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e128.jpg" mimetype="image"></inline-graphic>
</inline-formula>
for 300 ms). Subjects then returned their hand to the starting circle, and the cursor representing their hand was only displayed when the hand was within 1.5 cm of the centre of the starting circle. Subjects repeated trials (with a new target selected uniformly subject to its direction from the starting circle being
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e129.jpg" mimetype="image"></inline-graphic>
</inline-formula>
from the preceding target) until they managed to place the centre of the hand cursor within a target circle. They then performed a batch of transformed trials.</p>
<p>Transformed trials were the same as veridical trials except that: 1) a linear transformation was applied between the hand's final location and the displayed cursor position and this transformation was kept fixed within a batch; 2) the position of the visual target (3 cm radius) had to satisfy an added requirement not to overlap the cursor position of the preceding trial; 3) to end a batch subjects had to complete at least three trials and place the centre of the hand cursor within a target circle, and 4) starting on the eighth trial, a batch could spontaneously terminate with a probability of 0.2 after each trial.</p>
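<p>Rules 3 and 4 above can be summarised in a few lines of Python-style pseudocode (a paraphrase of the description, not the actual experiment code; how the two rules interact on a trial where both apply is an interpretation):</p>
<preformat>
import random

def transformed_batch_should_end(trial_number, hit_target):
    """Termination rule for a transformed batch, as described in the text."""
    # Rule 3: at least three trials, and the target circle must be hit.
    if trial_number >= 3 and hit_target:
        return True
    # Rule 4: from the eighth trial onwards, spontaneous termination with p = 0.2.
    if trial_number >= 8 and random.random() < 0.2:
        return True
    return False
</preformat>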
<p>For the transformed trials the cursor position (
<bold>v</bold>
) was a linear transformation (specified by transformation matrix
<bold>T</bold>
) of the final hand position (
<bold>h</bold>
) relative to the starting circle:
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e130.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. In component form, this can be written:
<disp-formula>
<graphic xlink:href="pcbi.1001112.e131"></graphic>
</disp-formula>
</p>
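<p>Under the row-wise labelling of the transformation matrix assumed in the sketches in this record (a, b in the top row and c, d in the bottom row), the component form amounts to the following; this is a reconstruction, since the published equation is available here only as an image:</p>
<preformat>
\[
\begin{pmatrix} v_x \\ v_y \end{pmatrix}
= \begin{pmatrix} a & b \\ c & d \end{pmatrix}
  \begin{pmatrix} h_x \\ h_y \end{pmatrix}
\quad\Longleftrightarrow\quad
v_x = a\,h_x + b\,h_y, \qquad v_y = c\,h_x + d\,h_y .
\]
</preformat>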
<p>The target color, yellow or blue, indicated whether the trial was veridical or transformed respectively. Subjects were told that on ‘blue’ trials the feedback was not of their actual hand position, but was related to their hand position by a rule. Subjects were told to attempt to learn, and compensate for, this rule in order to hit the targets, and that the rule would be constant across trials until they had hit a target and a set of ‘yellow’ trials had begun. They were told that a new rule was chosen each time a new set of blue trials started, and was unrelated to the rule of the previous set.</p>
</sec>
<sec id="s4a3">
<title>Second and third sessions</title>
<p>In the second and third sessions, subjects again alternated between making reaching movements under veridical and transformed feedback. However, in the transformed feedback batches, full-feedback trials were included in which the transformed hand cursor was continuously displayed throughout the trial, in order to speed up learning of the transformations and thus of the distribution of transformations. On these trials the batch did not terminate on reaching the target (1 cm radius) and these trials occurred randomly after the third trial with probability
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e132.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, where
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e133.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is a trial counter that starts at 1 on the fourth trial and resets to 0 after a full-feedback trial. Thus this probability rises with each consecutive terminal-feedback trial, and drops to zero on the trial after a full-feedback trial.</p>
</sec>
<sec id="s4a4">
<title>Correlated distribution of transformations</title>
<p>To sample a transformation from the correlated distribution used in sessions 2 and 3, elements
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e134.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e135.jpg" mimetype="image"></inline-graphic>
</inline-formula>
of the transformation matrix were sampled from the uniform distribution
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e136.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. Elements
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e137.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e138.jpg" mimetype="image"></inline-graphic>
</inline-formula>
were set equal to each other and were sampled from a zero-mean Gaussian distribution with standard deviation
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e139.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. To ensure that the target was reachable, a proposed transformation was then rejected and resampled if it mapped the hand cursor for any hand position within the target rectangle outside the central 80% of either dimension of the
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e140.jpg" mimetype="image"></inline-graphic>
</inline-formula>
screen, or if it required the hand position to be further than 30 cm from the starting circle to hit any possible target. The resulting distribution of transformations is shown in the top-right cell of
<xref ref-type="fig" rid="pcbi-1001112-g005">Figure 5</xref>
. This distribution was chosen based on pilot experiments which suggested that subjects have a prior that
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e141.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and hence setting
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e142.jpg" mimetype="image"></inline-graphic>
</inline-formula>
would differ from this prior and engender new learning.</p>
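<p>A sketch of this rejection-sampling procedure is given below. The uniform range for a and d, the Gaussian standard deviation for b = c, and the screen and workspace geometry are placeholders, since the published values appear here only inside typeset equations; only the structure of the procedure follows the description above.</p>
<preformat>
import numpy as np

rng = np.random.default_rng(3)

# Placeholder geometry (cm), relative to the starting circle at the origin.
TARGET_RECT = (-15.0, 15.0, 6.0, 16.0)     # x_min, x_max, y_min, y_max (assumed)
SCREEN_LIMITS = (-24.0, 24.0, -4.0, 28.0)  # central 80% of the display (assumed)
MAX_REACH = 30.0                           # hand must stay within 30 cm of the start

def sample_correlated_transformation(a_range=(0.2, 1.8), bc_sd=0.5):
    """Draw T = [[a, b], [c, d]] with b = c, rejecting unreachable targets."""
    while True:
        a, d = rng.uniform(*a_range, size=2)   # placeholder uniform range
        b = c = rng.normal(0.0, bc_sd)         # equal off-diagonal elements
        T = np.array([[a, b], [c, d]])
        if acceptable(T):
            return T

def acceptable(T):
    x0, x1, y0, y1 = TARGET_RECT
    corners = np.array([[x0, y0], [x0, y1], [x1, y0], [x1, y1]])
    # 1) The cursor must stay on the central part of the screen for hand
    #    positions inside the target rectangle (for a linear map of a
    #    rectangle it suffices to check the corners).
    cursor = corners @ T.T
    sx0, sx1, sy0, sy1 = SCREEN_LIMITS
    if (cursor[:, 0].min() < sx0 or cursor[:, 0].max() > sx1 or
            cursor[:, 1].min() < sy0 or cursor[:, 1].max() > sy1):
        return False
    # 2) The hand position needed to put the cursor on any possible target
    #    must be within MAX_REACH of the starting circle.
    needed_hand = corners @ np.linalg.inv(T).T
    return bool(np.all(np.linalg.norm(needed_hand, axis=1) <= MAX_REACH))
</preformat>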
</sec>
<sec id="s4a5">
<title>Uncorrelated distribution of transformations</title>
<p>In Session 1, the transformation on the first trial was also selected from the correlated distribution. This ensured that the distribution of evidence given to the subject on the first trial was consistent across sessions. However, on the second trial of a batch a new transformation consistent with the first-trial evidence was chosen, and then used for this and all remaining trials of the batch. This new transformation is treated in our analysis as if it had been the transformation throughout the batch, since it would have generated the same evidence on the first trial as the transformation from the correlated distribution. The new transformation was chosen such that across batches there were negligible correlations between any pair of elements in the eventual transformation matrices. To achieve this, at the start of the second trial elements
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e143.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e144.jpg" mimetype="image"></inline-graphic>
</inline-formula>
were drawn from Gaussians with
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e145.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and means 1 and 0 respectively, and
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e146.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e147.jpg" mimetype="image"></inline-graphic>
</inline-formula>
were then uniquely specified so as to be consistent with the hand and cursor positions of the first trial. The rules for rejecting proposed transformations from the correlated distribution were also applied when choosing an uncorrelated transformation on the second trial of a batch in Session 1; if a proposed transformation was rejected, further candidates were drawn until one consistent with the first-trial evidence was found. The resulting uncorrelated distribution of the transformation matrices of the second and subsequent trials of the transformed batches of Session 1 (
<xref ref-type="fig" rid="pcbi-1001112-g005">Figure 5</xref>
, top-left cell) shows minimal correlations between the four elements of the matrix (
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e148.jpg" mimetype="image"></inline-graphic>
</inline-formula>
across all pairs), while each element of the matrix has a standard deviation similar to that in the correlated distribution (
<xref ref-type="table" rid="pcbi-1001112-t002">Table 2</xref>
).</p>
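<p>The step in which the remaining two elements are 'uniquely specified' follows directly from the component form v_x = a h_x + b h_y, v_y = c h_x + d h_y (assuming the row-wise parameterisation used in the sketches above, and assuming for illustration that a and c are the sampled elements while b and d are solved; the published equations identify the specific elements and standard deviations, which are replaced by placeholders here).</p>
<preformat>
import numpy as np

rng = np.random.default_rng(4)

def uncorrelated_consistent_transformation(h1, v1, a_sd=0.3, c_sd=0.3):
    """Sample a and c, then solve for b and d so that T maps the first-trial
    hand position h1 exactly onto the observed cursor position v1."""
    hx, hy = h1
    vx, vy = v1
    a = rng.normal(1.0, a_sd)      # mean 1, placeholder s.d.
    c = rng.normal(0.0, c_sd)      # mean 0, placeholder s.d.
    b = (vx - a * hx) / hy         # from v_x = a*h_x + b*h_y  (requires h_y != 0)
    d = (vy - c * hx) / hy         # from v_y = c*h_x + d*h_y
    return np.array([[a, b], [c, d]])

# Example with made-up first-trial evidence (cm, relative to the start).
T = uncorrelated_consistent_transformation(h1=(2.0, 11.0), v1=(-1.0, 12.0))
print(T, T @ np.array([2.0, 11.0]))   # the product reproduces v1
</preformat>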
<table-wrap id="pcbi-1001112-t002" position="float">
<object-id pub-id-type="doi">10.1371/journal.pcbi.1001112.t002</object-id>
<label>Table 2</label>
<caption>
<title>Statistics of the two distributions of transformations.</title>
</caption>
<alternatives>
<graphic id="pcbi-1001112-t002-2" xlink:href="pcbi.1001112.t002"></graphic>
<table frame="hsides" rules="groups">
<colgroup span="1">
<col align="left" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
</colgroup>
<thead>
<tr>
<td colspan="2" align="left" rowspan="1"></td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e149.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e150.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e151.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e152.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">Correlation in uncorrelated distribution</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e153.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">1.00</td>
<td align="left" rowspan="1" colspan="1">0.13</td>
<td align="left" rowspan="1" colspan="1">0.05</td>
<td align="left" rowspan="1" colspan="1">0.13</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e154.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">1.00</td>
<td align="left" rowspan="1" colspan="1">−0.09</td>
<td align="left" rowspan="1" colspan="1">0.03</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e155.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">1.00</td>
<td align="left" rowspan="1" colspan="1">0.01</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e156.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">1.00</td>
</tr>
<tr>
<td colspan="2" align="left" rowspan="1">S.D. in uncorrelated distribution</td>
<td align="left" rowspan="1" colspan="1">0.64</td>
<td align="left" rowspan="1" colspan="1">0.62</td>
<td align="left" rowspan="1" colspan="1">0.72</td>
<td align="left" rowspan="1" colspan="1">0.53</td>
</tr>
<tr>
<td colspan="2" align="left" rowspan="1">S.D. in correlated distribution</td>
<td align="left" rowspan="1" colspan="1">0.53</td>
<td align="left" rowspan="1" colspan="1">0.54</td>
<td align="left" rowspan="1" colspan="1">0.54</td>
<td align="left" rowspan="1" colspan="1">0.41</td>
</tr>
<tr>
<td colspan="2" align="left" rowspan="1">Mean in uncorrelated distribution</td>
<td align="left" rowspan="1" colspan="1">1.12</td>
<td align="left" rowspan="1" colspan="1">0.01</td>
<td align="left" rowspan="1" colspan="1">−0.01</td>
<td align="left" rowspan="1" colspan="1">1.07</td>
</tr>
<tr>
<td colspan="2" align="left" rowspan="1">Mean in correlated distribution</td>
<td align="left" rowspan="1" colspan="1">1.17</td>
<td align="left" rowspan="1" colspan="1">0.03</td>
<td align="left" rowspan="1" colspan="1">0.03</td>
<td align="left" rowspan="1" colspan="1">0.99</td>
</tr>
</tbody>
</table>
</alternatives>
<table-wrap-foot>
<fn id="nt102">
<label></label>
<p>Top: statistics of the ‘uncorrelated’ and ‘correlated’ distributions, estimated from the 1130 transforms used in Session 1 and the 1091 transforms used in Session 2 respectively.</p>
</fn>
</table-wrap-foot>
</table-wrap>
</sec>
</sec>
<sec id="s4b">
<title>Modelling</title>
<sec id="s4b1">
<title>The standard model</title>
<p>Our observer model starts each transformation batch within an experimental session with the same prior probability distribution over transformations. Over the course of each batch, it optimally combines this prior with the evidence shown to the subject, and on each trial uses the updated distribution to select its final hand position.</p>
<p>We vectorize the transformation matrix, i.e.
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e157.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, in order to model the probability distribution over transformations as a multivariate Gaussian
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e158.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. This distribution on the first trial of a transformation batch is the prior,
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e159.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. The prior mean is the identity transform:
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e160.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. Our inference problem is to determine the
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e161.jpg" mimetype="image"></inline-graphic>
</inline-formula>
prior covariance matrix
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e162.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. For mathematical simplicity, we actually performed inference on the precision matrix
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e163.jpg" mimetype="image"></inline-graphic>
</inline-formula>
.</p>
<p>On any transformed trial
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e164.jpg" mimetype="image"></inline-graphic>
</inline-formula>
of a batch, the subject has access to the actual (
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e165.jpg" mimetype="image"></inline-graphic>
</inline-formula>
) and transformed visual location of the hand (
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e166.jpg" mimetype="image"></inline-graphic>
</inline-formula>
). Our observer can use Bayes rule to update its distribution over transformations with this new evidence:
<disp-formula>
<graphic xlink:href="pcbi.1001112.e167"></graphic>
</disp-formula>
</p>
<p>Our aim is to find the prior
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e168.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, which we can replace with
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e169.jpg" mimetype="image"></inline-graphic>
</inline-formula>
since it is reasonable to assume that the subject does not believe the transformation
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e170.jpg" mimetype="image"></inline-graphic>
</inline-formula>
to depend on the first-trial hand position. The likelihood function is:
<disp-formula>
<graphic xlink:href="pcbi.1001112.e171"></graphic>
</disp-formula>
</p>
<p>since for tractability we model the internal representation of the hand position
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e172.jpg" mimetype="image"></inline-graphic>
</inline-formula>
as noiseless, with all noise being on the transformed hand position
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e173.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(although in reality this noise consists of two components affecting both
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e174.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e175.jpg" mimetype="image"></inline-graphic>
</inline-formula>
). Thus the model observer's probability distribution over the actual
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e176.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, given the
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e177.jpg" mimetype="image"></inline-graphic>
</inline-formula>
it observes, is
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e178.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, where
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e179.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. This noise, which in reality reflects both motor and visual noise, was modelled as an isotropic Gaussian because a preliminary experiment with unperturbed reaching movements found the combined motor and visual noise in this paradigm to be close to isotropic.</p>
<p>We now express the likelihood function in terms of the vectorized transformation matrix (
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e180.jpg" mimetype="image"></inline-graphic>
</inline-formula>
):
<disp-formula>
<graphic xlink:href="pcbi.1001112.e181"></graphic>
</disp-formula>
</p>
<p>where
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e182.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is a function of
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e183.jpg" mimetype="image"></inline-graphic>
</inline-formula>
:
<disp-formula>
<graphic xlink:href="pcbi.1001112.e184"></graphic>
</disp-formula>
</p>
<p>We multiply this Gaussian likelihood by the Gaussian distribution over transformations to give an updated distribution over transformations
<xref ref-type="bibr" rid="pcbi.1001112-Bishop1">[58]</xref>
:
<disp-formula>
<graphic xlink:href="pcbi.1001112.e185"></graphic>
</disp-formula>
</p>
<p>where
<disp-formula>
<graphic xlink:href="pcbi.1001112.e186"></graphic>
</disp-formula>
</p>
<p>The observer then takes the MAP estimate of the transformation (
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e187.jpg" mimetype="image"></inline-graphic>
</inline-formula>
) and applies its inverse to the target position on the next trial
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e188.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, such that the predicted hand position is
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e189.jpg" mimetype="image"></inline-graphic>
</inline-formula>
.</p>
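<p>The prior construction and the per-trial update described above amount to standard Bayesian linear regression on the vectorized transformation. The following sketch, in Python/NumPy rather than the Matlab used for the original analysis, illustrates one such update and the resulting prediction; the vectorization order and the numerical values are illustrative assumptions, not taken from the article.</p>
<preformat>
import numpy as np

def design_matrix(x):
    # H(x) such that, for a = (A11, A12, A21, A22), A @ x equals H(x) @ a.
    x1, x2 = x
    return np.array([[x1, x2, 0.0, 0.0],
                     [0.0, 0.0, x1, x2]])

def update_transformation_belief(prior_mean, prior_prec, x_hand, y_cursor, sigma2=1.0):
    # One linear-Gaussian update of the belief over the vectorized transformation.
    H = design_matrix(x_hand)
    post_prec = prior_prec + H.T @ H / sigma2
    post_mean = np.linalg.solve(post_prec,
                                prior_prec @ prior_mean + H.T @ y_cursor / sigma2)
    return post_mean, post_prec

def predicted_hand_position(post_mean, target):
    # Apply the inverse of the MAP (= posterior mean) transformation to the target.
    A_map = post_mean.reshape(2, 2)
    return np.linalg.solve(A_map, target)

# Illustrative trial: identity prior mean, isotropic prior covariance of 0.5**2.
prior_mean = np.array([1.0, 0.0, 0.0, 1.0])
prior_prec = np.eye(4) / 0.5**2
x_hand = np.array([0.0, 10.0])        # actual hand position (cm)
y_cursor = np.array([3.0, 9.0])       # observed cursor position (cm)
post_mean, post_prec = update_transformation_belief(prior_mean, prior_prec, x_hand, y_cursor)
print(predicted_hand_position(post_mean, np.array([0.0, 10.0])))
</preformat>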
<p>It can be shown that scaling the visual noise constant,
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e190.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, will simply induce the same scaling in the prior covariance
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e191.jpg" mimetype="image"></inline-graphic>
</inline-formula>
on all trials, with no effect on the predicted hand positions on the second and subsequent trials. Since our analysis focusses on the shape rather than the absolute size of the prior covariance, we simply set
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e192.jpg" mimetype="image"></inline-graphic>
</inline-formula>
to 1 cm
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e193.jpg" mimetype="image"></inline-graphic>
</inline-formula>
.</p>
</sec>
<sec id="s4b2">
<title>Fitting the model</title>
<p>For a given prior covariance over the elements of the transformation matrix, the model predicts the optimal locations for the reaches on the second trial of each batch (
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e194.jpg" mimetype="image"></inline-graphic>
</inline-formula>
). As a measure of goodness-of-fit we computed a robust estimate of error between the predicted and actual hand position (
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e195.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is the Euclidean error on trial 2 of transformation batch
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e196.jpg" mimetype="image"></inline-graphic>
</inline-formula>
) across the
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e197.jpg" mimetype="image"></inline-graphic>
</inline-formula>
batches of a session for one subject,
<disp-formula>
<graphic xlink:href="pcbi.1001112.e198"></graphic>
</disp-formula>
</p>
<p>with
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e199.jpg" mimetype="image"></inline-graphic>
</inline-formula>
set to 10 cm. Use of this robust error measure reduces sensitivity to outliers. Our choice of
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e200.jpg" mimetype="image"></inline-graphic>
</inline-formula>
was made in order to maximize sensitivity to errors in the 4–10 cm range that was typical of the model's predictive errors. We found that using different values for
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e201.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(5 and 20 cm) did not affect our main findings: significantly negative correlation coefficients between
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e202.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e203.jpg" mimetype="image"></inline-graphic>
</inline-formula>
in Session 1 (
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e204.jpg" mimetype="image"></inline-graphic>
</inline-formula>
on t-test and Kolmogorov-Smirnov test) that ceased to be significant in Sessions 2 and 3; and significantly negative angles of the
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e205.jpg" mimetype="image"></inline-graphic>
</inline-formula>
covariance in Session 1 that then clustered around zero and ceased to be significantly negative in Sessions 2 and 3.</p>
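<p>The displayed equation defining the robust cost is not reproduced in this excerpt. Purely for illustration, the sketch below assumes a saturating error of the Gaussian-kernel form 1 − exp(−e²/(2α²)) with α = 10 cm; such a form is bounded for large errors and is most sensitive to mid-range errors, matching the qualitative description above, but it should not be taken as the article's exact definition.</p>
<preformat>
import numpy as np

def robust_cost(errors_cm, alpha=10.0):
    # Saturating, outlier-insensitive cost over second-trial Euclidean errors.
    # NOTE: assumed functional form; the article defines its own robust error.
    e = np.asarray(errors_cm, dtype=float)
    return np.sum(1.0 - np.exp(-e**2 / (2.0 * alpha**2)))
</preformat>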
<p>We then optimized the covariance matrix for each subject in each session to minimize the cost. We did this by optimizing the 10 free elements of the
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e206.jpg" mimetype="image"></inline-graphic>
</inline-formula>
upper triangular matrix
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e207.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, where
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e208.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. This guarantees that
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e209.jpg" mimetype="image"></inline-graphic>
</inline-formula>
will be symmetric and positive semi-definite (a requirement of a precision or covariance matrix). To further constrain
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e210.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, and thus its inverse
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e211.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, to be positive-definite, the diagonal elements of
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e212.jpg" mimetype="image"></inline-graphic>
</inline-formula>
were constrained to be positive. These steps do not prevent near-singular matrices from being evaluated; to avoid such numerical problems,
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e213.jpg" mimetype="image"></inline-graphic>
</inline-formula>
was added to
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e214.jpg" mimetype="image"></inline-graphic>
</inline-formula>
before evaluation of the cost during fitting and at the end of the fitting process.</p>
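<p>A minimal sketch of this parameterization, assuming Python/NumPy and an illustrative jitter constant (the article's small additive constant appears only as an inline image):</p>
<preformat>
import numpy as np

def precision_from_upper_triangle(theta, jitter=1e-6):
    # Build a symmetric positive-definite 4x4 precision matrix from the 10 free
    # elements of an upper-triangular matrix U, via Lambda = U.T @ U.
    U = np.zeros((4, 4))
    U[np.triu_indices(4)] = np.asarray(theta, dtype=float)
    # The diagonal of U is constrained to be positive during fitting; taking
    # absolute values here is an illustrative stand-in for that bound constraint.
    d = np.diag_indices(4)
    U[d] = np.abs(U[d])
    return U.T @ U + jitter * np.eye(4)
</preformat>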
<p>A trust-region-reflective algorithm implemented by the
<bold>fmincon</bold>
function of Matlab's Optimization Toolbox was used, with fits started from random precision matrices
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e215.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, where
<bold>B</bold>
is a
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e216.jpg" mimetype="image"></inline-graphic>
</inline-formula>
matrix whose elements are independently drawn from a zero mean Gaussian distribution with
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e217.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. One hundred fits were run for each session, and the one with the lowest cost was chosen.</p>
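<p>A sketch of the multi-start fitting procedure, with SciPy's general-purpose minimizer standing in for Matlab's fmincon (the trust-region-reflective settings and the standard deviation of the entries of B are not reproduced here and are assumed for illustration):</p>
<preformat>
import numpy as np
from scipy.optimize import minimize

def fit_prior(cost_of_theta, n_restarts=100, init_sd=1.0, seed=0):
    # Run many local fits from random starting precision matrices Lambda = B.T @ B
    # and keep the fit with the lowest cost.
    rng = np.random.default_rng(seed)
    results = []
    for _ in range(n_restarts):
        B = rng.normal(0.0, init_sd, size=(4, 4))
        Lam0 = B.T @ B + 1e-6 * np.eye(4)          # random positive-definite start
        U0 = np.linalg.cholesky(Lam0).T            # upper-triangular factor
        theta0 = U0[np.triu_indices(4)]
        results.append(minimize(cost_of_theta, theta0, method="L-BFGS-B"))
    return min(results, key=lambda r: r.fun)
</preformat>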
</sec>
<sec id="s4b3">
<title>Validating the model</title>
<p>A total of 825 simulated datasets were created by sampling random ‘generating’ priors (created in the same way as the random precision matrices used to initialize model fits) and running the model on an artificial experiment with 150 transformations chosen as for the real experiments. Zero-mean Gaussian noise of covariance
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e218.jpg" mimetype="image"></inline-graphic>
</inline-formula>
– so chosen to simulate noise from real subjects – was added to the cursor positions.</p>
<p>The model was fit to each of these datasets by taking the best of 100 fits. These best fits always gave a lower cost than did the generating prior, due to the finite sample size of the artificial data set. Since our analysis of priors concentrates on the covariance orientation angles and correlation coefficients between pairs of elements, we sought to establish that the differences between these statistics in the generating and fitted priors were small. The median absolute difference in covariance angle between the generating prior and the fitted prior was
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e219.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(
<xref ref-type="fig" rid="pcbi-1001112-g009">Figure 9A</xref>
), compared to
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e220.jpg" mimetype="image"></inline-graphic>
</inline-formula>
when comparing two randomly-generated priors (
<xref ref-type="fig" rid="pcbi-1001112-g009">Figure 9B</xref>
). Likewise, the median absolute difference in correlation coefficient between the generating prior and the fitted prior was 0.09 (
<xref ref-type="fig" rid="pcbi-1001112-g009">Figure 9C</xref>
), compared to 0.72 for random priors (
<xref ref-type="fig" rid="pcbi-1001112-g009">Figure 9D</xref>
). The fitted correlation was of the wrong sign in 10% of cases, compared to 50% for random priors.</p>
<fig id="pcbi-1001112-g009" position="float">
<object-id pub-id-type="doi">10.1371/journal.pcbi.1001112.g009</object-id>
<label>Figure 9</label>
<caption>
<title>Model validation.</title>
<p>(A) The distribution of the difference in covariance orientation angle between pairs of elements in the generating and fitted priors, aggregated across all six pairings of elements. (B) The corresponding distribution when random priors are compared. (C) The distribution of the absolute difference in correlation coefficient between pairs of elements in the generating and fitted priors, aggregated across all six pairings of elements. (D) The corresponding distribution when random priors are compared.</p>
</caption>
<graphic xlink:href="pcbi.1001112.g009"></graphic>
</fig>
</sec>
<sec id="s4b4">
<title>Model variations</title>
<p>The standard Bayesian observer model described above correctly assumes the cursor position to be at a linear transformation of the hand position,
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e221.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. Three other observer models, using the same Bayesian principle but making different assumptions about the transformation, were developed.</p>
<p>The ‘shift’ model assumes the cursor position to be at a shift of the hand position,
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e222.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. The mean shift in the prior
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e223.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is set at zero. The update equations for the distribution
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e224.jpg" mimetype="image"></inline-graphic>
</inline-formula>
are
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e225.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e226.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. To select its next hand position, the model applies the inverse of the mean shift
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e227.jpg" mimetype="image"></inline-graphic>
</inline-formula>
to the target position, such that the predicted hand position is
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e228.jpg" mimetype="image"></inline-graphic>
</inline-formula>
.</p>
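<p>The shift model's update is the standard conjugate-Gaussian update for an unknown 2-d offset observed under isotropic noise. The sketch below is consistent with the description above but is not a verbatim transcription of the article's displayed update equations (which appear only as images); variable names and the noise value are illustrative.</p>
<preformat>
import numpy as np

def update_shift_belief(prior_mean, prior_prec, x_hand, y_cursor, sigma2=1.0):
    # Gaussian update for the shift model, y = x + t + noise.
    post_prec = prior_prec + np.eye(2) / sigma2
    post_mean = np.linalg.solve(post_prec,
                                prior_prec @ prior_mean + (y_cursor - x_hand) / sigma2)
    return post_mean, post_prec

def shift_prediction(post_mean, target):
    # Aim so that the estimated shift carries the hand onto the target.
    return target - post_mean
</preformat>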
<p>The ‘rotation & scaling’ model assumes transformations to consist of a rotation and uniform scaling. This was implemented in polar coordinates centred on the start position, as a shift by
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e229.jpg" mimetype="image"></inline-graphic>
</inline-formula>
of the angular coordinate and a multiplication by
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e230.jpg" mimetype="image"></inline-graphic>
</inline-formula>
of the radial coordinate. This can be written as,
<disp-formula>
<graphic xlink:href="pcbi.1001112.e231"></graphic>
</disp-formula>
</p>
<p>or in vector form,
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e232.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. The mean transformation in the prior
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e233.jpg" mimetype="image"></inline-graphic>
</inline-formula>
has zero rotation and a scaling gain
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e234.jpg" mimetype="image"></inline-graphic>
</inline-formula>
of unity. The update equations for the distribution
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e235.jpg" mimetype="image"></inline-graphic>
</inline-formula>
are
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e236.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e237.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. The visual noise covariance
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e238.jpg" mimetype="image"></inline-graphic>
</inline-formula>
was diagonal, with radial variance 1 cm
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e239.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and angular variance 0.1
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e240.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, designed to be isotropic at an eccentricity of 10 cm (as in the standard model, we fix the magnitude of the variance; see above). The model selects its hand positions using the MAP transformation:
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e241.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e242.jpg" mimetype="image"></inline-graphic>
</inline-formula>
.</p>
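<p>Because the cursor's polar coordinates are linear in the parameters (the rotation adds to the hand angle and the gain multiplies the hand radius), the update is again a linear-Gaussian step. The sketch below illustrates this; the angular-variance value (chosen here so that the noise is roughly isotropic at 10 cm, as described) and the neglect of angle wrapping are simplifying assumptions for illustration.</p>
<preformat>
import numpy as np

def to_polar(p):
    # (angle, radius) of a point relative to the start position at the origin.
    return np.arctan2(p[1], p[0]), np.hypot(p[0], p[1])

def update_rot_scale_belief(prior_mean, prior_prec, hand, cursor,
                            var_theta=0.01, var_r=1.0):
    # Linear-Gaussian update for (phi, g): theta_c = theta_h + phi, r_c = g * r_h.
    th_h, r_h = to_polar(hand)
    th_c, r_c = to_polar(cursor)
    H = np.array([[1.0, 0.0],
                  [0.0, r_h]])
    R_inv = np.diag([1.0 / var_theta, 1.0 / var_r])
    z = np.array([th_c - th_h, r_c])          # angle wrapping ignored here
    post_prec = prior_prec + H.T @ R_inv @ H
    post_mean = np.linalg.solve(post_prec,
                                prior_prec @ prior_mean + H.T @ R_inv @ z)
    return post_mean, post_prec

def rot_scale_prediction(post_mean, target):
    # Invert the MAP rotation-and-scaling to aim the hand at the target.
    phi, g = post_mean
    th_t, r_t = to_polar(target)
    th, r = th_t - phi, r_t / g
    return np.array([r * np.cos(th), r * np.sin(th)])
</preformat>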
<p>The ‘affine transformations’ model is the most general of all, assuming the hand position to be subject to a linear transformation and a shift,
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e243.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. As for the standard model, the transformation equation can be linearized to
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e244.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, where
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e245.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<disp-formula>
<graphic xlink:href="pcbi.1001112.e246"></graphic>
</disp-formula>
</p>
<p>The mean transformation is
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e247.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, and the update equations are identical to those for the standard model. The MAP transformation
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e248.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is converted into its linear and shift parts
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e249.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e250.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, for the purpose of choosing the model hand position on the next trial:
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e251.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. The
<inline-formula>
<inline-graphic xlink:href="pcbi.1001112.e252.jpg" mimetype="image"></inline-graphic>
</inline-formula>
Gaussian distribution over the parameters of the affine transformation did not have covariance between the linear and shift parameters, i.e.
<disp-formula>
<graphic xlink:href="pcbi.1001112.e253"></graphic>
</disp-formula>
</p>
<p>in order to restrict the number of free parameters to 13 (rather than a possible 21).</p>
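<p>For the affine model the same linear-Gaussian machinery applies, with a 2-by-6 design matrix and a block-diagonal prior precision over the six parameters (no coupling between the four linear and two shift parameters). A sketch, with an assumed parameter ordering:</p>
<preformat>
import numpy as np

def affine_design_matrix(x):
    # H(x) for w = (A11, A12, A21, A22, t1, t2), so that A @ x + t = H(x) @ w.
    x1, x2 = x
    return np.array([[x1, x2, 0.0, 0.0, 1.0, 0.0],
                     [0.0, 0.0, x1, x2, 0.0, 1.0]])

def block_diagonal_precision(prec_linear, prec_shift):
    # 4x4 block for the linear part plus 2x2 block for the shift part:
    # 10 + 3 = 13 free parameters, as in the text.
    P = np.zeros((6, 6))
    P[:4, :4] = prec_linear
    P[4:, 4:] = prec_shift
    return P
</preformat>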
<p>The same trust-region-reflective algorithm as for the standard model was used to fit the affine model. A slower active-set algorithm, also implemented by the <bold>fmincon</bold> function of Matlab's Optimization Toolbox, was used to fit the shift and rotation & scaling models; the choice of optimization method was less critical for these models, which have fewer parameters.</p>
<p>Models were compared on the basis of errors between the predicted and actual hand positions. These predictive errors were capped at 20 cm to minimize the effect of outliers, then averaged across all transformations within an experimental session, and then across all subjects and sessions. For trials 3–7 of transformed batches, the Bayesian observer models used priors fit to the second trial of all transformation batches. For comparing prediction errors on the second trial itself, 10-fold cross-validation was used so that complex models did not benefit from over-fitting. The transformations experienced by a subject in one session were assigned to 10 non-overlapping and evenly-spaced groups. For example, if the session included 111 transformations, group 1 consisted of transformations 1, 11, 21, ..., 101, 111; group 2 consisted of transformations 2, 12, 22, ..., 92, 102, etc. Second-trial hand positions were predicted for each group using priors fit as normal to the other nine groups.</p>
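<p>A short sketch of the interleaved group assignment used for the 10-fold cross-validation described above (indices are 1-based to match the example in the text):</p>
<preformat>
def interleaved_folds(n_transforms, n_folds=10):
    # Group k contains transformations k, k + n_folds, k + 2*n_folds, ...
    return [list(range(k, n_transforms + 1, n_folds))
            for k in range(1, n_folds + 1)]

# Example from the text: a session with 111 transformations.
folds = interleaved_folds(111)
assert folds[0][:3] == [1, 11, 21] and folds[0][-1] == 111
assert folds[1][:3] == [2, 12, 22] and folds[1][-1] == 102
</preformat>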
</sec>
</sec>
</sec>
</body>
<back>
<fn-group>
<fn fn-type="conflict">
<p>The authors have declared that no competing interests exist.</p>
</fn>
<fn fn-type="financial-disclosure">
<p>This work was funded by the Wellcome Trust and the European SENSOPAC project (IST-2005 028056). DAB was supported by the German Academic Exchange Service (DAAD). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.</p>
</fn>
</fn-group>
<ref-list>
<title>References</title>
<ref id="pcbi.1001112-Faisal1">
<label>1</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Faisal</surname>
<given-names>AA</given-names>
</name>
<name>
<surname>Selen</surname>
<given-names>LPJ</given-names>
</name>
<name>
<surname>Wolpert</surname>
<given-names>DM</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>Noise in the nervous system.</article-title>
<source>Nat Rev Neurosci</source>
<volume>9</volume>
<fpage>292</fpage>
<lpage>303</lpage>
<pub-id pub-id-type="pmid">18319728</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1001112-Glimcher1">
<label>2</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Glimcher</surname>
<given-names>PW</given-names>
</name>
</person-group>
<year>2003</year>
<article-title>Decisions, Uncertainty, and the Brain.</article-title>
<publisher-loc>Cambridge, (Massachusetts)</publisher-loc>
<publisher-name>MIT Press</publisher-name>
</element-citation>
</ref>
<ref id="pcbi.1001112-Helmholtz1">
<label>3</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Helmholtz</surname>
<given-names>H</given-names>
</name>
</person-group>
<year>1925</year>
<article-title>Treatise on physiological optics (1867).</article-title>
<publisher-loc>Rochester, New York</publisher-loc>
<publisher-name>Optical Society of America</publisher-name>
</element-citation>
</ref>
<ref id="pcbi.1001112-Doya1">
<label>4</label>
<element-citation publication-type="book">
<person-group person-group-type="editor">
<name>
<surname>Doya</surname>
<given-names>K</given-names>
</name>
</person-group>
<year>2007</year>
<article-title>Bayesian Brain: Probabilistic Approaches to Neural Coding.</article-title>
<publisher-loc>Cambridge, (Massachusetts)</publisher-loc>
<publisher-name>MIT Press</publisher-name>
</element-citation>
</ref>
<ref id="pcbi.1001112-Knill1">
<label>5</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Knill</surname>
<given-names>DC</given-names>
</name>
<name>
<surname>Pouget</surname>
<given-names>A</given-names>
</name>
</person-group>
<year>2004</year>
<article-title>The bayesian brain: the role of uncertainty in neural coding and computation.</article-title>
<source>Trends Neurosci</source>
<volume>27</volume>
<fpage>712</fpage>
<lpage>719</lpage>
<pub-id pub-id-type="pmid">15541511</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1001112-Weiss1">
<label>6</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Weiss</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Simoncelli</surname>
<given-names>EP</given-names>
</name>
<name>
<surname>Adelson</surname>
<given-names>EH</given-names>
</name>
</person-group>
<year>2002</year>
<article-title>Motion illusions as optimal percepts.</article-title>
<source>Nat Neurosci</source>
<volume>5</volume>
<fpage>598</fpage>
<lpage>604</lpage>
<pub-id pub-id-type="pmid">12021763</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1001112-Stocker1">
<label>7</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Stocker</surname>
<given-names>AA</given-names>
</name>
<name>
<surname>Simoncelli</surname>
<given-names>EP</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>Noise characteristics and prior expectations in human visual speed perception.</article-title>
<source>Nat Neurosci</source>
<volume>9</volume>
<fpage>578</fpage>
<lpage>585</lpage>
<pub-id pub-id-type="pmid">16547513</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1001112-Adams1">
<label>8</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Adams</surname>
<given-names>WJ</given-names>
</name>
<name>
<surname>Graf</surname>
<given-names>EW</given-names>
</name>
<name>
<surname>Ernst</surname>
<given-names>MO</given-names>
</name>
</person-group>
<year>2004</year>
<article-title>Experience can change the ‘light-from-above’ prior.</article-title>
<source>Nat Neurosci</source>
<volume>7</volume>
<fpage>1057</fpage>
<lpage>1058</lpage>
<pub-id pub-id-type="pmid">15361877</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1001112-Langer1">
<label>9</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Langer</surname>
<given-names>MS</given-names>
</name>
<name>
<surname>Bülthoff</surname>
<given-names>HH</given-names>
</name>
</person-group>
<year>2001</year>
<article-title>A prior for global convexity in local shape-from-shading.</article-title>
<source>Perception</source>
<volume>30</volume>
<fpage>403</fpage>
<lpage>410</lpage>
<pub-id pub-id-type="pmid">11383189</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1001112-Howe1">
<label>10</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Howe</surname>
<given-names>CQ</given-names>
</name>
<name>
<surname>Purves</surname>
<given-names>D</given-names>
</name>
</person-group>
<year>2002</year>
<article-title>Range image statistics can explain the anomalous perception of length.</article-title>
<source>Proc Natl Acad Sci U S A</source>
<volume>99</volume>
<fpage>13184</fpage>
<lpage>13188</lpage>
<pub-id pub-id-type="pmid">12237401</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1001112-Flanagan1">
<label>11</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Flanagan</surname>
<given-names>JR</given-names>
</name>
<name>
<surname>Bowman</surname>
<given-names>MC</given-names>
</name>
<name>
<surname>Johansson</surname>
<given-names>RS</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>Control strategies in object manipulation tasks.</article-title>
<source>Curr Opin Neurobiol</source>
<volume>16</volume>
<fpage>650</fpage>
<lpage>659</lpage>
<pub-id pub-id-type="pmid">17084619</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1001112-Brayanov1">
<label>12</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Brayanov</surname>
<given-names>JB</given-names>
</name>
<name>
<surname>Smith</surname>
<given-names>MA</given-names>
</name>
</person-group>
<year>2010</year>
<article-title>Bayesian and "anti-bayesian" biases in sensory integration for action and perception in the size-weight illusion.</article-title>
<source>J Neurophysiol</source>
<volume>103</volume>
<fpage>1518</fpage>
<lpage>1531</lpage>
<pub-id pub-id-type="pmid">20089821</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1001112-Flanagan2">
<label>13</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Flanagan</surname>
<given-names>JR</given-names>
</name>
<name>
<surname>Beltzner</surname>
<given-names>MA</given-names>
</name>
</person-group>
<year>2000</year>
<article-title>Independence of perceptual and sensorimotor predictions in the size-weight illusion.</article-title>
<source>Nat Neurosci</source>
<volume>3</volume>
<fpage>737</fpage>
<lpage>741</lpage>
<pub-id pub-id-type="pmid">10862708</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1001112-Kemp1">
<label>14</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kemp</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Tenenbaum</surname>
<given-names>JB</given-names>
</name>
</person-group>
<year>2009</year>
<article-title>Structured statistical models of inductive reasoning.</article-title>
<source>Psychol Rev</source>
<volume>116</volume>
<fpage>20</fpage>
<lpage>58</lpage>
<pub-id pub-id-type="pmid">19159147</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1001112-Tenenbaum1">
<label>15</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tenenbaum</surname>
<given-names>JB</given-names>
</name>
<name>
<surname>Griffiths</surname>
<given-names>TL</given-names>
</name>
<name>
<surname>Kemp</surname>
<given-names>C</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>Theory-based bayesian models of inductive learning and reasoning.</article-title>
<source>Trends Cogn Sci</source>
<volume>10</volume>
<fpage>309</fpage>
<lpage>318</lpage>
<pub-id pub-id-type="pmid">16797219</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1001112-Acuna1">
<label>16</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Acuna</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Schrater</surname>
<given-names>PR</given-names>
</name>
</person-group>
<year>2009</year>
<article-title>Structure learning in human sequential decision-making.</article-title>
<person-group person-group-type="editor">
<name>
<surname>Koller</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Schuurmans</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Bengio</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Bottou </surname>
<given-names>L</given-names>
</name>
</person-group>
<source>Advances in Neural Information Processing Systems 21,</source>
<publisher-loc>Cambridge (Massachusetts)</publisher-loc>
<publisher-name>MIT Press</publisher-name>
<fpage>1</fpage>
<lpage>8</lpage>
</element-citation>
</ref>
<ref id="pcbi.1001112-Griffiths1">
<label>17</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Griffiths</surname>
<given-names>TL</given-names>
</name>
<name>
<surname>Kalish</surname>
<given-names>ML</given-names>
</name>
<name>
<surname>Lewandowsky</surname>
<given-names>S</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>Theoretical and empirical evidence for the impact of inductive biases on cultural evolution.</article-title>
<source>Philos Trans R Soc Lond B Biol Sci</source>
<volume>363</volume>
<fpage>3503</fpage>
<lpage>3514</lpage>
<pub-id pub-id-type="pmid">18801717</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1001112-Sanborn1">
<label>18</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Sanborn</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Griffiths</surname>
<given-names>T</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>Markov chain monte carlo with people.</article-title>
<person-group person-group-type="editor">
<name>
<surname>Platt</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Koller</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Singer</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Roweis </surname>
<given-names>S</given-names>
</name>
</person-group>
<source>Advances in Neural Information Processing Systems 20,</source>
<publisher-loc>Cambridge, (Massachusetts)</publisher-loc>
<publisher-name>MIT Press</publisher-name>
<fpage>1265</fpage>
<lpage>1272</lpage>
</element-citation>
</ref>
<ref id="pcbi.1001112-Krding1">
<label>19</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Körding</surname>
<given-names>KP</given-names>
</name>
<name>
<surname>Wolpert</surname>
<given-names>DM</given-names>
</name>
</person-group>
<year>2004</year>
<article-title>Bayesian integration in sensorimotor learning.</article-title>
<source>Nature</source>
<volume>427</volume>
<fpage>244</fpage>
<lpage>247</lpage>
<pub-id pub-id-type="pmid">14724638</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1001112-Krding2">
<label>20</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Körding</surname>
<given-names>KP</given-names>
</name>
<name>
<surname>Wolpert</surname>
<given-names>DM</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>Bayesian decision theory in sensorimotor control.</article-title>
<source>Trends Cogn Sci</source>
<volume>10</volume>
<fpage>319</fpage>
<lpage>326</lpage>
<pub-id pub-id-type="pmid">16807063</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1001112-vanBeers1">
<label>21</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>van Beers</surname>
<given-names>RJ</given-names>
</name>
<name>
<surname>Sittig</surname>
<given-names>AC</given-names>
</name>
<name>
<surname>van der Gon</surname>
<given-names>JJD</given-names>
</name>
</person-group>
<year>1999</year>
<article-title>Integration of proprioceptive and visual position-information: An experimentally supported model.</article-title>
<source>J Neurophysiol</source>
<volume>81</volume>
<fpage>1355</fpage>
<lpage>1364</lpage>
<pub-id pub-id-type="pmid">10085361</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1001112-Ernst1">
<label>22</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ernst</surname>
<given-names>MO</given-names>
</name>
<name>
<surname>Banks</surname>
<given-names>MS</given-names>
</name>
</person-group>
<year>2002</year>
<article-title>Humans integrate visual and haptic information in a statistically optimal fashion.</article-title>
<source>Nature</source>
<volume>415</volume>
<fpage>429</fpage>
<lpage>433</lpage>
<pub-id pub-id-type="pmid">11807554</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1001112-Girshick1">
<label>23</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Girshick</surname>
<given-names>AR</given-names>
</name>
<name>
<surname>Banks</surname>
<given-names>MS</given-names>
</name>
</person-group>
<year>2009</year>
<article-title>Probabilistic combination of slant information: weighted averaging and robustness as optimal percepts.</article-title>
<source>J Vis</source>
<volume>9</volume>
<fpage>8.1</fpage>
<lpage>8.20</lpage>
<pub-id pub-id-type="pmid">19761341</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1001112-Todorov1">
<label>24</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Todorov</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Jordan</surname>
<given-names>MI</given-names>
</name>
</person-group>
<year>2002</year>
<article-title>Optimal feedback control as a theory of motor coordination.</article-title>
<source>Nat Neurosci</source>
<volume>5</volume>
<fpage>1226</fpage>
<lpage>1235</lpage>
<pub-id pub-id-type="pmid">12404008</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1001112-Todorov2">
<label>25</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Todorov</surname>
<given-names>E</given-names>
</name>
</person-group>
<year>2004</year>
<article-title>Optimality principles in sensorimotor control.</article-title>
<source>Nat Neurosci</source>
<volume>7</volume>
<fpage>907</fpage>
<lpage>915</lpage>
<pub-id pub-id-type="pmid">15332089</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1001112-Scott1">
<label>26</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Scott</surname>
<given-names>SH</given-names>
</name>
</person-group>
<year>2004</year>
<article-title>Optimal feedback control and the neural basis of volitional motor control.</article-title>
<source>Nat Rev Neurosci</source>
<volume>5</volume>
<fpage>532</fpage>
<lpage>546</lpage>
<pub-id pub-id-type="pmid">15208695</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1001112-Diedrichsen1">
<label>27</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Diedrichsen</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Shadmehr</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Ivry</surname>
<given-names>RB</given-names>
</name>
</person-group>
<year>2010</year>
<article-title>The coordination of movement: optimal feedback control and beyond.</article-title>
<source>Trends Cogn Sci</source>
<volume>14</volume>
<fpage>31</fpage>
<lpage>39</lpage>
<pub-id pub-id-type="pmid">20005767</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1001112-Diedrichsen2">
<label>28</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Diedrichsen</surname>
<given-names>J</given-names>
</name>
</person-group>
<year>2007</year>
<article-title>Optimal task-dependent changes of bimanual feedback control and adaptation.</article-title>
<source>Curr Biol</source>
<volume>17</volume>
<fpage>1675</fpage>
<lpage>1679</lpage>
<pub-id pub-id-type="pmid">17900901</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1001112-Braun1">
<label>29</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Braun</surname>
<given-names>DA</given-names>
</name>
<name>
<surname>Ortega</surname>
<given-names>PA</given-names>
</name>
<name>
<surname>Wolpert</surname>
<given-names>DM</given-names>
</name>
</person-group>
<year>2009</year>
<article-title>Nash equilibria in multi-agent motor interactions.</article-title>
<source>PLoS Comput Biol</source>
<volume>5</volume>
<fpage>e1000468</fpage>
<pub-id pub-id-type="pmid">19680426</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1001112-Izawa1">
<label>30</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Izawa</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Rane</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Donchin</surname>
<given-names>O</given-names>
</name>
<name>
<surname>Shadmehr</surname>
<given-names>R</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>Motor adaptation as a process of reoptimization.</article-title>
<source>J Neurosci</source>
<volume>28</volume>
<fpage>2883</fpage>
<lpage>2891</lpage>
<pub-id pub-id-type="pmid">18337419</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1001112-ChenHarris1">
<label>31</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chen-Harris</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Joiner</surname>
<given-names>WM</given-names>
</name>
<name>
<surname>Ethier</surname>
<given-names>V</given-names>
</name>
<name>
<surname>Zee</surname>
<given-names>DS</given-names>
</name>
<name>
<surname>Shadmehr</surname>
<given-names>R</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>Adaptive control of saccades via internal feedback.</article-title>
<source>J Neurosci</source>
<volume>28</volume>
<fpage>2804</fpage>
<lpage>2813</lpage>
<pub-id pub-id-type="pmid">18337410</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1001112-Braun2">
<label>32</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Braun</surname>
<given-names>DA</given-names>
</name>
<name>
<surname>Aertsen</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Wolpert</surname>
<given-names>DM</given-names>
</name>
<name>
<surname>Mehring</surname>
<given-names>C</given-names>
</name>
</person-group>
<year>2009</year>
<article-title>Learning optimal adaptation strategies in unpredictable motor tasks.</article-title>
<source>J Neurosci</source>
<volume>29</volume>
<fpage>6472</fpage>
<lpage>6478</lpage>
<pub-id pub-id-type="pmid">19458218</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1001112-Nagengast1">
<label>33</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nagengast</surname>
<given-names>AJ</given-names>
</name>
<name>
<surname>Braun</surname>
<given-names>DA</given-names>
</name>
<name>
<surname>Wolpert</surname>
<given-names>DM</given-names>
</name>
</person-group>
<year>2009</year>
<article-title>Optimal control predicts human performance on objects with internal degrees of freedom.</article-title>
<source>PLoS Comput Biol</source>
<volume>5</volume>
<fpage>e1000419</fpage>
<pub-id pub-id-type="pmid">19557193</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1001112-Zemel1">
<label>34</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zemel</surname>
<given-names>RS</given-names>
</name>
<name>
<surname>Dayan</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Pouget</surname>
<given-names>A</given-names>
</name>
</person-group>
<year>1998</year>
<article-title>Probabilistic interpretation of population codes.</article-title>
<source>Neural Comput</source>
<volume>10</volume>
<fpage>403</fpage>
<lpage>430</lpage>
<pub-id pub-id-type="pmid">9472488</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1001112-Beck1">
<label>35</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Beck</surname>
<given-names>JM</given-names>
</name>
<name>
<surname>Ma</surname>
<given-names>WJ</given-names>
</name>
<name>
<surname>Kiani</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Hanks</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Churchland</surname>
<given-names>AK</given-names>
</name>
<etal></etal>
</person-group>
<year>2008</year>
<article-title>Probabilistic population codes for bayesian decision making.</article-title>
<source>Neuron</source>
<volume>60</volume>
<fpage>1142</fpage>
<lpage>1152</lpage>
<pub-id pub-id-type="pmid">19109917</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1001112-Ma1">
<label>36</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ma</surname>
<given-names>WJ</given-names>
</name>
<name>
<surname>Beck</surname>
<given-names>JM</given-names>
</name>
<name>
<surname>Latham</surname>
<given-names>PE</given-names>
</name>
<name>
<surname>Pouget</surname>
<given-names>A</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>Bayesian inference with probabilistic population codes.</article-title>
<source>Nat Neurosci</source>
<volume>9</volume>
<fpage>1432</fpage>
<lpage>1438</lpage>
<pub-id pub-id-type="pmid">17057707</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1001112-Bedford1">
<label>37</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bedford</surname>
<given-names>FL</given-names>
</name>
</person-group>
<year>1989</year>
<article-title>Constraints on learning new mappings between perceptual dimensions.</article-title>
<source>J Exp Psychol: Human Perc Perf</source>
<volume>15</volume>
<issue>2</issue>
<fpage>232</fpage>
<lpage>248</lpage>
</element-citation>
</ref>
<ref id="pcbi.1001112-Bedford2">
<label>38</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Bedford</surname>
<given-names>FL</given-names>
</name>
</person-group>
<year>1993</year>
<article-title>Perceptual learning.</article-title>
<person-group person-group-type="editor">
<name>
<surname>Medin</surname>
<given-names>D</given-names>
</name>
</person-group>
<source>The Psychology of Learning and Motivation</source>
<publisher-loc>New York</publisher-loc>
<publisher-name>Academic Press</publisher-name>
<fpage>1</fpage>
<lpage>60</lpage>
<comment>volume 30</comment>
</element-citation>
</ref>
<ref id="pcbi.1001112-Baily1">
<label>39</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Baily</surname>
<given-names>JS</given-names>
</name>
</person-group>
<year>1972</year>
<article-title>Adaptation to prisms: do proprioceptive changes mediate adapted behaviour with ballistic arm movements?</article-title>
<source>Q J Exp Psychol</source>
<volume>24</volume>
<fpage>8</fpage>
<lpage>20</lpage>
<pub-id pub-id-type="pmid">5017507</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1001112-Welch1">
<label>40</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Welch</surname>
<given-names>RB</given-names>
</name>
</person-group>
<year>1986</year>
<article-title>Adaptation to space perception.</article-title>
<person-group person-group-type="editor">
<name>
<surname>Boff </surname>
<given-names>KR</given-names>
</name>
<name>
<surname>Kaufman</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Thomas</surname>
<given-names>JP</given-names>
</name>
</person-group>
<source>Handbook of perception and performance</source>
<publisher-loc>New York</publisher-loc>
<publisher-name>Wiley–Interscience</publisher-name>
<fpage>24-1</fpage>
<lpage>24-45</lpage>
<comment>volume 1</comment>
</element-citation>
</ref>
<ref id="pcbi.1001112-Vetter1">
<label>41</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Vetter</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Goodbody</surname>
<given-names>SJ</given-names>
</name>
<name>
<surname>Wolpert</surname>
<given-names>DM</given-names>
</name>
</person-group>
<year>1999</year>
<article-title>Evidence for an eye-centered spherical representation of the visuomotor map.</article-title>
<source>J Neurophysiol</source>
<volume>81</volume>
<fpage>935</fpage>
<lpage>939</lpage>
<pub-id pub-id-type="pmid">10036291</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1001112-Wigmore1">
<label>42</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wigmore</surname>
<given-names>V</given-names>
</name>
<name>
<surname>Tong</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Flanagan</surname>
<given-names>JR</given-names>
</name>
</person-group>
<year>2002</year>
<article-title>Visuomotor rotations of varying size and direction compete for a single internal model in motor working memory.</article-title>
<source>J Exp Psychol Hum Percept Perform</source>
<volume>28</volume>
<fpage>447</fpage>
<lpage>457</lpage>
<pub-id pub-id-type="pmid">11999865</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1001112-Miall1">
<label>43</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Miall</surname>
<given-names>RC</given-names>
</name>
<name>
<surname>Jenkinson</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Kulkarni</surname>
<given-names>K</given-names>
</name>
</person-group>
<year>2004</year>
<article-title>Adaptation to rotated visual feedback: a re-examination of motor interference.</article-title>
<source>Exp Brain Res</source>
<volume>154</volume>
<fpage>201</fpage>
<lpage>210</lpage>
<pub-id pub-id-type="pmid">14608451</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1001112-Krakauer1">
<label>44</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Krakauer</surname>
<given-names>JW</given-names>
</name>
<name>
<surname>Ghez</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Ghilardi</surname>
<given-names>MF</given-names>
</name>
</person-group>
<year>2005</year>
<article-title>Adaptation to visuomotor transformations: consolidation, interference, and forgetting.</article-title>
<source>J Neurosci</source>
<volume>25</volume>
<fpage>473</fpage>
<lpage>478</lpage>
<pub-id pub-id-type="pmid">15647491</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1001112-BrashersKrug1">
<label>45</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Brashers-Krug</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Shadmehr</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Bizzi</surname>
<given-names>E</given-names>
</name>
</person-group>
<year>1996</year>
<article-title>Consolidation in human motor memory.</article-title>
<source>Nature</source>
<volume>382</volume>
<fpage>252</fpage>
<lpage>255</lpage>
<pub-id pub-id-type="pmid">8717039</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1001112-Shadmehr1">
<label>46</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Shadmehr</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Brashers-Krug</surname>
<given-names>T</given-names>
</name>
</person-group>
<year>1997</year>
<article-title>Functional stages in the formation of human long-term motor memory.</article-title>
<source>J Neurosci</source>
<volume>17</volume>
<fpage>409</fpage>
<lpage>419</lpage>
<pub-id pub-id-type="pmid">8987766</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1001112-Weiner1">
<label>47</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Weiner</surname>
<given-names>MJ</given-names>
</name>
<name>
<surname>Hallett</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Funkenstein</surname>
<given-names>HH</given-names>
</name>
</person-group>
<year>1983</year>
<article-title>Adaptation to lateral displacement of vision in patients with lesions of the central nervous system.</article-title>
<source>Neurology</source>
<volume>33</volume>
<fpage>766</fpage>
<lpage>772</lpage>
<pub-id pub-id-type="pmid">6682520</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1001112-Redding1">
<label>48</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Redding</surname>
<given-names>GM</given-names>
</name>
<name>
<surname>Wallace</surname>
<given-names>B</given-names>
</name>
</person-group>
<year>2002</year>
<article-title>Strategic calibration and spatial alignment: a model from prism adaptation.</article-title>
<source>J Mot Behav</source>
<volume>34</volume>
<fpage>126</fpage>
<lpage>138</lpage>
<pub-id pub-id-type="pmid">12057886</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1001112-Redding2">
<label>49</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Redding</surname>
<given-names>GM</given-names>
</name>
<name>
<surname>Wallace</surname>
<given-names>B</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>Generalization of prism adaptation.</article-title>
<source>J Exp Psychol Hum Percept Perform</source>
<volume>32</volume>
<fpage>1006</fpage>
<lpage>1022</lpage>
<pub-id pub-id-type="pmid">16846294</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1001112-Redding3">
<label>50</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Redding</surname>
<given-names>GM</given-names>
</name>
<name>
<surname>Wallace</surname>
<given-names>B</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>Prism adaptation and unilateral neglect: review and analysis.</article-title>
<source>Neuropsychologia</source>
<volume>44</volume>
<fpage>1</fpage>
<lpage>20</lpage>
<pub-id pub-id-type="pmid">15907951</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1001112-Ghahramani1">
<label>51</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ghahramani</surname>
<given-names>Z</given-names>
</name>
<name>
<surname>Wolpert</surname>
<given-names>DM</given-names>
</name>
<name>
<surname>Jordan</surname>
<given-names>MI</given-names>
</name>
</person-group>
<year>1996</year>
<article-title>Generalization to local remappings of the visuomotor coordinate transformation.</article-title>
<source>J Neurosci</source>
<volume>16</volume>
<issue>21</issue>
<fpage>7085</fpage>
<lpage>7096</lpage>
<pub-id pub-id-type="pmid">8824344</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1001112-Krakauer2">
<label>52</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Krakauer</surname>
<given-names>JW</given-names>
</name>
<name>
<surname>Pine</surname>
<given-names>ZM</given-names>
</name>
<name>
<surname>Ghilardi</surname>
<given-names>MF</given-names>
</name>
<name>
<surname>Ghez</surname>
<given-names>C</given-names>
</name>
</person-group>
<year>2000</year>
<article-title>Learning of visuomotor transformations for vectorial planning of reaching trajectories.</article-title>
<source>J Neurosci</source>
<volume>20</volume>
<fpage>8916</fpage>
<lpage>8924</lpage>
<pub-id pub-id-type="pmid">11102502</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1001112-Braun3">
<label>53</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Braun</surname>
<given-names>DA</given-names>
</name>
<name>
<surname>Aertsen</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Wolpert</surname>
<given-names>DM</given-names>
</name>
<name>
<surname>Mehring</surname>
<given-names>C</given-names>
</name>
</person-group>
<year>2009</year>
<article-title>Motor task variation induces structural learning.</article-title>
<source>Curr Biol</source>
<volume>19</volume>
<fpage>352</fpage>
<lpage>357</lpage>
<pub-id pub-id-type="pmid">19217296</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1001112-Carpenter1">
<label>54</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Carpenter</surname>
<given-names>RH</given-names>
</name>
<name>
<surname>Williams</surname>
<given-names>ML</given-names>
</name>
</person-group>
<year>1995</year>
<article-title>Neural computation of log likelihood in control of saccadic eye movements.</article-title>
<source>Nature</source>
<volume>377</volume>
<fpage>59</fpage>
<lpage>62</lpage>
<pub-id pub-id-type="pmid">7659161</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1001112-Flanagan3">
<label>55</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Flanagan</surname>
<given-names>JR</given-names>
</name>
<name>
<surname>Bittner</surname>
<given-names>JP</given-names>
</name>
<name>
<surname>Johansson</surname>
<given-names>RS</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>Experience can change distinct size-weight priors engaged in lifting objects and judging their weights.</article-title>
<source>Curr Biol</source>
<volume>18</volume>
<fpage>1742</fpage>
<lpage>1747</lpage>
<pub-id pub-id-type="pmid">19026545</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1001112-Simani1">
<label>56</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Simani</surname>
<given-names>MC</given-names>
</name>
<name>
<surname>McGuire</surname>
<given-names>LMM</given-names>
</name>
<name>
<surname>Sabes</surname>
<given-names>PN</given-names>
</name>
</person-group>
<year>2007</year>
<article-title>Visual-shift adaptation is composed of separable sensory and task-dependent effects.</article-title>
<source>J Neurophysiol</source>
<volume>98</volume>
<fpage>2827</fpage>
<lpage>2841</lpage>
<pub-id pub-id-type="pmid">17728389</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1001112-Howard1">
<label>57</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Howard</surname>
<given-names>IS</given-names>
</name>
<name>
<surname>Ingram</surname>
<given-names>JN</given-names>
</name>
<name>
<surname>Wolpert</surname>
<given-names>DM</given-names>
</name>
</person-group>
<year>2009</year>
<article-title>A modular planar robotic manipulandum with end-point torque control.</article-title>
<source>J Neurosci Methods</source>
<volume>181</volume>
<fpage>199</fpage>
<lpage>211</lpage>
<pub-id pub-id-type="pmid">19450621</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1001112-Bishop1">
<label>58</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Bishop</surname>
<given-names>CM</given-names>
</name>
</person-group>
<year>2007</year>
<article-title>Pattern Recognition and Machine Learning.</article-title>
<publisher-name>Springer-Verlag</publisher-name>
</element-citation>
</ref>
</ref-list>
</back>
</pmc>
<affiliations>
<list></list>
<tree>
<noCountry>
<name sortKey="Braun, Daniel A" sort="Braun, Daniel A" uniqKey="Braun D" first="Daniel A." last="Braun">Daniel A. Braun</name>
<name sortKey="Turnham, Edward J A" sort="Turnham, Edward J A" uniqKey="Turnham E" first="Edward J. A." last="Turnham">Edward J. A. Turnham</name>
<name sortKey="Wolpert, Daniel M" sort="Wolpert, Daniel M" uniqKey="Wolpert D" first="Daniel M." last="Wolpert">Daniel M. Wolpert</name>
</noCountry>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Pmc/Checkpoint
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 001B11 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd -nk 001B11 | SxmlIndent | more

To link to this page from the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Pmc
   |étape=   Checkpoint
   |type=    RBID
   |clé=     PMC:3068921
   |texte=   Inferring Visuomotor Priors for Sensorimotor Learning
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/RBID.i   -Sk "pubmed:21483475" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024