Exploration server on haptic devices

Warning: this site is under development!
Warning: this site was generated automatically from raw corpora.
The information has therefore not been validated.

A learning–based approach to artificial sensory feedback leads to optimal integration

Internal identifier: 003471 (Ncbi/Merge); previous: 003470; next: 003472


Authors: Maria C. Dadarlat [United States]; Joseph E. O'Doherty; Philip N. Sabes [United States]

Source:

RBID: PMC:4282864

Abstract

Proprioception—the sense of the body’s position in space—plays an important role in natural movement planning and execution and will likewise be necessary for successful motor prostheses and Brain–Machine Interfaces (BMIs). Here, we demonstrated that monkeys could learn to use an initially unfamiliar multi–channel intracortical microstimulation (ICMS) signal, which provided continuous information about hand position relative to an unseen target, to complete accurate reaches. Furthermore, monkeys combined this artificial signal with vision to form an optimal, minimum–variance estimate of relative hand position. These results demonstrate that a learning–based approach can be used to provide a rich artificial sensory feedback signal, suggesting a new strategy for restoring proprioception to patients using BMIs as well as a powerful new tool for studying the adaptive mechanisms of sensory integration.


URL:
DOI: 10.1038/nn.3883
PubMed: 25420067
PubMed Central: 4282864

Links to previous steps (curation, corpus...)


Links to Exploration step

PMC:4282864

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">A learning–based approach to artificial sensory feedback leads to optimal integration</title>
<author>
<name sortKey="Dadarlat, Maria C" sort="Dadarlat, Maria C" uniqKey="Dadarlat M" first="Maria C." last="Dadarlat">Maria C. Dadarlat</name>
<affiliation>
<nlm:aff id="A1">Department of Physiology, Center for Integrative Neuroscience, and UC Berkeley–UCSF Center for Neural Engineering and Prosthetics</nlm:aff>
<wicri:noCountry code="subfield">and UC Berkeley–UCSF Center for Neural Engineering and Prosthetics</wicri:noCountry>
</affiliation>
<affiliation wicri:level="3">
<nlm:aff id="A2">UC Berkeley–UCSF Joint Graduate Program in Bioengineering University of California, San Francisco</nlm:aff>
<country>États-Unis</country>
<placeName>
<settlement type="city">San Francisco</settlement>
<region type="state">Californie</region>
</placeName>
<wicri:orgArea>UC Berkeley–UCSF Joint Graduate Program in Bioengineering University of California</wicri:orgArea>
</affiliation>
</author>
<author>
<name sortKey="O Oherty, Joseph E" sort="O Oherty, Joseph E" uniqKey="O Oherty J" first="Joseph E." last="O Oherty">Joseph E. O Oherty</name>
<affiliation>
<nlm:aff id="A1">Department of Physiology, Center for Integrative Neuroscience, and UC Berkeley–UCSF Center for Neural Engineering and Prosthetics</nlm:aff>
<wicri:noCountry code="subfield">and UC Berkeley–UCSF Center for Neural Engineering and Prosthetics</wicri:noCountry>
</affiliation>
</author>
<author>
<name sortKey="Sabes, Philip N" sort="Sabes, Philip N" uniqKey="Sabes P" first="Philip N." last="Sabes">Philip N. Sabes</name>
<affiliation>
<nlm:aff id="A1">Department of Physiology, Center for Integrative Neuroscience, and UC Berkeley–UCSF Center for Neural Engineering and Prosthetics</nlm:aff>
<wicri:noCountry code="subfield">and UC Berkeley–UCSF Center for Neural Engineering and Prosthetics</wicri:noCountry>
</affiliation>
<affiliation wicri:level="3">
<nlm:aff id="A2">UC Berkeley–UCSF Joint Graduate Program in Bioengineering University of California, San Francisco</nlm:aff>
<country>États-Unis</country>
<placeName>
<settlement type="city">San Francisco</settlement>
<region type="state">Californie</region>
</placeName>
<wicri:orgArea>UC Berkeley–UCSF Joint Graduate Program in Bioengineering University of California</wicri:orgArea>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">25420067</idno>
<idno type="pmc">4282864</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4282864</idno>
<idno type="RBID">PMC:4282864</idno>
<idno type="doi">10.1038/nn.3883</idno>
<date when="2014">2014</date>
<idno type="wicri:Area/Pmc/Corpus">002560</idno>
<idno type="wicri:Area/Pmc/Curation">002560</idno>
<idno type="wicri:Area/Pmc/Checkpoint">000E55</idno>
<idno type="wicri:Area/Ncbi/Merge">003471</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">A learning–based approach to artificial sensory feedback leads to optimal integration</title>
<author>
<name sortKey="Dadarlat, Maria C" sort="Dadarlat, Maria C" uniqKey="Dadarlat M" first="Maria C." last="Dadarlat">Maria C. Dadarlat</name>
<affiliation>
<nlm:aff id="A1">Department of Physiology, Center for Integrative Neuroscience, and UC Berkeley–UCSF Center for Neural Engineering and Prosthetics</nlm:aff>
<wicri:noCountry code="subfield">and UC Berkeley–UCSF Center for Neural Engineering and Prosthetics</wicri:noCountry>
</affiliation>
<affiliation wicri:level="3">
<nlm:aff id="A2">UC Berkeley–UCSF Joint Graduate Program in Bioengineering University of California, San Francisco</nlm:aff>
<country>États-Unis</country>
<placeName>
<settlement type="city">San Francisco</settlement>
<region type="state">Californie</region>
</placeName>
<wicri:orgArea>UC Berkeley–UCSF Joint Graduate Program in Bioengineering University of California</wicri:orgArea>
</affiliation>
</author>
<author>
<name sortKey="O Oherty, Joseph E" sort="O Oherty, Joseph E" uniqKey="O Oherty J" first="Joseph E." last="O Oherty">Joseph E. O Oherty</name>
<affiliation>
<nlm:aff id="A1">Department of Physiology, Center for Integrative Neuroscience, and UC Berkeley–UCSF Center for Neural Engineering and Prosthetics</nlm:aff>
<wicri:noCountry code="subfield">and UC Berkeley–UCSF Center for Neural Engineering and Prosthetics</wicri:noCountry>
</affiliation>
</author>
<author>
<name sortKey="Sabes, Philip N" sort="Sabes, Philip N" uniqKey="Sabes P" first="Philip N." last="Sabes">Philip N. Sabes</name>
<affiliation>
<nlm:aff id="A1">Department of Physiology, Center for Integrative Neuroscience, and UC Berkeley–UCSF Center for Neural Engineering and Prosthetics</nlm:aff>
<wicri:noCountry code="subfield">and UC Berkeley–UCSF Center for Neural Engineering and Prosthetics</wicri:noCountry>
</affiliation>
<affiliation wicri:level="3">
<nlm:aff id="A2">UC Berkeley–UCSF Joint Graduate Program in Bioengineering University of California, San Francisco</nlm:aff>
<country>États-Unis</country>
<placeName>
<settlement type="city">San Francisco</settlement>
<region type="state">Californie</region>
</placeName>
<wicri:orgArea>UC Berkeley–UCSF Joint Graduate Program in Bioengineering University of California</wicri:orgArea>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Nature neuroscience</title>
<idno type="ISSN">1097-6256</idno>
<idno type="eISSN">1546-1726</idno>
<imprint>
<date when="2014">2014</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p id="P1">Proprioception—the sense of the body’s position in space—plays an important role in natural movement planning and execution and will likewise be necessary for successful motor prostheses and Brain–Machine Interfaces (BMIs). Here, we demonstrated that monkeys could learn to use an initially unfamiliar multi–channel intracortical microstimulation (ICMS) signal, which provided continuous information about hand position relative to an unseen target, to complete accurate reaches. Furthermore, monkeys combined this artificial signal with vision to form an optimal, minimum–variance estimate of relative hand position. These results demonstrate that a learning–based approach can be used to provide a rich artificial sensory feedback signal, suggesting a new strategy for restoring proprioception to patients using BMIs as well as a powerful new tool for studying the adaptive mechanisms of sensory integration.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Sober, Sj" uniqKey="Sober S">SJ Sober</name>
</author>
<author>
<name sortKey="Sabes, Pn" uniqKey="Sabes P">PN Sabes</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sober, Sj" uniqKey="Sober S">SJ Sober</name>
</author>
<author>
<name sortKey="Sabes, Pn" uniqKey="Sabes P">PN Sabes</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Beers, Rj" uniqKey="Van Beers R">RJ van Beers</name>
</author>
<author>
<name sortKey="Sittig, Ac" uniqKey="Sittig A">AC Sittig</name>
</author>
<author>
<name sortKey="Gon, Jj" uniqKey="Gon J">JJ Gon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ernst, Mo" uniqKey="Ernst M">MO Ernst</name>
</author>
<author>
<name sortKey="Banks, Ms" uniqKey="Banks M">MS Banks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Morgan, Ml" uniqKey="Morgan M">ML Morgan</name>
</author>
<author>
<name sortKey="Deangelis, Gc" uniqKey="Deangelis G">GC Deangelis</name>
</author>
<author>
<name sortKey="Angelaki, De" uniqKey="Angelaki D">DE Angelaki</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mcguire, Lm" uniqKey="Mcguire L">LM McGuire</name>
</author>
<author>
<name sortKey="Sabes, Pn" uniqKey="Sabes P">PN Sabes</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sainburg, Rl" uniqKey="Sainburg R">RL Sainburg</name>
</author>
<author>
<name sortKey="Poizner, H" uniqKey="Poizner H">H Poizner</name>
</author>
<author>
<name sortKey="Ghez, C" uniqKey="Ghez C">C Ghez</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sainburg, Rl" uniqKey="Sainburg R">RL Sainburg</name>
</author>
<author>
<name sortKey="Ghilardi, Mf" uniqKey="Ghilardi M">MF Ghilardi</name>
</author>
<author>
<name sortKey="Poizner, H" uniqKey="Poizner H">H Poizner</name>
</author>
<author>
<name sortKey="Ghez, C" uniqKey="Ghez C">C Ghez</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Suminski, Aj" uniqKey="Suminski A">AJ Suminski</name>
</author>
<author>
<name sortKey="Tkach, Dc" uniqKey="Tkach D">DC Tkach</name>
</author>
<author>
<name sortKey="Fagg, Ah" uniqKey="Fagg A">AH Fagg</name>
</author>
<author>
<name sortKey="Hatsopoulos, Ng" uniqKey="Hatsopoulos N">NG Hatsopoulos</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fagg, Ah" uniqKey="Fagg A">AH Fagg</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Choi, Js" uniqKey="Choi J">JS Choi</name>
</author>
<author>
<name sortKey="Distasio, Mm" uniqKey="Distasio M">MM DiStasio</name>
</author>
<author>
<name sortKey="Brockmeier, Aj" uniqKey="Brockmeier A">AJ Brockmeier</name>
</author>
<author>
<name sortKey="Francis, Jt" uniqKey="Francis J">JT Francis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Daly, J" uniqKey="Daly J">J Daly</name>
</author>
<author>
<name sortKey="Liu, J" uniqKey="Liu J">J Liu</name>
</author>
<author>
<name sortKey="Aghagolzadeh, M" uniqKey="Aghagolzadeh M">M Aghagolzadeh</name>
</author>
<author>
<name sortKey="Oweiss, K" uniqKey="Oweiss K">K Oweiss</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Weber, Dj" uniqKey="Weber D">DJ Weber</name>
</author>
<author>
<name sortKey="Friesen, R" uniqKey="Friesen R">R Friesen</name>
</author>
<author>
<name sortKey="Miller, Le" uniqKey="Miller L">LE Miller</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tabot, Ga" uniqKey="Tabot G">GA Tabot</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Held, R" uniqKey="Held R">R Held</name>
</author>
<author>
<name sortKey="Hein, A" uniqKey="Hein A">a Hein</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Xu, J" uniqKey="Xu J">J Xu</name>
</author>
<author>
<name sortKey="Yu, L" uniqKey="Yu L">L Yu</name>
</author>
<author>
<name sortKey="Rowland, Ba" uniqKey="Rowland B">Ba Rowland</name>
</author>
<author>
<name sortKey="Stanford, Tr" uniqKey="Stanford T">TR Stanford</name>
</author>
<author>
<name sortKey="Stein, Be" uniqKey="Stein B">BE Stein</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Burge, J" uniqKey="Burge J">J Burge</name>
</author>
<author>
<name sortKey="Ernst, Mo" uniqKey="Ernst M">MO Ernst</name>
</author>
<author>
<name sortKey="Banks, Ms" uniqKey="Banks M">MS Banks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Simani, Mc" uniqKey="Simani M">MC Simani</name>
</author>
<author>
<name sortKey="Mcguire, Lm" uniqKey="Mcguire L">LM McGuire</name>
</author>
<author>
<name sortKey="Sabes, Pn" uniqKey="Sabes P">PN Sabes</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zaidel, A" uniqKey="Zaidel A">A Zaidel</name>
</author>
<author>
<name sortKey="Turner, Ah" uniqKey="Turner A">AH Turner</name>
</author>
<author>
<name sortKey="Angelaki, De" uniqKey="Angelaki D">DE Angelaki</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Makin, Jg" uniqKey="Makin J">JG Makin</name>
</author>
<author>
<name sortKey="Fellows, Mr" uniqKey="Fellows M">MR Fellows</name>
</author>
<author>
<name sortKey="Sabes, Pn" uniqKey="Sabes P">PN Sabes</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kalaska, Jf" uniqKey="Kalaska J">JF Kalaska</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kalaska, Jf" uniqKey="Kalaska J">JF Kalaska</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Block, Hj" uniqKey="Block H">HJ Block</name>
</author>
<author>
<name sortKey="Bastian, Aj" uniqKey="Bastian A">AJ Bastian</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cheng, S" uniqKey="Cheng S">S Cheng</name>
</author>
<author>
<name sortKey="Sabes, Pn" uniqKey="Sabes P">PN Sabes</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Izawa, J" uniqKey="Izawa J">J Izawa</name>
</author>
<author>
<name sortKey="Shadmehr, R" uniqKey="Shadmehr R">R Shadmehr</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kalaska, Jf" uniqKey="Kalaska J">JF Kalaska</name>
</author>
<author>
<name sortKey="Cohen, Da" uniqKey="Cohen D">DA Cohen</name>
</author>
<author>
<name sortKey="Prud Omme, M" uniqKey="Prud Omme M">M Prud’homme</name>
</author>
<author>
<name sortKey="Hyde, Ml" uniqKey="Hyde M">ML Hyde</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Batista, Ap" uniqKey="Batista A">AP Batista</name>
</author>
<author>
<name sortKey="Buneo, Ca" uniqKey="Buneo C">CA Buneo</name>
</author>
<author>
<name sortKey="Snyder, Lh" uniqKey="Snyder L">LH Snyder</name>
</author>
<author>
<name sortKey="Andersen, Ra" uniqKey="Andersen R">RA Andersen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Battaglia Ayer, A" uniqKey="Battaglia Ayer A">A Battaglia–Mayer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Graziano, Ms" uniqKey="Graziano M">MS Graziano</name>
</author>
<author>
<name sortKey="Cooke, Df" uniqKey="Cooke D">DF Cooke</name>
</author>
<author>
<name sortKey="Taylor, Cs" uniqKey="Taylor C">CS Taylor</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bremner, Lr" uniqKey="Bremner L">LR Bremner</name>
</author>
<author>
<name sortKey="Andersen, Ra" uniqKey="Andersen R">RA Andersen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Deneve, S" uniqKey="Deneve S">S Deneve</name>
</author>
<author>
<name sortKey="Latham, Pe" uniqKey="Latham P">PE Latham</name>
</author>
<author>
<name sortKey="Pouget, A" uniqKey="Pouget A">A Pouget</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ma, Wj" uniqKey="Ma W">WJ Ma</name>
</author>
<author>
<name sortKey="Beck, Jm" uniqKey="Beck J">JM Beck</name>
</author>
<author>
<name sortKey="Latham, Pe" uniqKey="Latham P">PE Latham</name>
</author>
<author>
<name sortKey="Pouget, A" uniqKey="Pouget A">A Pouget</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chang, Sw" uniqKey="Chang S">SW Chang</name>
</author>
<author>
<name sortKey="Snyder, Lh" uniqKey="Snyder L">LH Snyder</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Marzocchi, N" uniqKey="Marzocchi N">N Marzocchi</name>
</author>
<author>
<name sortKey="Breveglieri, R" uniqKey="Breveglieri R">R Breveglieri</name>
</author>
<author>
<name sortKey="Galletti, C" uniqKey="Galletti C">C Galletti</name>
</author>
<author>
<name sortKey="Fattori, P" uniqKey="Fattori P">P Fattori</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mcguire, Lm" uniqKey="Mcguire L">LM McGuire</name>
</author>
<author>
<name sortKey="Sabes, Pn" uniqKey="Sabes P">PN Sabes</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wise, Sp" uniqKey="Wise S">SP Wise</name>
</author>
<author>
<name sortKey="Boussaoud, D" uniqKey="Boussaoud D">D Boussaoud</name>
</author>
<author>
<name sortKey="Johnson, Pb" uniqKey="Johnson P">PB Johnson</name>
</author>
<author>
<name sortKey="Caminiti, R" uniqKey="Caminiti R">R Caminiti</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Andersen, Ra" uniqKey="Andersen R">RA Andersen</name>
</author>
<author>
<name sortKey="Buneo, Ca" uniqKey="Buneo C">CA Buneo</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Battaglia Ayer, A" uniqKey="Battaglia Ayer A">A Battaglia–Mayer</name>
</author>
<author>
<name sortKey="Caminiti, R" uniqKey="Caminiti R">R Caminiti</name>
</author>
<author>
<name sortKey="Lacquaniti, F" uniqKey="Lacquaniti F">F Lacquaniti</name>
</author>
<author>
<name sortKey="Zago, M" uniqKey="Zago M">M Zago</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sabes, Pn" uniqKey="Sabes P">PN Sabes</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Yttri, Ea" uniqKey="Yttri E">EA Yttri</name>
</author>
<author>
<name sortKey="Liu, Y" uniqKey="Liu Y">Y Liu</name>
</author>
<author>
<name sortKey="Snyder, Lh" uniqKey="Snyder L">LH Snyder</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Levy, I" uniqKey="Levy I">I Levy</name>
</author>
<author>
<name sortKey="Schluppeck, D" uniqKey="Schluppeck D">D Schluppeck</name>
</author>
<author>
<name sortKey="Heeger, Dj" uniqKey="Heeger D">DJ Heeger</name>
</author>
<author>
<name sortKey="Glimcher, Pw" uniqKey="Glimcher P">PW Glimcher</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pearson, Rc" uniqKey="Pearson R">RC Pearson</name>
</author>
<author>
<name sortKey="Powell, Tp" uniqKey="Powell T">TP Powell</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lewis, Jw" uniqKey="Lewis J">JW Lewis</name>
</author>
<author>
<name sortKey="Van Essen, Dc" uniqKey="Van Essen D">DC Van Essen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Redding, Gm" uniqKey="Redding G">GM Redding</name>
</author>
<author>
<name sortKey="Wallace, B" uniqKey="Wallace B">B Wallace</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Beers, Rj" uniqKey="Van Beers R">RJ van Beers</name>
</author>
<author>
<name sortKey="Sittig, Ac" uniqKey="Sittig A">aC Sittig</name>
</author>
<author>
<name sortKey="Denier Van Der Gon, Jj" uniqKey="Denier Van Der Gon J">JJ Denier van der Gon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ghez, C" uniqKey="Ghez C">C Ghez</name>
</author>
<author>
<name sortKey="Sainburg, R" uniqKey="Sainburg R">R Sainburg</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Novak, Ke" uniqKey="Novak K">KE Novak</name>
</author>
<author>
<name sortKey="Miller, Le" uniqKey="Miller L">LE Miller</name>
</author>
<author>
<name sortKey="Houk, Jc" uniqKey="Houk J">JC Houk</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Efron, Bt" uniqKey="Efron B">BT Efron</name>
</author>
<author>
<name sortKey="Rj" uniqKey="Rj">RJ</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<pmc-dir>properties manuscript</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-journal-id">9809671</journal-id>
<journal-id journal-id-type="pubmed-jr-id">21092</journal-id>
<journal-id journal-id-type="nlm-ta">Nat Neurosci</journal-id>
<journal-id journal-id-type="iso-abbrev">Nat. Neurosci.</journal-id>
<journal-title-group>
<journal-title>Nature neuroscience</journal-title>
</journal-title-group>
<issn pub-type="ppub">1097-6256</issn>
<issn pub-type="epub">1546-1726</issn>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">25420067</article-id>
<article-id pub-id-type="pmc">4282864</article-id>
<article-id pub-id-type="doi">10.1038/nn.3883</article-id>
<article-id pub-id-type="manuscript">NIHMS639101</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>A learning–based approach to artificial sensory feedback leads to optimal integration</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Dadarlat</surname>
<given-names>Maria C.</given-names>
</name>
<xref ref-type="aff" rid="A1">1</xref>
<xref ref-type="aff" rid="A2">2</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>O’Doherty</surname>
<given-names>Joseph E.</given-names>
</name>
<xref ref-type="aff" rid="A1">1</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Sabes</surname>
<given-names>Philip N.</given-names>
</name>
<xref ref-type="aff" rid="A1">1</xref>
<xref ref-type="aff" rid="A2">2</xref>
</contrib>
</contrib-group>
<aff id="A1">
<label>1</label>
Department of Physiology, Center for Integrative Neuroscience, and UC Berkeley–UCSF Center for Neural Engineering and Prosthetics</aff>
<aff id="A2">
<label>2</label>
UC Berkeley–UCSF Joint Graduate Program in Bioengineering University of California, San Francisco</aff>
<pub-date pub-type="nihms-submitted">
<day>1</day>
<month>12</month>
<year>2014</year>
</pub-date>
<pub-date pub-type="epub">
<day>24</day>
<month>11</month>
<year>2014</year>
</pub-date>
<pub-date pub-type="ppub">
<month>1</month>
<year>2015</year>
</pub-date>
<pub-date pub-type="pmc-release">
<day>01</day>
<month>7</month>
<year>2015</year>
</pub-date>
<volume>18</volume>
<issue>1</issue>
<fpage>138</fpage>
<lpage>144</lpage>
<pmc-comment>elocation-id from pubmed: 10.1038/nn.3883</pmc-comment>
<permissions>
<license xlink:href="http://www.nature.com/authors/editorial_policies/license.html#terms">
<license-p>Users may view, print, copy, and download text and data-mine the content in such documents, for the purposes of academic research, subject always to the full Conditions of use:
<ext-link ext-link-type="uri" xlink:href="http://www.nature.com/authors/editorial_policies/license.html#terms">http://www.nature.com/authors/editorial_policies/license.html#terms</ext-link>
</license-p>
</license>
</permissions>
<abstract>
<p id="P1">Proprioception—the sense of the body’s position in space—plays an important role in natural movement planning and execution and will likewise be necessary for successful motor prostheses and Brain–Machine Interfaces (BMIs). Here, we demonstrated that monkeys could learn to use an initially unfamiliar multi–channel intracortical microstimulation (ICMS) signal, which provided continuous information about hand position relative to an unseen target, to complete accurate reaches. Furthermore, monkeys combined this artificial signal with vision to form an optimal, minimum–variance estimate of relative hand position. These results demonstrate that a learning–based approach can be used to provide a rich artificial sensory feedback signal, suggesting a new strategy for restoring proprioception to patients using BMIs as well as a powerful new tool for studying the adaptive mechanisms of sensory integration.</p>
</abstract>
</article-meta>
</front>
<body>
<p id="P2">Humans plan and execute movements under the guidance of both vision and proprioception
<sup>
<xref rid="R1" ref-type="bibr">1</xref>
,
<xref rid="R2" ref-type="bibr">2</xref>
</sup>
. In particular, maximally precise movements are achieved by combining estimates of limb or target position from multiple sensory modalities, weighting each by its relative reliability
<sup>
<xref rid="R3" ref-type="bibr">3</xref>
<xref rid="R6" ref-type="bibr">6</xref>
</sup>
. Furthermore, in the absence of proprioception, even simple multi–joint movements become uncoordinated
<sup>
<xref rid="R7" ref-type="bibr">7</xref>
,
<xref rid="R8" ref-type="bibr">8</xref>
</sup>
. Therefore, we should not expect current brain–machine interfaces (BMIs), which rely on visual feedback alone, to achieve the fluidity and precision of natural movement. It follows that a critical next step for neural prosthetics is the development of artificial proprioception. As a demonstration of the potential value of somatosensory feedback, it has been shown that including natural kinesthetic feedback improves BMI control in intact monkeys to near–natural levels
<sup>
<xref rid="R9" ref-type="bibr">9</xref>
</sup>
. The ideal artificial proprioceptive signal would be able to fill the same roles that proprioception plays in natural motor control: providing sufficient information to allow competent performance in the absence of other sensory inputs, and permitting multisensory integration with vision to reduce movement variability when both signals are available. Here we present a proof–of–concept study showing that both of these goals can be achieved using multichannel intracortical microstimulation (ICMS).</p>
<p id="P3">Most efforts to develop artificial sensory signals have taken a biomimetic approach: trying to recreate the patterns of neural activity that underlie natural somatosensation
<sup>
<xref rid="R10" ref-type="bibr">10</xref>
<xref rid="R14" ref-type="bibr">14</xref>
</sup>
. We propose a complementary approach, which focuses not on reproducing natural patterns of activity, but instead on taking advantage of the natural mechanisms of sensorimotor learning and plasticity. In particular, the process of multisensory integration, where multiple sensory signals are combined to improve the precision of sensory estimates, is learned from cross–modal experience during development
<sup>
<xref rid="R15" ref-type="bibr">15</xref>
,
<xref rid="R16" ref-type="bibr">16</xref>
</sup>
and relies on a continuous process of adaptive recalibration even in adult humans and monkeys
<sup>
<xref rid="R17" ref-type="bibr">17</xref>
<xref rid="R19" ref-type="bibr">19</xref>
</sup>
. Recent theoretical work from our lab suggests that multisensory integration can be learned with experience through a simple Hebbian–like learning rule
<sup>
<xref rid="R20" ref-type="bibr">20</xref>
</sup>
. In this model, successful integration of two sensory signals depends not so much on choosing the right patterns of neural activity to encode spatial information, but rather on the presence of spatiotemporal correlations between input signals, which allow downstream neurons to learn the common underlying cause, e.g. hand position.</p>
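For intuition, here is a minimal, schematic sketch of such a Hebbian–like update (an illustration of the general principle only, not the actual network of ref. 20): weights grow between co-active inputs and downstream units, so two input populations whose activity is correlated through a common cause come to drive a shared downstream representation.

    import numpy as np

    def hebbian_update(w, pre, post, lr=1e-3):
        # Strengthen connections between co-active presynaptic inputs (e.g.,
        # concatenated visual and ICMS activity) and downstream units; inputs
        # correlated through a common cause (hand position) thereby converge
        # onto shared downstream representations.
        return w + lr * np.outer(post, pre)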
<p id="P4">Following these theoretical principles, we hypothesized that spatiotemporal correlations between a visual signal and novel artificial signal in a behavioral context would be sufficient for a monkey to learn to integrate the new modality. We tested this hypothesis by delivering real–time, artificial sensory feedback to monkeys via non–biomimetic patterns of ICMS across multiple electrodes in primary somatosensory cortex (S1). The monkeys ultimately learned to extract the task–relevant information from this signal and to integrate this information with natural sensory feedback.</p>
<sec sec-type="results" id="S1">
<title>RESULTS</title>
<sec id="S2">
<title>Behavioral task and feedback signals</title>
<p id="P5">Two rhesus macaques were trained to make instructed–delay center–out reaches to invisible targets (
<xref rid="F1" ref-type="fig">Fig. 1a</xref>
) in a virtual reality environment (
<xref rid="SD1" ref-type="supplementary-material">Supplementary Fig. 1</xref>
) guided by feedback that represented the vector (distance and direction) from the middle fingertip to the reach target (
<xref rid="F1" ref-type="fig">Fig. 1b</xref>
). This “movement vector” was not explicitly shown; instead, it was encoded by one of three feedback types: a visual signal (VIS), a signal delivered through patterned multi–channel ICMS pulse trains (ICMS), or a combination of these two signals (VIS+ICMS).</p>
<p id="P6">This task was chosen to best test whether the ICMS signal could provide position information that could both be integrated with vision and could replace it. By using natural movement, we obtained the most direct and precise estimates of how well the ICMS signal encoded sensory information about the limb (e.g., not confounded by additional performance noise due to imperfect BMI control). However, natural movement leaves natural proprioception intact, which would have made an ICMS signal encoding absolute limb position redundant. By encoding the relative positions of the limb and target, the VIS and ICMS signals provided a feedback variable that was both required to complete the task and that was not available from natural sensory signals.</p>
<sec id="S3">
<title>Visual feedback</title>
<p id="P7">The VIS signal was a random moving–dot flow–field (“dot–field”), where the direction and speed of the flow indicated the direction and distance to the target, respectively (
<xref rid="F1" ref-type="fig">Fig. 1b</xref>
and
<xref rid="SD3" ref-type="supplementary-material">Supplementary Video 1</xref>
). The reliability, or precision, of the dot–field was manipulated by changing its coherence—the percentage of dots moving in the same direction.</p>
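As a sketch of how such a cue can be parameterized (our own illustration; the published stimulus parameters are in the Online Methods, so the dot count and speed gain below are assumptions), a fraction of dots equal to the coherence moves along the movement–vector direction at a speed proportional to target distance, while the remaining dots move in random directions:

    import numpy as np

    def dot_field_velocities(move_vec, coherence, n_dots=200, gain=1.0, rng=None):
        # Velocities (vx, vy) for each dot of the flow-field cue. A fraction
        # `coherence` of dots moves along the movement-vector direction at a
        # speed proportional to target distance; the rest move randomly.
        # `n_dots` and `gain` are illustrative, not the published values.
        rng = np.random.default_rng() if rng is None else rng
        speed = gain * np.linalg.norm(move_vec)
        coh_angle = np.arctan2(move_vec[1], move_vec[0])
        n_coh = int(round(coherence * n_dots))
        angles = np.concatenate([
            np.full(n_coh, coh_angle),                    # coherent dots
            rng.uniform(0, 2 * np.pi, n_dots - n_coh),    # noise dots
        ])
        return speed * np.column_stack([np.cos(angles), np.sin(angles)])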
</sec>
<sec id="S4">
<title>ICMS feedback</title>
<p id="P8">Each monkey was chronically implanted with a high–density 96–electrode array in a region of primary somatosensory cortex that projects to higher cortical areas involved in visuomotor behavior such as reaching
<sup>
<xref rid="R21" ref-type="bibr">21</xref>
,
<xref rid="R22" ref-type="bibr">22</xref>
</sup>
(
<xref rid="F1" ref-type="fig">Fig. 1c</xref>
and
<xref rid="SD1" ref-type="supplementary-material">Supplementary Fig. 2</xref>
). For each array, we first selected a set of electrodes for which the monkey could detect ICMS pulse trains, assessed using a different task (see Online Methods). The ICMS signal, which was entirely novel to the animal, encoded the movement vector through the spatiotemporal patterns of biphasic current pulses across eight electrodes (Online Methods,
<xref rid="FD1" ref-type="disp-formula">Eqns. 1</xref>
and
<xref rid="FD2" ref-type="disp-formula">2</xref>
;
<xref rid="F1" ref-type="fig">Fig. 1d,e</xref>
and
<xref rid="SD3" ref-type="supplementary-material">Supplementary Video 2</xref>
). Movement vector direction was encoded by the relative rates of ICMS pulses across the set of electrodes: the pulse rate delivered on each electrode varied with the cosine of the angle between the instantaneous movement vector and the electrode’s “preferred direction” (Online Methods,
<xref rid="FD1" ref-type="disp-formula">Eqn. 1</xref>
), with the eight preferred directions spaced at 45° intervals around the circle and assigned independently of the response properties of the local neurons (
<xref rid="F1" ref-type="fig">Fig. 1c</xref>
). Movement vector distance was encoded by a linear scaling of the pulse rates on all electrodes (Online Methods,
<xref rid="FD2" ref-type="disp-formula">Eqn. 2</xref>
;
<xref rid="F1" ref-type="fig">Fig. 1d,e</xref>
).</p>
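A minimal sketch of this encoding scheme follows. Eqns. 1 and 2 themselves appear in the Online Methods and are not reproduced in this record, so the rate constants, the rectification, and the sign of the distance scaling below are assumptions for illustration:

    import numpy as np

    # Eight preferred directions at 45-degree spacing, as described in the text;
    # F_MAX and D_MAX are illustrative constants, not the published parameters.
    PREF_DIRS = np.arange(8) * np.pi / 4    # radians
    F_MAX = 300.0                           # assumed peak pulse rate (Hz)
    D_MAX = 115.0                           # outer target radius (mm, monkey F)

    def icms_pulse_rates(move_vec):
        # Direction: each electrode's rate varies with the cosine of the angle
        # between the movement vector and that electrode's preferred direction
        # (cf. Eqn. 1); rectified here so rates stay non-negative.
        angle = np.arctan2(move_vec[1], move_vec[0])
        tuning = np.maximum(0.0, np.cos(angle - PREF_DIRS))
        # Distance: a linear scaling applied to all electrodes (cf. Eqn. 2);
        # whether rates grow or shrink with distance is assumed here.
        scale = min(np.linalg.norm(move_vec), D_MAX) / D_MAX
        return F_MAX * scale * tuning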
</sec>
</sec>
<sec id="S5">
<title>Learning ICMS</title>
<p id="P9">Monkeys first learned to perform the task with VIS feedback alone. We quantified performance with three behavioral metrics designed to capture how well the animals made use of a sensory signal during reach planning and execution. Performance on VIS–only trials increased monotonically with increasing dot–field coherence for all behavioral metrics (
<xref rid="F2" ref-type="fig">Fig. 2</xref>
), demonstrating that differences in performance reflect the precision of the sensory cue.</p>
<p id="P10">After the monkeys could perform the reaching task using visual feedback, we tested our hypothesis that spatiotemporal correlations between vision and ICMS could drive integration of the new sensory modality. We did so by exposing the monkeys to paired, correlated VIS+ICMS feedback signals both during the instructed delay period (as static information) and throughout the reach (dynamically updated feedback). The visual signal was first set to 100% coherence, but was gradually reduced across training blocks to increase the relative value of the ICMS signal (ultimately settling at 20% for Monkey F and 25% for Monkey D). Under this training regime, the animals learned to integrate the two sensory signals, i.e. the addition of ICMS improved behavioral performance (see below). Animals needed more explicit instruction to learn to initiate movement on ICMS–only trials (see Methods). Once that was accomplished, the training regime changed to include ICMS–only trials (33%), a pragmatic choice intended to speed learning. A summary of the behavioral training regime can be found in
<xref rid="SD2" ref-type="supplementary-material">Supplementary Tables 1 and 2</xref>
.</p>
<p id="P11">We periodically assessed learning (approximately every 500–1000 training trials) by including testing blocks: trials of VIS–only, VIS+ICMS, and ICMS–only trials where the visual dot–field coherence could take a range of values ([0,10,15,25,50] for Monkey F; [0,15,25,50,100] for Monkey D).</p>
</sec>
<sec id="S6">
<title>Substitution and augmentation of vision by ICMS</title>
<p id="P12">We analyzed the data from testing blocks to determine how well the animals could interpret the ICMS signal, using it in place of vision to perform accurate reaches. Once the monkeys began making reaches on ICMS–only trials, they became increasingly proficient across training sessions (
<xref rid="F3" ref-type="fig">Fig. 3</xref>
and
<xref rid="SD1" ref-type="supplementary-material">Supplementary Fig. 3</xref>
), and performance on ICMS trials was ultimately comparable to performance with low–to–mid visual coherences (15–25% for Monkey D, 15% for Monkey F;
<xref rid="F2" ref-type="fig">Fig. 2</xref>
and
<xref rid="SD2" ref-type="supplementary-material">Supplementary Tables 3 and 4</xref>
). A more qualitative impression of the performance comparisons can be obtained from sample movement paths for various feedback conditions (
<xref rid="SD1" ref-type="supplementary-material">Supplementary Fig. 4</xref>
).</p>
<p id="P13">We next quantified how well the monkeys made use of the distance and direction information encoded in the ICMS signal. To do this, we analyzed the distance and direction of the initial movement segment of the reach, which reflected the animals’ estimates of the required movement vectors derived from sensory feedback during the instructed delay period (
<xref rid="F4" ref-type="fig">Fig. 4</xref>
). Although demonstrating some idiosyncratic biases (see below), monkey D was able to derive good estimates of target direction from ICMS (ICMS, R
<sup>2</sup>
= 0.900;
<xref rid="F4" ref-type="fig">Fig. 4a</xref>
; 100% VIS, R
<sup>2</sup>
= 0.957;
<xref rid="F4" ref-type="fig">Fig. 4a</xref>
). Monkey F was highly adept at estimating target angle (ICMS, regression R
<sup>2</sup>
= 0.948;
<xref rid="F4" ref-type="fig">Fig. 4b</xref>
), performing as well with ICMS as with the highest visual coherences (50% VIS, R
<sup>2</sup>
= 0.945;
<xref rid="F4" ref-type="fig">Fig. 4b</xref>
). Both monkeys were somewhat worse at estimating distance than direction from ICMS. Due to differences in performance across the workspace, we analyzed distance estimation separately for the two half–planes: the more proximal workspace, with target angles [−π, 0], and the more distal workspace, with target angles [0, π]. Monkey D could accurately estimate distance in the distal half of the workspace ([0, π], R
<sup>2</sup>
= 0.494;
<xref rid="F4" ref-type="fig">Fig. 4c</xref>
), but was less able to do so in the proximal half ([−π, 0], R
<sup>2</sup>
= 0.108;
<xref rid="F4" ref-type="fig">Fig. 4c</xref>
). Still, these values are comparable to those the animal achieved with the highest–coherence VIS feedback ([0, π], R
<sup>2</sup>
= 0.365; [−π, 0], R
<sup>2</sup>
= 0.176;
<xref rid="F4" ref-type="fig">Fig. 4c</xref>
). For monkey F, distance estimates were equally good across the workspace (gray symbols: [−π, 0], R
<sup>2</sup>
= 0.432; vermillion symbols: [0, π], R
<sup>2</sup>
= 0.473;
<xref rid="F4" ref-type="fig">Fig. 4d</xref>
) and largely fell within one target radius of the correct distance (though not necessarily inside the target on each trial, due to directional error), although these values are lower than those the animal achieved with high–coherence VIS feedback ([−π, 0], R
<sup>2</sup>
= 0.716; [0, π], R
<sup>2</sup>
= 0.751;
<xref rid="F4" ref-type="fig">Fig. 4d</xref>
). In summary, the task performance observed in the ICMS–only condition was driven by the animals’ ability to decode both distance and direction information from the ICMS signal.</p>
<p id="P14">In addition to serving as the training condition, VIS+ICMS trials provided a test of the animals’ ability to improve performance by combining information from the two sensory cues. This ability emerged during training, with performance on VIS+ICMS trials becoming progressively better than for VIS trials, even before the animals could complete reaches with ICMS alone (
<xref rid="F3" ref-type="fig">Fig. 3</xref>
and
<xref rid="SD1" ref-type="supplementary-material">Supplementary Fig. 3</xref>
), supporting the idea that multisensory integration drives ICMS learning
<sup>
<xref rid="R20" ref-type="bibr">20</xref>
</sup>
. Moreover, the hallmarks of multisensory integration were observed in the asymptotic performance on VIS+ICMS trials, after learning was complete (
<xref rid="F2" ref-type="fig">Fig. 2</xref>
). At intermediate dot–field coherences (10–25%), where performance between the two unimodal cues was similar, VIS+ICMS reaches were significantly better, i.e., faster and straighter (
<xref rid="SD2" ref-type="supplementary-material">Supplementary Table 3</xref>
). In contrast, at high (50–100%) and low (0%) dot–field coherences, behavior in the bimodal condition approximated that observed with the more reliable of the unimodal cues (
<xref rid="SD2" ref-type="supplementary-material">Supplementary Table 3</xref>
).</p>
</sec>
<sec id="S7">
<title>Minimum–variance integration of vision and ICMS</title>
<p id="P15">We next asked whether the visual and ICMS cues were integrated in an optimal, i.e. minimum variance manner, as is the case for natural visual and somatosensory signals
<sup>
<xref rid="R3" ref-type="bibr">3</xref>
,
<xref rid="R4" ref-type="bibr">4</xref>
</sup>
. The answer came from an analysis of the statistics of the initial reach directions, which gave the most direct readout of the animals’ estimate of target direction following the instructed delay period. The minimum variance model makes specific predictions about both the variance and bias of this estimate, and we consider each in turn.</p>
<p id="P16">We first considered how the variance of the initial reach angle depends on feedback condition (
<xref rid="F5" ref-type="fig">Fig. 5a</xref>
). For VIS trials, the initial angle variance increased dramatically with decreases in coherence, as expected if the variance reflects the residual uncertainty about cue direction after the instructed delay. Variability on ICMS trials was comparable to that on VIS trials at 15–25% visual coherence, consistent with the other movement metrics above. From these unimodal variances, we determined what the initial angle variance should be for the VIS+ICMS condition, under the model of minimum variance integration
<sup>
<xref rid="R4" ref-type="bibr">4</xref>
</sup>
. We computed this prediction under two limiting conditions: assuming that the initial angle variance arises only from variability in the sensory estimates of target direction (
<xref rid="F5" ref-type="fig">Fig. 5a</xref>
), or assuming that the measurements also include the maximal consistent level of downstream (e.g., motor) variability (see Online Methods;
<xref rid="F5" ref-type="fig">Fig. 5a</xref>
). The empirical variances observed in the VIS+ICMS condition followed the predicted trend closely, and lay between the two limiting predictions in the region where the animal received most of the multisensory training (20–25% coherence; see Online Methods). This comparison suggested that after training, the animals optimally combined the ICMS signal with vision. We next tested this conclusion further by analyzing the pattern of mean initial angles.</p>
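The two limiting predictions follow from the standard minimum–variance combination rule; a short sketch (our own restatement, with the motor–variance bound as an assumption):

    def predicted_bimodal_variance(var_vis, var_icms, var_motor=0.0):
        # Minimum-variance prediction for initial-angle variance on VIS+ICMS
        # trials. var_motor is the assumed downstream (motor) variance shared
        # by all conditions; it must be smaller than both unimodal variances.
        # var_motor = 0 gives the sensory-only limit.
        sens_v = var_vis - var_motor
        sens_i = var_icms - var_motor
        return (sens_v * sens_i) / (sens_v + sens_i) + var_motor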
<p id="P17">The animals exhibited idiosyncratic patterns of mean initial reach angle as a function of target angle. For Monkey D, these patterns were clearly distinct between the VIS and ICMS trials (
<xref rid="F5" ref-type="fig">Fig. 5b</xref>
; also see
<xref rid="SD1" ref-type="supplementary-material">Supplementary Fig. 5</xref>
for Monkey F, where the patterns were less well defined). Since the required movements were the same across cue conditions, these patterns likely arise from biased estimation of the target direction. Therefore, they offered another opportunity to test whether the VIS and ICMS signals were combined optimally
<sup>
<xref rid="R23" ref-type="bibr">23</xref>
</sup>
. Minimum variance integration predicts that in the VIS+ICMS condition, as the visual coherence increases from 0% to 100%, the animals should transition from relying primarily on the ICMS cue to primarily on the visual cue. This trend could be seen qualitatively in the pattern of mean initial angles for Monkey D (
<xref rid="F5" ref-type="fig">Fig. 5b</xref>
): at 15% coherence, the VIS+ICMS mean was close to that observed with ICMS alone; at 100% it was close to that observed with VIS. The relative weighting of the two modalities was estimated quantitatively by modeling the VIS+ICMS mean as an affine combination of the unimodal biases (Online Methods,
<xref rid="FD5" ref-type="disp-formula">Eqn. 4a</xref>
). As expected, the weighting of the visual cue smoothly transitioned from zero to unity as the visual coherence increased (
<xref rid="F5" ref-type="fig">Fig. 5c</xref>
). Under the model of minimum variance integration, each sensory cue should be weighted inversely proportional to its variance (Online Methods,
<xref rid="FD6" ref-type="disp-formula">Eqn. 4b</xref>
). Using the unimodal variance data from
<xref rid="F5" ref-type="fig">Figure 5a</xref>
, we obtained quantitative predictions for the cue weighting in the VIS+ICMS trials (
<xref rid="F5" ref-type="fig">Fig. 5c</xref>
; Online Methods,
<xref rid="FD7" ref-type="disp-formula">Eqns. 5a</xref>
,
<xref rid="FD8" ref-type="disp-formula">b</xref>
); these were in good agreement with the empirical data.</p>
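Since Eqns. 4a,b and 5a,b are not reproduced in this record, the following sketch restates the standard inverse–variance weighting they describe:

    def vis_weight(var_vis, var_icms):
        # Each cue is weighted inversely proportional to its variance
        # (cf. Online Methods, Eqn. 4b).
        return (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_icms)

    def predicted_bimodal_mean(bias_vis, bias_icms, var_vis, var_icms):
        # The VIS+ICMS mean initial angle modeled as an affine combination
        # of the unimodal biases (cf. Eqn. 4a).
        w = vis_weight(var_vis, var_icms)
        return w * bias_vis + (1.0 - w) * bias_icms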
</sec>
</sec>
<sec sec-type="discussion" id="S8">
<title>DISCUSSION</title>
<p id="P18">We have shown that multi–channel patterned ICMS of primary somatosensory cortex can be used to provide monkeys with continuous information about hand position that enables goal–directed reaching. In particular, the monkeys were able to use ICMS to estimate the distance and direction between their current hand location and the reach target. Furthermore, when both visual and ICMS feedback was available, the monkeys combined these signals to achieve increased levels of task performance, and they did so at—or near—theoretical optimal levels, as is observed for natural sensory signals
<sup>
<xref rid="R3" ref-type="bibr">3</xref>
,
<xref rid="R4" ref-type="bibr">4</xref>
</sup>
.</p>
<sec id="S9">
<title>What does the ICMS signal convey?</title>
<p id="P19">An important finding of this study is that animals can learn to use ICMS as a temporally continuous feedback signal. However, it is possible that the animals only used the ICMS signal to estimate the target location during the instructed delay period, with subsequent corrective sub–movements guided either by the remembered target location or simply by a random search. An analysis of corrective sub–movements,
<xref rid="F6" ref-type="fig">Figure 6</xref>
, shows that this is not the case. Sub–movement direction correlated well with the direction of the online movement vector (target − current hand position) at the end of the previous sub–movement. In fact, the precision of the corrective movements in the ICMS–only condition was comparable to that seen with high visual coherences, and was considerably higher than that observed with 0% VIS, where no directional information is available, or than would be expected by chance. These results suggest that the VIS and ICMS cues
<italic>were</italic>
being used as online feedback signals. Furthermore, if the animals had simply memorized the location of the target during the instructed delay period, we would expect a decline in precision across sequential corrective movements. Instead, the correlation between cued and executed sub–movements was largely consistent across corrective sub–movement number, with no clear increase in error variance for later sub–movements. These results strongly indicate that the animals were using the online feedback to execute corrective sub–movements.</p>
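A sketch of the comparison described here (our own illustration of the analysis, with an assumed data layout): each corrective sub-movement's direction is compared with the direction of the online movement vector at the end of the previous sub-movement.

    import numpy as np

    def submovement_direction_errors(hand_path, seg_ends, target):
        # hand_path: (T, 2) fingertip positions; seg_ends: indices marking the
        # end of each sub-movement; target: (2,) reach-target position.
        errors = []
        for prev_end, next_end in zip(seg_ends[:-1], seg_ends[1:]):
            cued = target - hand_path[prev_end]    # online movement vector
            executed = hand_path[next_end] - hand_path[prev_end]
            err = (np.arctan2(executed[1], executed[0])
                   - np.arctan2(cued[1], cued[0]))
            errors.append((err + np.pi) % (2 * np.pi) - np.pi)  # wrap to [-pi, pi)
        return np.array(errors)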
<p id="P20">Another key result of the paper is that with only eight electrodes, we were able to deliver continuous spatial information with a reliability comparable to that achieved with the visual cue. Initial angle estimation with the ICMS signal had the same variance as that observed with 15–20% visual coherence and was only about three times greater than that observed with the highest coherence for both animals. Furthermore, when the signals were used online, ICMS performance is even closer to that achieved with visual feedback (
<xref rid="F6" ref-type="fig">Fig. 6b</xref>
). The better performance (relative to vision) of ICMS during corrective movements could be due to the inherent delays in visual feedback, the relative importance of somatosensory feedback for online movement control, the shorter integration times available for online corrections, or a greater contribution of motor noise.</p>
</sec>
<sec id="S10">
<title>Learning to integrate a novel sensory signal</title>
<p id="P21">The ability to use the ICMS signal was necessarily learned during the training process, since, by design, the patterns of ICMS did not mimic naturally occurring signals in the brain. This learning could have been driven by several possible mechanisms of learning. Consider that, in this experiment, the visual and ICMS signals changed in a correlated fashion. Previous modeling work from our lab showed that in a network with unsupervised, Hebbian–like learning, such correlations are sufficient to learn optimal integration
<sup>
<xref rid="R20" ref-type="bibr">20</xref>
</sup>
; however, other learning mechanisms may also have contributed, including error–corrective or reinforcement learning
<sup>
<xref rid="R24" ref-type="bibr">24</xref>
,
<xref rid="R25" ref-type="bibr">25</xref>
</sup>
of a sensory–to–motor mapping from ICMS to the appropriate movement. Our experimental design cannot definitively distinguish between these possibilities, but the emergence of multisensory integration before animals could perform with ICMS alone (
<xref rid="F3" ref-type="fig">Fig. 3</xref>
and
<xref rid="SD1" ref-type="supplementary-material">Supplementary Fig. 3</xref>
) suggests that unsupervised, multisensory learning played a large, if not dominant, role.</p>
<p id="P22">While there is evidence of multisensory integration at all visual coherences (
<xref rid="F2" ref-type="fig">Fig. 2</xref>
), optimal performance in the VIS+ICMS condition was only observed at mid–level visual coherences; performance at the lowest and highest coherence levels only
<italic>approached</italic>
optimality (
<xref rid="F5" ref-type="fig">Fig. 5</xref>
). A likely explanation is that monkeys learned to integrate vision and ICMS optimally when vision matched or was close to the training coherence (20% for monkey F; 25% for monkey D). In fact, a similar effect was observed in the network model of unsupervised sensory integration
<sup>
<xref rid="R20" ref-type="bibr">20</xref>
</sup>
: small but apparent departures from optimal integration were seen when the unimodal variances deviated too far from the regime in which the network was trained. Those observations are qualitatively consistent with the experimental results we presented here.</p>
</sec>
<sec id="S11" sec-type="methods">
<title>ICMS feedback as a tool for studying multisensory circuits</title>
<p id="P23">Our result, that animals could integrate and use an ICMS signal to direct movement, offers a novel and potentially powerful tool for studying information processing in sensorimotor circuits. For example, the posterior parietal cortex uses sensory feedback for a variety of multisensory computations, including estimation of the position of the limb and the location of targets
<sup>
<xref rid="R26" ref-type="bibr">26</xref>
<xref rid="R30" ref-type="bibr">30</xref>
</sup>
. Computational models have been developed to demonstrate how neural circuits could perform these operations
<sup>
<xref rid="R20" ref-type="bibr">20</xref>
,
<xref rid="R31" ref-type="bibr">31</xref>
,
<xref rid="R32" ref-type="bibr">32</xref>
</sup>
, but testing these models has proven difficult. The challenge stems, in large part, from the fact that brain areas within the PPC exhibit complex, heterogeneous and partially redundant spatial representations
<sup>
<xref rid="R33" ref-type="bibr">33</xref>
<xref rid="R35" ref-type="bibr">35</xref>
</sup>
and interact in a complex network
<sup>
<xref rid="R22" ref-type="bibr">22</xref>
,
<xref rid="R36" ref-type="bibr">36</xref>
<xref rid="R39" ref-type="bibr">39</xref>
</sup>
, often with overlapping function
<sup>
<xref rid="R40" ref-type="bibr">40</xref>
,
<xref rid="R41" ref-type="bibr">41</xref>
</sup>
. It may not be possible to discover how information is processed within this complex circuit only by manipulating the distal sensory inputs. ICMS feedback, on the other hand, affords the experimenter proximal control of the afferent signal. This has two pertinent advantages. First, the anatomical origin of the signal can be controlled. In this particular study, we stimulated areas of somatosensory cortex that we know project directly to multisensory areas such as area 5 and VIP
<sup>
<xref rid="R42" ref-type="bibr">42</xref>
,
<xref rid="R43" ref-type="bibr">43</xref>
</sup>
; however, stimulation could be performed in other brain areas that participate more strongly in other cortical circuits, e.g., in area 7 to study spatial representations in the circuits for saccadic eye movements. Second, because the ICMS signal bypasses peripheral receptors and subcortical processing, it gives the experimenter finer control over the statistics of the signal. Signal statistics play a central role in current models of multisensory neuronal processing
<sup>
<xref rid="R20" ref-type="bibr">20</xref>
,
<xref rid="R32" ref-type="bibr">32</xref>
</sup>
, but the signal manipulations that would be most diagnostic for these models would be challenging or impossible to achieve with natural stimulation. With ICMS feedback, the manipulations become tractable: for example, changing the correlational structure between neurons in a given area or between neurons representing two sensory signals, and gaining full control over the timing and context of subjects’ exposure to the signal.</p>
<p id="P24">Furthermore, we think the learning observed in the present study taps into the same mechanisms of plasticity that drives other forms of multisensory learning, such as intersensory calibration—the ability of two sensory modalities to come back into alignment following a perturbation
<sup>
<xref rid="R17" ref-type="bibr">17</xref>
,
<xref rid="R19" ref-type="bibr">19</xref>
,
<xref rid="R39" ref-type="bibr">39</xref>
,
<xref rid="R44" ref-type="bibr">44</xref>
</sup>
. ICMS feedback offers an ideal tool for studying these mechanisms, e.g., for testing models of learning
<sup>
<xref rid="R20" ref-type="bibr">20</xref>
</sup>
in real neural circuits, for the reasons described above (local access and control of signal statistics) and because we can observe changes in the behavioral and electrophysiological state of the animal from the very first exposure to the novel signal.</p>
</sec>
<sec id="S12">
<title>Neuroprosthetic applications</title>
<p id="P25">In this study, we show that ICMS can be used to deliver a task–relevant feedback signal that guides online, multi–dimensional movement control. We chose to encode the position of the hand relative to the reach target, as opposed to an absolute proprioceptive signal, because this allowed us to study artificial sensation in a simple, natural task, without having to suppress natural proprioception. Because the reach target was never visible, estimation of the relative hand position was required in order to perform the task. To use this approach to provide proprioceptive feedback from a prosthetic device, the ICMS signal would instead encode the state of the device with respect to the body, for example joint or endpoint position or velocity. Because these variables are also available via visual feedback, the same learning mechanisms should apply.</p>
<p id="P26">We expect that ICMS feedback could play the same role for BMI control that proprioceptive feedback does for normal movement control. During natural movement, vision and proprioception make comparable contributions to limb state estimation
<sup>
<xref rid="R1" ref-type="bibr">1</xref>
<xref rid="R3" ref-type="bibr">3</xref>
,
<xref rid="R45" ref-type="bibr">45</xref>
</sup>
. While proprioceptive loss does not have a substantial effect on the simplest reaches when visual feedback is available
<sup>
<xref rid="R46" ref-type="bibr">46</xref>
</sup>
, it does impair performance of movements that require inter–joint coordination, which include most activities of daily life
<sup>
<xref rid="R7" ref-type="bibr">7</xref>
</sup>
. We have shown that the eight–channel ICMS signal used here can provide online feedback with a reliability comparable to that of vision. We expect that increasing the number of electrodes and optimizing the encoding scheme will further improve the quality and information capacity of this signal.</p>
<p id="P27">Our learning–based approach can be contrasted with a biomimetic approach—the attempt to reproduce natural patterns of sensory–evoked neural activity. In practice, a truly biomimetic stimulation scheme is not attainable, due to a range of technical and scientific considerations, such as the lack of access to the full neuronal population, inability to translate electrical stimulation into naturalistic neural activity patterns, an incomplete understanding of neural encoding mechanisms, and difficulty in characterizing those patterns in patients with sensory loss. Our approach circumvents these issues by taking advantage of the inherent plasticity of the brain. On the other hand, completely disregarding prior knowledge about natural neural signals, such as somatotopic organization, will impair initial performance and may limit the rate or extent of learning. In fact, learning–based and biomimetic approaches
<sup>
<xref rid="R10" ref-type="bibr">10</xref>
<xref rid="R14" ref-type="bibr">14</xref>
</sup>
are highly complementary, and systems optimized for clinical applications would likely benefit from taking a combined approach.</p>
</sec>
</sec>
<sec id="S13" specific-use="web-only">
<title>ONLINE METHODS</title>
<sec id="S14" sec-type="subjects">
<title>Subjects and Implants</title>
<p id="P28">All animal procedures were performed in accordance with the National Research Council’s Guide for the Care and Use of Laboratory Animals and were approved by the UCSF Institutional Animal Care and Use Committee. Two adult male rhesus macaque monkeys (
<italic>Macaca mulatta</italic>
) participated in this study. No statistical methods were used to pre-determine this sample size. Rather, we chose to use two animals because the goal of this study was to demonstrate a novel form of learning, and we were able to show statistically significant effects of both learning and sensory integration independently in each animal. Clear demonstration of such learning effects in two macaque monkeys meets the generally employed standard in the field.</p>
<p id="P29">Each animal was chronically implanted with a 96–channel silicon microelectrode array coated with Iridium Oxide (Blackrock Microsystems, Salt Lake City, UT) over their left primary somatosensory cortices (Brodmann Areas 1, 2; S1). The cells on monkey F’s array had receptive fields spanning the shoulder, back, side of the head, ear and occiput (
<xref rid="SD1" ref-type="supplementary-material">Supplementary Fig. 2</xref>
) whereas for monkey D most receptive fields spanned the arm and shoulder (
<xref rid="SD1" ref-type="supplementary-material">Supplementary Fig. 2</xref>
).</p>
</sec>
<sec id="S15">
<title>Behavioral Task</title>
<p id="P30">The animals were trained to perform reaches in the horizontal plane to an unseen target in a two–dimensional virtual reality environment, where a mirror and an opaque barrier prevented direct vision of the arm (
<xref rid="SD1" ref-type="supplementary-material">Supplementary Fig. 1</xref>
). The mirror reflected visual input from a projector, so that the visual cues appeared in the horizontal plane of the reaching hand. Fingertip position was monitored with an electromagnetic position sensor (Polhemus Liberty, Colchester, VT) at 240 Hz.</p>
<p id="P31">Each trial consisted of four epochs (
<xref rid="F1" ref-type="fig">Fig. 1a</xref>
). i) The monkeys moved the middle fingertip of their right hand to a fixed start position, located in the center of the screen and indicated by a circular visual target (10 mm radius). ii) After a brief delay (0.25 and 0.5 s for Monkeys D and F, respectively), the target cue was initiated, indicating the movement vector between the monkey’s current finger position and the center of the unseen reach target (12 mm radius). Targets were selected uniformly from an annulus centered on the start target with an inner radius of 40 mm and an outer radius of 115 mm (80 mm for Monkey D). The movement vector cue was provided in the form of a dot–field (VIS), multichannel ICMS (ICMS), or both (VIS+ICMS). The monkeys were required to hold their position during this instructed–delay interval (0.2–0.7 s and 1–1.5 s, monkeys D and F, respectively). iii) After a go cue (750 Hz tone, 0.5 s) the monkeys made a reach under the guidance of continuously updating VIS, ICMS or VIS+ICMS feedback. iv) After acquiring the target and holding for 400 ms (monkey D) or 500 ms (monkey F), the monkeys received a liquid reward. Trials were terminated without reward if the monkeys moved too early during any of the delay intervals or if they failed to reach the target before a timeout (10 s). Different task parameters were selected for each animal to minimize the number of failed trials (e.g., aborted hold at start or target), and therefore reflect the animals’ idiosyncratic behavioral tendencies.</p>
</sec>
<sec id="S16">
<title>Visual feedback</title>
<p id="P32">For vision, the movement vector was encoded using a random moving–dot flow–field (“dot–field”) consisting of approximately 600 dots over the visual display (roughly 53 cm x 33 cm, in the reaching plane). Each dot was initialized to a random location on screen, and had a lifetime of 4 seconds (phases randomized), after which it reappeared at a new random location. Each dot in the dot–field moved at the same angle as the movement vector and at a speed proportional to the length of the movement vector, but could not exceed a maximum of 50 cm/s for Monkey D and 40 cm/s for Monkey F. A percentage of the dots moved coherently together in the direction of the continuously updating movement vector. The remaining dots moved in random directions, selected independently and uniformly from the circle. The percentage of dots moving coherently—the dot field coherence—was parametrically varied in order to manipulate the precision of the visual feedback.</p>
</sec>
<sec id="S17">
<title>ICMS</title>
<p id="P33">Intracortical microstimulation consisted of biphasic, charge–balanced pulse trains delivered asynchronously to each of eight electrodes in the array. The pulse trains were cathode–leading and symmetric, with 200 μs/phase and a 250 μs phase separation. The pulse amplitudes varied across electrodes, depending on perceptual threshold (see below) and ranged between 34–60 μA for Monkey D, and 30–80 μA for Monkey F.</p>
<sec id="S18">
<title>ICMS Detection</title>
<p id="P34">A preliminary two–alternative forced choice task was used to determine the threshold pulse amplitudes at which the animals could detect ICMS on a given electrode. The monkeys first moved to a fixed start position near the midline (as in main task above) and maintained that position for 0.5 s. Next, there was a 0.5 s instructed delay period during which two reach targets were displayed, to the right and left of midline. The presence of an ICMS pulse train (100 Hz, 0.5 s) cued the animal to reach left; its absence cued the rightward reach. Animals were initially trained on this task using multi–electrode stimulation, and the task was then used to identify electrodes on which ICMS was detectable. Eight such electrodes were identified for Monkey F. For Monkey D, seven such electrodes were identified; the final electrode could not be detected when stimulated alone with amplitudes of less than 60 μA.</p>
</sec>
<sec id="S19">
<title>Movement Vector Encoding</title>
<p id="P35">For ICMS, the movement vector was encoded in the spatial and temporal patterns of stimulation across the array (
<xref rid="F1" ref-type="fig">Fig. 1</xref>
). Movement vector direction was encoded by the relative stimulation pulse rates across the electrodes. First, each of the eight electrodes was arbitrarily assigned one of eight preferred directions (
<italic>PD</italic>
), equally spaced around the circle. Then, the stimulation pulse rate
<italic>f
<sub>i</sub>
</italic>
of electrode
<italic>i</italic>
was calculated as a function of the angle between the direction of the movement vector,
<italic>θ</italic>
, and the electrode’s assigned PD,</p>
<disp-formula id="FD1">
<label>(1)</label>
<mml:math id="M1" display="block" overflow="scroll">
<mml:msub>
<mml:mi>f</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mi>f</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo stretchy="false">(</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>+</mml:mo>
<mml:mo>cos</mml:mo>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>θ</mml:mi>
<mml:mo>-</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mi>D</mml:mi>
</mml:mrow>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo>/</mml:mo>
<mml:mn>2</mml:mn>
<mml:mo>,</mml:mo>
<mml:mspace width="0.38889em"></mml:mspace>
<mml:mspace width="0.38889em"></mml:mspace>
<mml:mspace width="0.38889em"></mml:mspace>
<mml:mi>i</mml:mi>
<mml:mo>∈</mml:mo>
<mml:mo stretchy="false">{</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>:</mml:mo>
<mml:mn>8</mml:mn>
<mml:mo stretchy="false">}</mml:mo>
<mml:mo>.</mml:mo>
</mml:math>
</disp-formula>
<p id="P36">The frequency scaling factor,
<italic>f
<sub>0</sub>
</italic>
, linearly encoded the movement vector distance,
<italic>d</italic>
, within the range of 100–300 Hz:
<disp-formula id="FD2">
<label>(2)</label>
<mml:math id="M2" display="block" overflow="scroll">
<mml:msub>
<mml:mi>f</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mrow>
<mml:mo>{</mml:mo>
<mml:mtable columnalign="left">
<mml:mtr columnalign="left">
<mml:mtd columnalign="left">
<mml:mrow>
<mml:mn>300</mml:mn>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:mtd>
<mml:mtd columnalign="right">
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>f</mml:mi>
<mml:mspace width="0.16667em"></mml:mspace>
<mml:mi>d</mml:mi>
<mml:mo>/</mml:mo>
<mml:msub>
<mml:mi>d</mml:mi>
<mml:mo>max</mml:mo>
</mml:msub>
<mml:mo>≥</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr columnalign="left">
<mml:mtd columnalign="left">
<mml:mrow>
<mml:msub>
<mml:mi>f</mml:mi>
<mml:mo>max</mml:mo>
</mml:msub>
<mml:mfrac>
<mml:mi>d</mml:mi>
<mml:msub>
<mml:mi>d</mml:mi>
<mml:mo>max</mml:mo>
</mml:msub>
</mml:mfrac>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:mtd>
<mml:mtd columnalign="right">
<mml:mi mathvariant="italic">otherwise</mml:mi>
</mml:mtd>
</mml:mtr>
<mml:mtr columnalign="left">
<mml:mtd columnalign="left">
<mml:mrow>
<mml:mn>100</mml:mn>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:mtd>
<mml:mtd columnalign="right">
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>f</mml:mi>
<mml:mspace width="0.16667em"></mml:mspace>
<mml:mi>d</mml:mi>
<mml:mo>/</mml:mo>
<mml:msub>
<mml:mi>d</mml:mi>
<mml:mo>max</mml:mo>
</mml:msub>
<mml:mo><</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>/</mml:mo>
<mml:mn>3</mml:mn>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
<mml:mo>,</mml:mo>
</mml:math>
</disp-formula>
where
<italic>d</italic>
<sub>max</sub>
was 10 cm for Monkey F and 13.6 cm for Monkey D. The values of
<italic>d</italic>
and
<italic>θ</italic>
were continuously updated during the reach to provide online feedback (
<xref rid="F1" ref-type="fig">Fig. 1d,e</xref>
). The range of
<italic>f</italic>
<sub>0</sub>
was restricted to [100, 300] Hz to ensure that the monkeys could always detect the ICMS signal.</p>
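<p>A minimal Python sketch of this encoding scheme (illustrative only; function and variable names are our own) computes the per–electrode pulse rates of Equations 1 and 2 from a movement vector:</p>
<preformat>
import numpy as np

# Eight arbitrary preferred directions, equally spaced around the circle
PD = np.arange(8) * 2.0 * np.pi / 8.0

def pulse_rates(theta, d, d_max=0.10, f_max=300.0, f_min=100.0):
    """Stimulation pulse rate (Hz) for each electrode (Equations 1 and 2).

    The scaling factor f0 linearly encodes distance d within
    [f_min, f_max]; each electrode's rate is then modulated by the cosine
    of the angle between the movement direction theta and the electrode's
    preferred direction.
    """
    frac = d / d_max
    if frac >= 1.0:
        f0 = f_max
    elif frac &lt; 1.0 / 3.0:   # floor keeps the signal detectable
        f0 = f_min
    else:
        f0 = f_max * frac
    return f0 * (1.0 + np.cos(theta - PD)) / 2.0

# Example: hand 7 cm from the target, at 45 degrees
rates = pulse_rates(np.pi / 4.0, 0.07)
</preformat>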
</sec>
</sec>
<sec id="S20">
<title>Experimental Design</title>
<p id="P37">Behavioral sessions were divided into
<italic>training blocks</italic>
and
<italic>testing blocks</italic>
. The details of those blocks changed during the course of training, as described here and in
<xref rid="SD2" ref-type="supplementary-material">Supplementary Tables 1 and 2</xref>
.</p>
<sec id="S21">
<title>Behavioral Training</title>
<p id="P38">Monkeys first learned to perform the behavioral task using vision alone, with 100% coherence. Next, we began training with only VIS+ICMS trials. A dot–field coherence of 100% was used initially, and that value was slowly decreased across sessions to encourage the animals to use to the ICMS signal. Coherence values were lowered to 25% for Monkey D and 20% for Monkey F, depending on the animal’s performance level in the VIS conditions. This training regime was employed for approximately 20,000 training trials with Monkey D and 40,000 training trials with Monkey F, at which point the animals showed clear evidence of sensory integration of the VIS and ICMS signals—improved performance on VIS+ICMS trials compared to VIS trials, as evaluated on testing blocks (see below). We then altered the training regime to include 33% ICMS–only trials and 67% VIS+ICMS trials (see
<xref rid="SD2" ref-type="supplementary-material">Supplementary Tables 1 and 2</xref>
) once the animals were able to perform ICMS–only trials in the testing blocks (see below).</p>
</sec>
<sec id="S22">
<title>Behavioral Testing</title>
<p id="P39">In between blocks of training, approximately every ~500–1000 training trials, the animals performed a testing block to quantify performance across all feedback conditions. By the end of the experiment, a total of 11 feedback conditions were used: VIS, VIS+ICMS, and ICMS, with dot–field coherences of [0, 15, 25, 50, 100]% for monkey D and [0, 10, 15, 25, 50]% for monkey F, with the difference between animals reflecting individual performance levels in the VIS condition. At the beginning of the experiment, animals had not been exposed to lower visual coherences and could not perform the task at those coherences. Lower visual coherences for testing were introduced gradually during the course of the experiment as performance improved (see
<xref rid="F3" ref-type="fig">Fig. 3</xref>
and
<xref rid="SD1" ref-type="supplementary-material">Supplementary Fig. 3</xref>
). For all testing blocks, all conditions were randomized across trials.</p>
<p id="P40">As noted above, testing sessions revealed evidence of sensory integration of the VIS and ICMS signals prior to the animals performing the ICMS–only task. Yet at this stage, the animals still failed to initiate movement on ICMS–only trials, suggesting that they did not generalize the task instructions to trials with no visual cues. We therefore temporarily modified the ICMS–only testing protocol to help them generalize (
<xref rid="SD2" ref-type="supplementary-material">Supplementary Tables 1 and 2</xref>
). First, we paired ICMS with a visible target circle during ICMS–only testing trials. The brightness of the target circle was gradually decreased, until it was finally removed entirely, and the monkeys were reaching with ICMS alone (
<xref rid="F3" ref-type="fig">Fig. 3</xref>
and
<xref rid="SD1" ref-type="supplementary-material">Supplementary Fig. 3</xref>
). Next, the radius of the reach target for ICMS–only trials was temporarily increased from 12 mm to 36 mm—for both testing and training trials—to avoid discouraging the animal from performing these trials. The radius of the target was then decreased across training sessions until the monkey was completing reaches to a standard 12 mm radius target (
<xref rid="F3" ref-type="fig">Fig. 3</xref>
and
<xref rid="SD1" ref-type="supplementary-material">Supplementary Fig. 3</xref>
).</p>
</sec>
</sec>
<sec id="S23">
<title>Data Analysis</title>
<sec id="S24">
<title>Behavioral Performance Measures</title>
<p id="P41">We quantified the animals’ ability to use the various sensory cues with the following performance measures (details of each are given below): i) percent correct trials, ii) number of movement sub–segments, iii) normalized movement time, iv) normalized path length, v) mean and variance of initial angle of the movement. The first four metrics assessed animals’ use of the sensory cues throughout the trial, including movement planning during the instructed delay period and online movement control. The statistics of the initial angle assess only movement planning. The performance summaries in
<xref rid="F2" ref-type="fig">Figure 2</xref>
were computed from the last seven testing sessions available for each monkey.</p>
<list list-type="roman-lower" id="L1">
<list-item>
<p id="P42">
<italic>Percent correct trials:</italic>
This is the number of trials in which a monkey acquired the target and received a reward, expressed as a percentage of the total number of trials in which a reach movement was initiated. This analysis excludes errors such as reaches beginning before the go cue or failures to initiate a trial.</p>
</list-item>
<list-item>
<p id="P43">
<italic>Number of movement sub–segments:</italic>
This metric quantifies the number of discrete sub–movements in a trial. Starting with the model assumption that sub–movements have bell–shaped velocity profiles
<sup>
<xref rid="R47" ref-type="bibr">47</xref>
</sup>
, we identified sub–movements by threshold crossings of the radial velocity profile of a trajectory, with a threshold of 20% of the maximum velocity on a given trial (a minimal segmentation sketch follows this list).</p>
</list-item>
<list-item>
<p id="P44">
<italic>Normalized movement time:</italic>
Since maximum movement velocity was largely independent of movement distance (data not shown), targets that were farther away took longer to reach. Therefore, we normalized the movement time by the distance from the starting point to the target.</p>
</list-item>
<list-item>
<p id="P45">
<italic>Normalized path length:</italic>
Similarly, we normalized the integrated path length by the distance from the starting point to the target.</p>
</list-item>
<list-item>
<p id="P46">
<italic>Mean and variance of the initial angle.</italic>
For each monkey and feedback condition, we first computed a smoothed estimate of the mean initial angle as a function of target angle (robust locally weighted scatterplot smoothing, using the MATLAB
<italic>smooth</italic>
function, with a window of 40 data points). The initial angle variance was computed about this mean. Standard errors for the mean variance were estimated via bootstrapping
<sup>
<xref rid="R48" ref-type="bibr">48</xref>
</sup>
.</p>
</list-item>
</list>
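<p>The following minimal Python sketch (illustrative only; not the analysis code) implements the sub–movement segmentation referenced in measure (ii): contiguous runs of radial velocity above 20% of the trial’s peak velocity are counted as sub–movements.</p>
<preformat>
import numpy as np

def count_submovements(radial_velocity, threshold_frac=0.2):
    """Count movement sub-segments from one trial's radial velocity trace.

    Under the assumption of bell-shaped sub-movement velocity profiles,
    each contiguous supra-threshold run is one sub-movement; the threshold
    is a fraction of the trial's maximum radial velocity.
    """
    v = np.asarray(radial_velocity, dtype=float)
    above = v > threshold_frac * v.max()
    onsets = np.flatnonzero(above[1:] &amp; ~above[:-1]) + 1  # rising crossings
    return int(above[0]) + len(onsets)

# Example: two bell-shaped velocity pulses yield two sub-movements
t = np.linspace(0.0, 2.0, 480)
v = np.exp(-((t - 0.5) / 0.12) ** 2) + 0.4 * np.exp(-((t - 1.4) / 0.10) ** 2)
assert count_submovements(v) == 2
</preformat>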
</sec>
<sec id="S25">
<title>Quantifying use of direction and distance from the ICMS signal</title>
<p id="P47">A typical reach consisted of a long initial movement segment followed by one or more shorter, corrective sub–movements. The distance and direction of this initial reach can be taken to reflect the monkey’s estimate of target distance and direction, as decoded from the sensory information available during the instructed delay period.</p>
<list list-type="roman-lower" id="L2">
<list-item>
<p id="P48">
<italic>Direction estimation</italic>
: We assessed the monkeys’ ability to estimate target direction from ICMS by regressing initial movement angle against target angle for ICMS–only trials (a minimal sketch of this regression follows the list). The initial movement angle was measured using the first movement sub–segment, as described above. This assay ignores target–dependent biases in initial direction (see above), and is therefore a conservative estimate of the animal’s ability to decode target direction.</p>
</list-item>
<list-item>
<p id="P49">
<italic>Distance estimation</italic>
: We assessed the monkeys’ ability to estimate target distance from ICMS by regressing initial movement distance against target distance for ICMS–only trials. On a subset of trials, however, the initial movement deviated from the norm: animals sometimes made a small initial reach that was followed by several larger corrections. On these trials the distance of the initial reach segment was uncorrelated with movement distance. Therefore, for this analysis we excluded trials for which the first movement segment was not the longest segment. This occurred in 30.2% of the trials for Monkey D and 12.2% of the trials for Monkey F.</p>
</list-item>
</list>
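<p>A minimal Python sketch of the regression analyses referenced above (illustrative only; the synthetic data are not from the experiments):</p>
<preformat>
import numpy as np

def regression_slope(x, y):
    """Least-squares slope and intercept of y regressed on x."""
    slope, intercept = np.polyfit(x, y, 1)
    return slope, intercept

# Synthetic illustration: if initial movement angles track target angles,
# the fitted slope is near 1. (For simplicity this sketch assumes angles
# are unwrapped so each initial angle lies within pi of its target angle.)
rng = np.random.default_rng(0)
target_angle = rng.uniform(-np.pi, np.pi, 200)
initial_angle = target_angle + rng.normal(0.0, 0.3, 200)
slope, _ = regression_slope(target_angle, initial_angle)
print(f"direction slope: {slope:.2f}")  # approximately 1
</preformat>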
</sec>
<sec id="S26">
<title>Model prediction for initial angle variance</title>
<p id="P50">Under the model of minimum variance sensory integration
<sup>
<xref rid="R8" ref-type="bibr">8</xref>
</sup>
, we could predict the sensory variability in the bimodal condition from the variability for each unimodal condition. We focused on variability in the animals’ estimate of the target angle based on the sensory cues during the instructed delay period. Unimodal variances were computed from the variability in initial movement direction,
<inline-formula>
<mml:math id="M3" overflow="scroll">
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mi mathvariant="italic">VIS</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:math>
</inline-formula>
and
<inline-formula>
<mml:math id="M4" overflow="scroll">
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mi mathvariant="italic">ICMS</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:math>
</inline-formula>
, calculated as described for performance measure (v) above. Next, for each coherence level, we predicted the bimodal variance under two limiting assumptions. First, we used the raw initial angle variance directly, which implicitly assumes that all movement variability derives from sensory variability:
<disp-formula id="FD3">
<label>(3a)</label>
<mml:math id="M5" display="block" overflow="scroll">
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mrow>
<mml:mi mathvariant="italic">VIS</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi mathvariant="italic">ICMS</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mi mathvariant="italic">VIS</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mi mathvariant="italic">ICMS</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mi mathvariant="italic">VIS</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>+</mml:mo>
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mi mathvariant="italic">ICMS</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mo>.</mml:mo>
</mml:math>
</disp-formula>
</p>
<p id="P51">Second, we assumed that the movement variability was the largest possible value that would still be consistent with the data—this is the smallest initial angle variance across conditions, which we denote with
<inline-formula>
<mml:math id="M6" overflow="scroll">
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mi mathvariant="italic">MIN</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:math>
</inline-formula>
. Under this model, the bimodal initial angle variance is:
<disp-formula id="FD4">
<label>(3b)</label>
<mml:math id="M7" display="block" overflow="scroll">
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mrow>
<mml:mi mathvariant="italic">VIS</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi mathvariant="italic">ICMS</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mi mathvariant="italic">VIS</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>-</mml:mo>
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mi mathvariant="italic">MIN</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo stretchy="false">(</mml:mo>
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mi mathvariant="italic">ICMS</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>-</mml:mo>
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mi mathvariant="italic">MIN</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mi mathvariant="italic">VIS</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>+</mml:mo>
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mi mathvariant="italic">ICMS</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>-</mml:mo>
<mml:mn>2</mml:mn>
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mi mathvariant="italic">MIN</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mo>+</mml:mo>
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mi mathvariant="italic">MIN</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>.</mml:mo>
</mml:math>
</disp-formula>
</p>
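<p>A minimal Python sketch of these two predictions (illustrative only; names are our own):</p>
<preformat>
def predict_bimodal_variance(var_vis, var_icms, var_min=0.0):
    """Predicted VIS+ICMS initial-angle variance (Equations 3a and 3b).

    With var_min = 0 this reduces to Equation 3a (all variability is
    sensory); a positive var_min treats that much variance as motor noise
    common to every condition (Equation 3b).
    """
    s_vis = var_vis - var_min    # sensory component of the VIS variance
    s_icms = var_icms - var_min  # sensory component of the ICMS variance
    return s_vis * s_icms / (s_vis + s_icms) + var_min

# Example: equal unimodal variances halve under optimal integration (Eq. 3a)
assert predict_bimodal_variance(2.0, 2.0) == 1.0
</preformat>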
</sec>
<sec id="S27">
<title>Model predictions for mean initial angle</title>
<p id="P52">The plots of mean initial angle for monkey D show a clear dependence on the feedback type (
<xref rid="F5" ref-type="fig">Fig. 5b</xref>
). If we suppose that these differences reflect biases in the sensory estimates of target direction, then the minimum variance model can be used to predict, for each coherence value, the mean initial angle in the VIS+ICMS trials from those measured in the unimodal trials:
<disp-formula id="FD5">
<label>(4a)</label>
<mml:math id="M8" display="block" overflow="scroll">
<mml:msub>
<mml:mover accent="true">
<mml:mi>θ</mml:mi>
<mml:mo>¯</mml:mo>
</mml:mover>
<mml:mrow>
<mml:mi mathvariant="italic">VIS</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi mathvariant="italic">ICMS</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mi mathvariant="italic">VIS</mml:mi>
<mml:mrow>
<mml:mo>-</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:msub>
<mml:mover accent="true">
<mml:mi>θ</mml:mi>
<mml:mo>¯</mml:mo>
</mml:mover>
<mml:mi mathvariant="italic">VIS</mml:mi>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mi mathvariant="italic">ICMS</mml:mi>
<mml:mrow>
<mml:mo>-</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:msub>
<mml:mover accent="true">
<mml:mi>θ</mml:mi>
<mml:mo>¯</mml:mo>
</mml:mover>
<mml:mi mathvariant="italic">ICMS</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mi mathvariant="italic">VIS</mml:mi>
<mml:mrow>
<mml:mo>-</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo>+</mml:mo>
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mi mathvariant="italic">ICMS</mml:mi>
<mml:mrow>
<mml:mo>-</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mo>.</mml:mo>
</mml:math>
</disp-formula>
</p>
<p id="P53">
<xref rid="FD5" ref-type="disp-formula">Equation 4</xref>
can be summarized by the predicted visual cue weighting for each coherence,</p>
<p id="P54">
<disp-formula id="FD6">
<label>(4b)</label>
<mml:math id="M9" display="block" overflow="scroll">
<mml:msub>
<mml:mi>w</mml:mi>
<mml:mi mathvariant="italic">VIS</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mi mathvariant="italic">VIS</mml:mi>
<mml:mrow>
<mml:mo>-</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mrow>
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mi mathvariant="italic">VIS</mml:mi>
<mml:mrow>
<mml:mo>-</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo>+</mml:mo>
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mi mathvariant="italic">ICMS</mml:mi>
<mml:mrow>
<mml:mo>-</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mo>,</mml:mo>
</mml:math>
</disp-formula>
which depends only on the unimodal initial angle variances.</p>
<p id="P55">We compared these model predictions to empirical values of the visual cue weighting, estimated from the mean initial angles. First, we divided the workspace into octants, and for each octant and feedback condition, we computed the mean difference between the initial angle and the target angle, we which denote here as
<italic>δ̄
<sub>i</sub>
</italic>
<sub>,</sub>
<italic>
<sub>x</sub>
δ̄
<sub>i</sub>
</italic>
<sub>,</sub>
<italic>
<sub>x</sub>
</italic>
for octant
<italic>i</italic>
and condition
<italic>x</italic>
. For each octant and coherence level, we then estimated the visual cue weighting as:
<disp-formula id="FD7">
<label>(5a)</label>
<mml:math id="M10" display="block" overflow="scroll">
<mml:msub>
<mml:mi>w</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi mathvariant="italic">VIS</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mover accent="true">
<mml:mi>δ</mml:mi>
<mml:mo>¯</mml:mo>
</mml:mover>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi mathvariant="italic">VIS</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi mathvariant="italic">ICMS</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>-</mml:mo>
<mml:msub>
<mml:mover accent="true">
<mml:mi>δ</mml:mi>
<mml:mo>¯</mml:mo>
</mml:mover>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi mathvariant="italic">ICMS</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mover accent="true">
<mml:mi>δ</mml:mi>
<mml:mo>¯</mml:mo>
</mml:mover>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi mathvariant="italic">VISS</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>-</mml:mo>
<mml:msub>
<mml:mover accent="true">
<mml:mi>δ</mml:mi>
<mml:mo>¯</mml:mo>
</mml:mover>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi mathvariant="italic">ICMS</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfrac>
<mml:mo>.</mml:mo>
</mml:math>
</disp-formula>
</p>
<p id="P56">The standard error,
<italic>s
<sub>i</sub>
</italic>
<sub>,</sub>
<italic>
<sub>VIS</sub>
s
<sub>i</sub>
</italic>
<sub>,</sub>
<italic>
<sub>VIS</sub>
</italic>
of each
<italic>w
<sub>i</sub>
</italic>
<sub>,</sub>
<italic>
<sub>VIS</sub>
w
<sub>i</sub>
</italic>
<sub>,</sub>
<italic>
<sub>VIS</sub>
</italic>
was estimated from the standard errors of the component means,
<italic>δ̄
<sub>i</sub>
</italic>
<sub>,</sub>
<italic>
<sub>x</sub>
δ̄
<sub>i</sub>
</italic>
<sub>,</sub>
<italic>
<sub>x</sub>
</italic>
, by propagation of errors. Finally, for each coherence level we computed the mean visual cue weighting across octants, with each octant weighted by its standard error:</p>
<disp-formula id="FD8">
<label>(5b)</label>
<mml:math id="M11" display="block" overflow="scroll">
<mml:msub>
<mml:mi>w</mml:mi>
<mml:mi mathvariant="italic">VIS</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mstyle displaystyle="true">
<mml:munderover>
<mml:mo>∑</mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mn>8</mml:mn>
</mml:munderover>
</mml:mstyle>
<mml:mrow>
<mml:msubsup>
<mml:mi>s</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi mathvariant="italic">VIS</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>-</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:msub>
<mml:mi>w</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi mathvariant="italic">VIS</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mstyle displaystyle="true">
<mml:munderover>
<mml:mo>∑</mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mn>8</mml:mn>
</mml:munderover>
</mml:mstyle>
<mml:msubsup>
<mml:mi>s</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi mathvariant="italic">VIS</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>-</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mo>.</mml:mo>
</mml:math>
</disp-formula>
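<p>A minimal Python sketch of the predicted and empirical cue weightings (Equations 4b, 5a and 5b; illustrative only, with our own function names):</p>
<preformat>
import numpy as np

def w_vis_predicted(var_vis, var_icms):
    """Predicted visual weight from unimodal variances (Equation 4b)."""
    return (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_icms)

def w_vis_empirical(bias_bimodal, bias_vis, bias_icms):
    """Empirical visual weight for one octant from mean angular
    differences between initial and target angle (Equation 5a)."""
    return (bias_bimodal - bias_icms) / (bias_vis - bias_icms)

def w_vis_pooled(weights, standard_errors):
    """Mean weight across octants, each octant weighted by its inverse
    squared standard error (Equation 5b)."""
    w = np.asarray(weights, dtype=float)
    s = np.asarray(standard_errors, dtype=float)
    return np.sum(w / s**2) / np.sum(1.0 / s**2)

# Example: a reliable visual cue (low variance) dominates the estimate
assert w_vis_predicted(1.0, 4.0) == 0.8
</preformat>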
</sec>
</sec>
</sec>
<sec sec-type="supplementary-material" id="S28">
<title>Supplementary Material</title>
<supplementary-material content-type="local-data" id="SD1">
<label>1</label>
<caption>
<p id="P59">
<bold>Supplementary Figure 1</bold>
. Virtual reality environment. Animals sit in a virtual reality environment without direct view of the arm. A mirror reflects images from a rear–projection screen and is adjusted so that visual cues appear in the horizontal plane of the reaching hand. Hand position was tracked electromagnetically (Polhemus Liberty, Colchester, VT), and feedback about the position of the hand, relative to the target, was delivered via a random–dot visual flow field (inset) or via patterned ICMS.</p>
<p id="P60">
<bold>Supplementary Figure 2</bold>
. Physiological properties of stimulated somatosensory cortex. Location of electrode arrays within S1 (right) and example neuronal receptive fields (left) for Monkey D (top) and Monkey F (bottom). Colored circles (right) indicate array locations corresponding to the matching colored receptive fields (left). Neurons responded mainly to light touch; circles with dark borders correspond to cells that responded to limb movements (active and passive).</p>
<p id="P61">
<bold>Supplementary Figure 3</bold>
. Evolution of performance over training (Monkey D). Behavioral performance measures are shown as a function of the cumulative number of VIS+ICMS trials performed (training and testing). The data, collected during testing sessions, were smoothed for clarity (Gaussian window with standard deviation of 2.8 training sessions, translating to approximately 2,500 training trials for Monkey D). The visual coherence on training trials was decreased across training sessions (indicated by gray bars at the bottom of the figure and vertical gray lines at the transitions). The left, thin green line denotes the onset of ICMS–only trials, where target sizes were temporarily larger than in the other trial conditions; the right, thick green line denotes the beginning of ICMS–only trials with targets of standard size. (
<bold>a</bold>
) percent correct trials; (
<bold>b</bold>
) number of movement sub–segments, a measure of online error corrections; (
<bold>c</bold>
) movement time, normalized by the initial distance to the reach target; (
<bold>d</bold>
) path length, normalized as in
<bold>c</bold>
. See
<xref rid="SD2" ref-type="supplementary-material">Supplementary Table 1</xref>
for additional details on the training and testing schedule.</p>
<p id="P62">
<bold>Supplementary Figure 4</bold>
. Sample movement paths from randomly selected successful trials for Monkeys D (
<bold>a</bold>
) and F (
<bold>b</bold>
) for seven feedback conditions. Each reach begins at the fixed central starting point and ends within the unseen reach target (here depicted in gray for clarity).</p>
<p id="P63">
<bold>Supplementary Figure 5</bold>
. Additional analyses of initial angle. (
<bold>a</bold>
), Standard deviation of initial angle (relative to target angle) for the different trial types and visual coherences. Plots follow the same conventions as Figure 5a in the main text. This figure demonstrates that the qualitative results of the main text—in particular the good correspondence with the minimum variance model of sensory integration—are not an artifact of subtracting the smoothed estimates of mean initial angle used there. Error bars denote standard error of the mean. (
<bold>b</bold>
) Smoothed mean initial angle for Monkey F, relative to the target angle. Monkey F did not exhibit the marked differences in mean initial angle across feedback types that were observed for Monkey D (
<xref rid="F4" ref-type="fig">Figure 4a</xref>
, main text). (
<bold>c</bold>
) Visual cue weighting for Monkey F in the VIS+ICMS trials, as a function of dot–field coherence. The results are consistent with the minimum variance model; however, the analysis has poor statistical power due to the similarity in mean initial angle across feedback types. Blue filled circles: visual cue weighting estimated from data; black unfilled circles: minimum variance model prediction; error bars: bootstrapped estimates of standard error.</p>
</caption>
<media xlink:href="NIHMS639101-supplement-1.doc" orientation="portrait" xlink:type="simple" id="d37e1639" position="anchor"></media>
</supplementary-material>
<supplementary-material content-type="local-data" id="SD2">
<label>2</label>
<caption>
<p id="P64">
<bold>Supplementary Table 1:</bold>
Monkey D training schedule (complements
<xref rid="SD1" ref-type="supplementary-material">Supplementary Figure 3</xref>
). Trial numbers are rounded to nearest 1000.</p>
<p id="P65">
<bold>Supplementary Table 2:</bold>
Monkey F training schedule (complements
<xref rid="F3" ref-type="fig">Figure 3</xref>
). Trial numbers are rounded to nearest 1000.</p>
<p id="P66">
<bold>Supplementary Table 3.</bold>
Comparison of behavioral metrics across feedback types. Significant differences between conditions are circled, the color of which denotes the modality with the significantly lower value. The p–values (shown here) were calculated using a two–sided, non–parametric permutation test (N
<sub>permutations</sub>
= 10,000) and were corrected for multiple comparisons using the Holm–Bonferroni method (α = 0.05, m = 15). In the tables below n (VIS/VIS+ICMS/ICMS) refers to the number of samples for each condition (trial type by visual coherence). (
<bold>a</bold>
) Number of movement sub–segments, (
<bold>b</bold>
) Normalized path length, and (
<bold>c</bold>
) Normalized movement time.</p>
<p id="P67">
<bold>Supplementary Table 4.</bold>
Comparison of initial movement angle variances across feedback types. Significant differences between conditions are circled, the color of which denotes the modality with the significantly lower value. The p–values (shown here) were calculated using a two–sided, non–parametric permutation test (N
<sub>permutations</sub>
= 10,000) and were corrected for multiple comparisons using the Holm–Bonferroni method (α = 0.05, m = 15). In the tables below n (VIS/VIS+ICMS/ICMS) refers to the number of samples for each condition (trial type by visual coherence). (
<bold>a</bold>
) Biased initial angle variance (as in
<xref rid="SD1" ref-type="supplementary-material">Supplementary Fig. 5a</xref>
), and (
<bold>b</bold>
) Unbiased initial angle variance (as in
<xref rid="F5" ref-type="fig">Fig. 5a</xref>
).</p>
</caption>
<media xlink:href="NIHMS639101-supplement-2.pdf" orientation="portrait" xlink:type="simple" id="d37e1693" position="anchor"></media>
</supplementary-material>
<supplementary-material content-type="local-data" id="SD3">
<label>3</label>
<caption>
<p id="P57">
<bold>Supplementary Video 1.</bold>
Sample trial of Monkey F reaching with VIS+ICMS feedback. This video was generated from behavioral data collected on 29 December 2013. The left panel shows a simulated version of a portion of the virtual reality environment that the monkey viewed during the trial: position of the fingertip (filled white circle), dot field (here displayed at 100% coherence for clarity; the coherence presented to the monkey for this trial was 50%), and start target (open green circle, radius 10 mm). The right panel shows the patterns of ICMS delivered during the trial, where each vertical line denotes a pulse of stimulation from an electrode with a preferred direction indicated by the corresponding red arrow at left. Stimulation rasters shown have been subsampled for clarity. In the video, the monkey is shown first acquiring the start target. After an instructed delay interval, during which VIS+ICMS information about the instructed movement vector becomes available, a go cue sounds (noted by text), and the monkey completes the reach to the unseen, 12 mm radius reach target (illustrated here with a dashed white circle).</p>
<p id="P58">
<bold>Supplementary Video 2.</bold>
Sample trial of Monkey F reaching with only ICMS feedback. This video was generated from behavioral data collected on 29 December 2013. The left panel shows a simulated version of a portion of the virtual reality environment that the monkey viewed during the trial: position of the fingertip (filled white circle), dot field (here displayed at 100% coherence for clarity; the coherence presented to the monkey for this trial was 50%), and start target (open green circle, radius 10 mm). The right panel shows the patterns of ICMS delivered during the trial, where each vertical line denotes a pulse of stimulation from an electrode with a preferred direction indicated by the corresponding red arrow at left. Stimulation rasters shown have been subsampled for clarity. In the video, the monkey is shown first acquiring the start target. After an instructed delay interval, during which ICMS information about the instructed movement vector becomes available, a go cue sounds (noted by text), and the monkey completes the reach to the unseen, 12 mm radius reach target (illustrated here with a dashed white circle).</p>
</caption>
<media xlink:href="NIHMS639101-supplement-3.pdf" orientation="portrait" xlink:type="simple" id="d37e1706" position="anchor"></media>
</supplementary-material>
</sec>
</body>
<back>
<ack id="S29">
<p>We thank M.R. Fellows for initial behavioral training; R.R. Torres for suggesting the 2AFC task for ICMS detection; A. Leggitt for help with data analysis; K.B. Andrews and K. MacLeod for animal–related support; and J.G. Makin, A. Yazdan-Shahmorad, T.L. Hanson for insightful discussion and comments on the manuscript. This research was supported by the Defense Advanced Research Projects Agency (DARPA) Reorganization and Plasticity to Accelerate Injury Recovery (REPAIR; N66001–10–C–2010) and the NIH NEI (EY015679).</p>
</ack>
<fn-group>
<fn id="FN1" fn-type="con">
<p>
<bold>Author Contributions:</bold>
MCD and PNS designed the experiments; MCD and JEO developed and tested multi–electrode stimulation capabilities, including behavioral validation; MCD performed the experiments; MCD and PNS analyzed the data; MCD, PNS, and JEO wrote the manuscript.</p>
</fn>
</fn-group>
<ref-list>
<ref id="R1">
<label>1</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sober</surname>
<given-names>SJ</given-names>
</name>
<name>
<surname>Sabes</surname>
<given-names>PN</given-names>
</name>
</person-group>
<article-title>Multisensory integration during motor planning</article-title>
<source>The Journal of neuroscience : the official journal of the Society for Neuroscience</source>
<volume>23</volume>
<fpage>6982</fpage>
<lpage>6992</lpage>
<year>2003</year>
<pub-id pub-id-type="pmid">12904459</pub-id>
</element-citation>
</ref>
<ref id="R2">
<label>2</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sober</surname>
<given-names>SJ</given-names>
</name>
<name>
<surname>Sabes</surname>
<given-names>PN</given-names>
</name>
</person-group>
<article-title>Flexible strategies for sensory integration during motor planning</article-title>
<source>Nature neuroscience</source>
<volume>8</volume>
<fpage>490</fpage>
<lpage>497</lpage>
<pub-id pub-id-type="doi">10.1038/nn1427</pub-id>
<year>2005</year>
<pub-id pub-id-type="pmid">15793578</pub-id>
</element-citation>
</ref>
<ref id="R3">
<label>3</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>van Beers</surname>
<given-names>RJ</given-names>
</name>
<name>
<surname>Sittig</surname>
<given-names>AC</given-names>
</name>
<name>
<surname>Gon</surname>
<given-names>JJ</given-names>
</name>
</person-group>
<article-title>Integration of proprioceptive and visual position–information: An experimentally supported model</article-title>
<source>J Neurophysiol</source>
<volume>81</volume>
<fpage>1355</fpage>
<lpage>1364</lpage>
<year>1999</year>
<pub-id pub-id-type="pmid">10085361</pub-id>
</element-citation>
</ref>
<ref id="R4">
<label>4</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ernst</surname>
<given-names>MO</given-names>
</name>
<name>
<surname>Banks</surname>
<given-names>MS</given-names>
</name>
</person-group>
<article-title>Humans integrate visual and haptic information in a statistically optimal fashion</article-title>
<source>Nature</source>
<volume>415</volume>
<fpage>429</fpage>
<lpage>433</lpage>
<pub-id pub-id-type="doi">10.1038/415429a</pub-id>
<year>2002</year>
<pub-id pub-id-type="pmid">11807554</pub-id>
</element-citation>
</ref>
<ref id="R5">
<label>5</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Morgan</surname>
<given-names>ML</given-names>
</name>
<name>
<surname>Deangelis</surname>
<given-names>GC</given-names>
</name>
<name>
<surname>Angelaki</surname>
<given-names>DE</given-names>
</name>
</person-group>
<article-title>Multisensory integration in macaque visual cortex depends on cue reliability</article-title>
<source>Neuron</source>
<volume>59</volume>
<fpage>662</fpage>
<lpage>673</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuron.2008.06.024</pub-id>
<year>2008</year>
<pub-id pub-id-type="pmid">18760701</pub-id>
</element-citation>
</ref>
<ref id="R6">
<label>6</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>McGuire</surname>
<given-names>LM</given-names>
</name>
<name>
<surname>Sabes</surname>
<given-names>PN</given-names>
</name>
</person-group>
<article-title>Sensory transformations and the use of multiple reference frames for reach planning</article-title>
<source>Nature neuroscience</source>
<volume>12</volume>
<fpage>1056</fpage>
<lpage>1061</lpage>
<pub-id pub-id-type="doi">10.1038/nn.2357</pub-id>
<year>2009</year>
<pub-id pub-id-type="pmid">19597495</pub-id>
</element-citation>
</ref>
<ref id="R7">
<label>7</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sainburg</surname>
<given-names>RL</given-names>
</name>
<name>
<surname>Poizner</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Ghez</surname>
<given-names>C</given-names>
</name>
</person-group>
<article-title>Loss of proprioception produces deficits in interjoint coordination</article-title>
<source>Journal of Neurophysiology</source>
<volume>70</volume>
<fpage>2136</fpage>
<lpage>2147</lpage>
<year>1993</year>
<pub-id pub-id-type="pmid">8294975</pub-id>
</element-citation>
</ref>
<ref id="R8">
<label>8</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sainburg</surname>
<given-names>RL</given-names>
</name>
<name>
<surname>Ghilardi</surname>
<given-names>MF</given-names>
</name>
<name>
<surname>Poizner</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Ghez</surname>
<given-names>C</given-names>
</name>
</person-group>
<article-title>Control of limb dynamics in normal subjects and patients without proprioception</article-title>
<source>Journal of neurophysiology</source>
<volume>73</volume>
<fpage>820</fpage>
<lpage>835</lpage>
<year>1995</year>
<pub-id pub-id-type="pmid">7760137</pub-id>
</element-citation>
</ref>
<ref id="R9">
<label>9</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Suminski</surname>
<given-names>AJ</given-names>
</name>
<name>
<surname>Tkach</surname>
<given-names>DC</given-names>
</name>
<name>
<surname>Fagg</surname>
<given-names>AH</given-names>
</name>
<name>
<surname>Hatsopoulos</surname>
<given-names>NG</given-names>
</name>
</person-group>
<article-title>Incorporating feedback from multiple sensory modalities enhances brain–machine interface control</article-title>
<source>J Neurosci</source>
<volume>30</volume>
<fpage>16777</fpage>
<lpage>16787</lpage>
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.3967-10.2010</pub-id>
<year>2010</year>
<pub-id pub-id-type="pmid">21159949</pub-id>
</element-citation>
</ref>
<ref id="R10">
<label>10</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fagg</surname>
<given-names>AH</given-names>
</name>
<etal></etal>
</person-group>
<article-title>Biomimetic brain machine interfaces for the control of movement</article-title>
<source>J Neurosci</source>
<volume>27</volume>
<fpage>11842</fpage>
<lpage>11846</lpage>
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.3516-07.2007</pub-id>
<year>2007</year>
<pub-id pub-id-type="pmid">17978021</pub-id>
</element-citation>
</ref>
<ref id="R11">
<label>11</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Choi</surname>
<given-names>JS</given-names>
</name>
<name>
<surname>DiStasio</surname>
<given-names>MM</given-names>
</name>
<name>
<surname>Brockmeier</surname>
<given-names>AJ</given-names>
</name>
<name>
<surname>Francis</surname>
<given-names>JT</given-names>
</name>
</person-group>
<article-title>An electric field model for prediction of somatosensory (S1) cortical field potentials induced by ventral posterior lateral (VPL) thalamic microstimulation</article-title>
<source>IEEE transactions on neural systems and rehabilitation engineering : a publication of the IEEE Engineering in Medicine and Biology Society</source>
<volume>20</volume>
<fpage>161</fpage>
<lpage>169</lpage>
<pub-id pub-id-type="doi">10.1109/TNSRE.2011.2181417</pub-id>
<year>2012</year>
</element-citation>
</ref>
<ref id="R12">
<label>12</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Daly</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Aghagolzadeh</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Oweiss</surname>
<given-names>K</given-names>
</name>
</person-group>
<article-title>Optimal space–time precoding of artificial sensory feedback through mutichannel microstimulation in bi–directional brain–machine interfaces</article-title>
<source>Journal of neural engineering</source>
<volume>9</volume>
<fpage>065004</fpage>
<lpage>065004</lpage>
<pub-id pub-id-type="doi">10.1088/1741-2560/9/6/065004</pub-id>
<year>2012</year>
<pub-id pub-id-type="pmid">23187009</pub-id>
</element-citation>
</ref>
<ref id="R13">
<label>13</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Weber</surname>
<given-names>DJ</given-names>
</name>
<name>
<surname>Friesen</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Miller</surname>
<given-names>LE</given-names>
</name>
</person-group>
<article-title>Interfacing the somatosensory system to restore touch and proprioception: essential considerations</article-title>
<source>Journal of motor behavior</source>
<volume>44</volume>
<fpage>403</fpage>
<lpage>418</lpage>
<pub-id pub-id-type="doi">10.1080/00222895.2012.735283</pub-id>
<year>2012</year>
<pub-id pub-id-type="pmid">23237464</pub-id>
</element-citation>
</ref>
<ref id="R14">
<label>14</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tabot</surname>
<given-names>GA</given-names>
</name>
<etal></etal>
</person-group>
<article-title>Restoring the sense of touch with a prosthetic hand through a brain interface</article-title>
<source>Proceedings of the National Academy of Sciences of the United States of America</source>
<volume>110</volume>
<fpage>18279</fpage>
<lpage>18284</lpage>
<pub-id pub-id-type="doi">10.1073/pnas.1221113110</pub-id>
<year>2013</year>
<pub-id pub-id-type="pmid">24127595</pub-id>
</element-citation>
</ref>
<ref id="R15">
<label>15</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Held</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Hein</surname>
<given-names>A</given-names>
</name>
</person-group>
<article-title>Movement–Produced Stimulation in the Development of Visually Guided Behavior</article-title>
<source>Journal of comparative and physiological psychology</source>
<volume>56</volume>
<fpage>872</fpage>
<lpage>876</lpage>
<year>1963</year>
<pub-id pub-id-type="pmid">14050177</pub-id>
</element-citation>
</ref>
<ref id="R16">
<label>16</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Xu</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Yu</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Rowland</surname>
<given-names>BA</given-names>
</name>
<name>
<surname>Stanford</surname>
<given-names>TR</given-names>
</name>
<name>
<surname>Stein</surname>
<given-names>BE</given-names>
</name>
</person-group>
<article-title>Incorporating cross–modal statistics in the development and maintenance of multisensory integration</article-title>
<source>The Journal of neuroscience : the official journal of the Society for Neuroscience</source>
<volume>32</volume>
<fpage>2287</fpage>
<lpage>2298</lpage>
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.4304-11.2012</pub-id>
<year>2012</year>
<pub-id pub-id-type="pmid">22396404</pub-id>
</element-citation>
</ref>
<ref id="R17">
<label>17</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Burge</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Ernst</surname>
<given-names>MO</given-names>
</name>
<name>
<surname>Banks</surname>
<given-names>MS</given-names>
</name>
</person-group>
<article-title>The statistical determinants of adaptation rate in human reaching</article-title>
<source>Journal of Vision</source>
<volume>8</volume>
<fpage>1</fpage>
<lpage>19</lpage>
<pub-id pub-id-type="doi">10.1167/8.4.20</pub-id>
<year>2008</year>
</element-citation>
</ref>
<ref id="R18">
<label>18</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Simani</surname>
<given-names>MC</given-names>
</name>
<name>
<surname>McGuire</surname>
<given-names>LM</given-names>
</name>
<name>
<surname>Sabes</surname>
<given-names>PN</given-names>
</name>
</person-group>
<article-title>Visual–shift adaptation is composed of separable sensory and task–dependent effects</article-title>
<source>J Neurophysiol</source>
<volume>98</volume>
<fpage>2827</fpage>
<lpage>2841</lpage>
<pub-id pub-id-type="doi">10.1152/jn.00290.2007</pub-id>
<year>2007</year>
<pub-id pub-id-type="pmid">17728389</pub-id>
</element-citation>
</ref>
<ref id="R19">
<label>19</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zaidel</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Turner</surname>
<given-names>AH</given-names>
</name>
<name>
<surname>Angelaki</surname>
<given-names>DE</given-names>
</name>
</person-group>
<article-title>Multisensory calibration is independent of cue reliability</article-title>
<source>J Neurosci</source>
<volume>31</volume>
<fpage>13949</fpage>
<lpage>13962</lpage>
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.2732-11.2011</pub-id>
<year>2011</year>
<pub-id pub-id-type="pmid">21957256</pub-id>
</element-citation>
</ref>
<ref id="R20">
<label>20</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Makin</surname>
<given-names>JG</given-names>
</name>
<name>
<surname>Fellows</surname>
<given-names>MR</given-names>
</name>
<name>
<surname>Sabes</surname>
<given-names>PN</given-names>
</name>
</person-group>
<article-title>Learning multisensory integration and coordinate transformation via density estimation</article-title>
<source>PLoS computational biology</source>
<volume>9</volume>
<fpage>e1003035</fpage>
<pub-id pub-id-type="doi">10.1371/journal.pcbi.1003035</pub-id>
<year>2013</year>
<pub-id pub-id-type="pmid">23637588</pub-id>
</element-citation>
</ref>
<ref id="R21">
<label>21</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kalaska</surname>
<given-names>JF</given-names>
</name>
</person-group>
<article-title>The representation of arm movements in postcentral and parietal cortex</article-title>
<source>Canadian journal of physiology and pharmacology</source>
<volume>66</volume>
<fpage>455</fpage>
<lpage>463</lpage>
<year>1988</year>
<pub-id pub-id-type="pmid">3048613</pub-id>
</element-citation>
</ref>
<ref id="R22">
<label>22</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kalaska</surname>
<given-names>JF</given-names>
</name>
</person-group>
<article-title>Parietal cortex area 5 and visuomotor behavior</article-title>
<source>Can J Physiol Pharmacol</source>
<volume>74</volume>
<fpage>483</fpage>
<lpage>498</lpage>
<year>1996</year>
<pub-id pub-id-type="pmid">8828894</pub-id>
</element-citation>
</ref>
<ref id="R23">
<label>23</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Block</surname>
<given-names>HJ</given-names>
</name>
<name>
<surname>Bastian</surname>
<given-names>AJ</given-names>
</name>
</person-group>
<article-title>Sensory reweighting in targeted reaching: effects of conscious effort, error history, and target salience</article-title>
<source>J Neurophysiol</source>
<volume>103</volume>
<fpage>206</fpage>
<lpage>217</lpage>
<pub-id pub-id-type="doi">10.1152/jn.90961.2008</pub-id>
<year>2010</year>
<pub-id pub-id-type="pmid">19846617</pub-id>
</element-citation>
</ref>
<ref id="R24">
<label>24</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cheng</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Sabes</surname>
<given-names>PN</given-names>
</name>
</person-group>
<article-title>Calibration of visually–guided reaching is driven by error corrective learning and internal dynamics</article-title>
<source>Journal of neurophysiology</source>
<volume>97</volume>
<fpage>3057</fpage>
<lpage>3069</lpage>
<pub-id pub-id-type="doi">10.1152/jn.00897.2006.</pub-id>
<year>2007</year>
<pub-id pub-id-type="pmid">17202230</pub-id>
</element-citation>
</ref>
<ref id="R25">
<label>25</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Izawa</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Shadmehr</surname>
<given-names>R</given-names>
</name>
</person-group>
<article-title>Learning from sensory and reward prediction errors during motor adaptation</article-title>
<source>PLoS computational biology</source>
<volume>7</volume>
<fpage>e1002012</fpage>
<lpage>e1002012</lpage>
<pub-id pub-id-type="doi">10.1371/journal.pcbi.1002012</pub-id>
<year>2011</year>
<pub-id pub-id-type="pmid">21423711</pub-id>
</element-citation>
</ref>
<ref id="R26">
<label>26</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kalaska</surname>
<given-names>JF</given-names>
</name>
<name>
<surname>Cohen</surname>
<given-names>DA</given-names>
</name>
<name>
<surname>Prud’homme</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Hyde</surname>
<given-names>ML</given-names>
</name>
</person-group>
<article-title>Parietal area 5 neuronal activity encodes movement kinematics, not movement dynamics</article-title>
<source>Experimental brain research</source>
<volume>80</volume>
<fpage>351</fpage>
<lpage>364</lpage>
<year>1990</year>
<pub-id pub-id-type="pmid">2113482</pub-id>
</element-citation>
</ref>
<ref id="R27">
<label>27</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Batista</surname>
<given-names>AP</given-names>
</name>
<name>
<surname>Buneo</surname>
<given-names>CA</given-names>
</name>
<name>
<surname>Snyder</surname>
<given-names>LH</given-names>
</name>
<name>
<surname>Andersen</surname>
<given-names>RA</given-names>
</name>
</person-group>
<article-title>Reach plans in eye–centered coordinates</article-title>
<source>Science (New York, NY)</source>
<volume>285</volume>
<fpage>257</fpage>
<lpage>260</lpage>
<year>1999</year>
</element-citation>
</ref>
<ref id="R28">
<label>28</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Battaglia–Mayer</surname>
<given-names>A</given-names>
</name>
<etal></etal>
</person-group>
<article-title>Early coding of reaching in the parietooccipital cortex</article-title>
<source>J Neurophysiol</source>
<volume>83</volume>
<fpage>2374</fpage>
<lpage>2391</lpage>
<year>2000</year>
<pub-id pub-id-type="pmid">10758140</pub-id>
</element-citation>
</ref>
<ref id="R29">
<label>29</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Graziano</surname>
<given-names>MS</given-names>
</name>
<name>
<surname>Cooke</surname>
<given-names>DF</given-names>
</name>
<name>
<surname>Taylor</surname>
<given-names>CS</given-names>
</name>
</person-group>
<article-title>Coding the location of the arm by sight</article-title>
<source>Science (New York, NY)</source>
<volume>290</volume>
<fpage>1782</fpage>
<lpage>1786</lpage>
<year>2000</year>
</element-citation>
</ref>
<ref id="R30">
<label>30</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bremner</surname>
<given-names>LR</given-names>
</name>
<name>
<surname>Andersen</surname>
<given-names>RA</given-names>
</name>
</person-group>
<article-title>Coding of the reach vector in parietal area 5d</article-title>
<source>Neuron</source>
<volume>75</volume>
<fpage>342</fpage>
<lpage>351</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuron.2012.03.041</pub-id>
<year>2012</year>
<pub-id pub-id-type="pmid">22841318</pub-id>
</element-citation>
</ref>
<ref id="R31">
<label>31</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Deneve</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Latham</surname>
<given-names>PE</given-names>
</name>
<name>
<surname>Pouget</surname>
<given-names>A</given-names>
</name>
</person-group>
<article-title>Efficient computation and cue integration with noisy population codes</article-title>
<source>Nature neuroscience</source>
<volume>4</volume>
<fpage>826</fpage>
<lpage>831</lpage>
<pub-id pub-id-type="doi">10.1038/90541</pub-id>
<year>2001</year>
<pub-id pub-id-type="pmid">11477429</pub-id>
</element-citation>
</ref>
<ref id="R32">
<label>32</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ma</surname>
<given-names>WJ</given-names>
</name>
<name>
<surname>Beck</surname>
<given-names>JM</given-names>
</name>
<name>
<surname>Latham</surname>
<given-names>PE</given-names>
</name>
<name>
<surname>Pouget</surname>
<given-names>A</given-names>
</name>
</person-group>
<article-title>Bayesian inference with probabilistic population codes</article-title>
<source>Nature neuroscience</source>
<volume>9</volume>
<fpage>1432</fpage>
<lpage>1438</lpage>
<pub-id pub-id-type="doi">10.1038/nn1790</pub-id>
<year>2006</year>
<pub-id pub-id-type="pmid">17057707</pub-id>
</element-citation>
</ref>
<ref id="R33">
<label>33</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chang</surname>
<given-names>SW</given-names>
</name>
<name>
<surname>Snyder</surname>
<given-names>LH</given-names>
</name>
</person-group>
<article-title>Idiosyncratic and systematic aspects of spatial representations in the macaque parietal cortex</article-title>
<source>Proceedings of the National Academy of Sciences of the United States of America</source>
<volume>107</volume>
<fpage>7951</fpage>
<lpage>7956</lpage>
<pub-id pub-id-type="doi">10.1073/pnas.0913209107</pub-id>
<year>2010</year>
<pub-id pub-id-type="pmid">20375282</pub-id>
</element-citation>
</ref>
<ref id="R34">
<label>34</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Marzocchi</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Breveglieri</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Galletti</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Fattori</surname>
<given-names>P</given-names>
</name>
</person-group>
<article-title>Reaching activity in parietal area V6A of macaque: eye influence on arm activity or retinocentric coding of reaching movements?</article-title>
<source>The European journal of neuroscience</source>
<volume>27</volume>
<fpage>775</fpage>
<lpage>789</lpage>
<pub-id pub-id-type="doi">10.1111/j.1460-9568.2008.06021.x</pub-id>
<year>2008</year>
<pub-id pub-id-type="pmid">18279330</pub-id>
</element-citation>
</ref>
<ref id="R35">
<label>35</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>McGuire</surname>
<given-names>LM</given-names>
</name>
<name>
<surname>Sabes</surname>
<given-names>PN</given-names>
</name>
</person-group>
<article-title>Heterogeneous representations in the superior parietal lobule are common across reaches to visual and proprioceptive targets</article-title>
<source>J Neurosci</source>
<volume>31</volume>
<fpage>6661</fpage>
<lpage>6673</lpage>
<pub-id pub-id-type="doi">10.1523/jneurosci.2921-10.2011</pub-id>
<year>2011</year>
<pub-id pub-id-type="pmid">21543595</pub-id>
</element-citation>
</ref>
<ref id="R36">
<label>36</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wise</surname>
<given-names>SP</given-names>
</name>
<name>
<surname>Boussaoud</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Johnson</surname>
<given-names>PB</given-names>
</name>
<name>
<surname>Caminiti</surname>
<given-names>R</given-names>
</name>
</person-group>
<article-title>Premotor and parietal cortex: corticocortical connectivity and combinatorial computations</article-title>
<source>Annu Rev Neurosci</source>
<volume>20</volume>
<fpage>25</fpage>
<lpage>42</lpage>
<year>1997</year>
<pub-id pub-id-type="pmid">9056706</pub-id>
</element-citation>
</ref>
<ref id="R37">
<label>37</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Andersen</surname>
<given-names>RA</given-names>
</name>
<name>
<surname>Buneo</surname>
<given-names>CA</given-names>
</name>
</person-group>
<article-title>Intentional maps in posterior parietal cortex</article-title>
<source>Annu Rev Neurosci</source>
<volume>25</volume>
<fpage>189</fpage>
<lpage>220</lpage>
<pub-id pub-id-type="doi">10.1146/annurev.neuro.25.112701.142922</pub-id>
<year>2002</year>
<pub-id pub-id-type="pmid">12052908</pub-id>
</element-citation>
</ref>
<ref id="R38">
<label>38</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Battaglia–Mayer</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Caminiti</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Lacquaniti</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Zago</surname>
<given-names>M</given-names>
</name>
</person-group>
<article-title>Multiple levels of representation of reaching in the parieto–frontal network</article-title>
<source>Cereb Cortex</source>
<volume>13</volume>
<fpage>1009</fpage>
<lpage>1022</lpage>
<year>2003</year>
<pub-id pub-id-type="pmid">12967918</pub-id>
</element-citation>
</ref>
<ref id="R39">
<label>39</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sabes</surname>
<given-names>PN</given-names>
</name>
</person-group>
<article-title>Sensory integration for reaching: models of optimality in the context of behavior and the underlying neural circuits</article-title>
<source>Prog Brain Res</source>
<volume>191</volume>
<fpage>195</fpage>
<lpage>209</lpage>
<pub-id pub-id-type="doi">10.1016/B978-0-444-53752-2.00004-7</pub-id>
<year>2011</year>
<pub-id pub-id-type="pmid">21741553</pub-id>
</element-citation>
</ref>
<ref id="R40">
<label>40</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Yttri</surname>
<given-names>EA</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Snyder</surname>
<given-names>LH</given-names>
</name>
</person-group>
<article-title>Lesions of cortical area LIP affect reach onset only when the reach is accompanied by a saccade, revealing an active eye–hand coordination circuit</article-title>
<source>Proceedings of the National Academy of Sciences of the United States of America</source>
<volume>110</volume>
<fpage>2371</fpage>
<lpage>2376</lpage>
<pub-id pub-id-type="doi">10.1073/pnas.1220508110</pub-id>
<year>2013</year>
<pub-id pub-id-type="pmid">23341626</pub-id>
</element-citation>
</ref>
<ref id="R41">
<label>41</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Levy</surname>
<given-names>I</given-names>
</name>
<name>
<surname>Schluppeck</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Heeger</surname>
<given-names>DJ</given-names>
</name>
<name>
<surname>Glimcher</surname>
<given-names>PW</given-names>
</name>
</person-group>
<article-title>Specificity of human cortical areas for reaches and saccades</article-title>
<source>J Neurosci</source>
<volume>27</volume>
<fpage>4687</fpage>
<lpage>4696</lpage>
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.0459-07.2007</pub-id>
<year>2007</year>
<pub-id pub-id-type="pmid">17460081</pub-id>
</element-citation>
</ref>
<ref id="R42">
<label>42</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pearson</surname>
<given-names>RC</given-names>
</name>
<name>
<surname>Powell</surname>
<given-names>TP</given-names>
</name>
</person-group>
<article-title>The cortico–cortical connections to area 5 of the parietal lobe from the primary somatic sensory cortex of the monkey</article-title>
<source>Proceedings of the Royal Society of London. Series B, Biological Sciences</source>
<volume>200</volume>
<fpage>103</fpage>
<lpage>108</lpage>
<year>1978</year>
</element-citation>
</ref>
<ref id="R43">
<label>43</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lewis</surname>
<given-names>JW</given-names>
</name>
<name>
<surname>Van Essen</surname>
<given-names>DC</given-names>
</name>
</person-group>
<article-title>Corticocortical connections of visual, sensorimotor, and multimodal processing areas in the parietal lobe of the macaque monkey</article-title>
<source>The Journal of comparative neurology</source>
<volume>428</volume>
<fpage>112</fpage>
<lpage>137</lpage>
<year>2000</year>
<pub-id pub-id-type="pmid">11058227</pub-id>
</element-citation>
</ref>
<ref id="R44">
<label>44</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Redding</surname>
<given-names>GM</given-names>
</name>
<name>
<surname>Wallace</surname>
<given-names>B</given-names>
</name>
</person-group>
<article-title>Strategic calibration and spatial alignment: a model from prism adaptation</article-title>
<source>Journal of motor behavior</source>
<volume>34</volume>
<fpage>126</fpage>
<lpage>138</lpage>
<pub-id pub-id-type="doi">10.1080/00222890209601935</pub-id>
<year>2002</year>
<pub-id pub-id-type="pmid">12057886</pub-id>
</element-citation>
</ref>
<ref id="R45">
<label>45</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>van Beers</surname>
<given-names>RJ</given-names>
</name>
<name>
<surname>Sittig</surname>
<given-names>AC</given-names>
</name>
<name>
<surname>Denier van der Gon</surname>
<given-names>JJ</given-names>
</name>
</person-group>
<article-title>How humans combine simultaneous proprioceptive and visual position information</article-title>
<source>Experimental brain research. Experimentelle Hirnforschung. Expérimentation cérébrale</source>
<volume>111</volume>
<fpage>253</fpage>
<lpage>261</lpage>
<year>1996</year>
<pub-id pub-id-type="pmid">8891655</pub-id>
</element-citation>
</ref>
<ref id="R46">
<label>46</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ghez</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Sainburg</surname>
<given-names>R</given-names>
</name>
</person-group>
<article-title>Proprioceptive control of interjoint coordination</article-title>
<source>Can J Physiol Pharmacol</source>
<volume>73</volume>
<fpage>273</fpage>
<lpage>284</lpage>
<year>1995</year>
<pub-id pub-id-type="pmid">7621366</pub-id>
</element-citation>
</ref>
<ref id="R47">
<label>47</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Novak</surname>
<given-names>KE</given-names>
</name>
<name>
<surname>Miller</surname>
<given-names>LE</given-names>
</name>
<name>
<surname>Houk</surname>
<given-names>JC</given-names>
</name>
</person-group>
<article-title>The use of overlapping submovements in the control of rapid hand movements</article-title>
<source>Experimental brain research</source>
<volume>144</volume>
<fpage>351</fpage>
<lpage>364</lpage>
<pub-id pub-id-type="doi">10.1007/s00221-002-1060-6</pub-id>
<year>2002</year>
<pub-id pub-id-type="pmid">12021817</pub-id>
</element-citation>
</ref>
<ref id="R48">
<label>48</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Efron</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Tibshirani</surname>
<given-names>RJ</given-names>
</name>
</person-group>
<source>An Introduction to the Bootstrap</source>
<publisher-name>Chapman & Hall</publisher-name>
<year>1993</year>
</element-citation>
</ref>
</ref-list>
</back>
<floats-group>
<fig id="F1" orientation="portrait" position="float">
<label>Figure 1</label>
<caption>
<p>Behavioral task and sensory feedback. (
<bold>a</bold>
) Timeline of a behavioral trial (see Online Methods for details). (
<bold>b</bold>
) Visual feedback of the instantaneous movement vector (black arrow) takes the form of a random moving–dot flow–field (“dot–field”). The coherence of the dot–field—the percentage of dots moving in the same direction—determines its reliability. (
<bold>c</bold>
) Implantation site of stimulating electrode arrays for monkeys D (black) and F (blue). CS–central sulcus; IPS–intraparietal sulcus. Right: the assigned PD of each stimulating electrode is overlaid on its location within the array. (
<bold>d</bold>
) An example ICMS trial showing the movement vector at the beginning of the reach (black arrow) and the monkey’s subsequent movement path (blue). At right: ICMS patterns delivered during the trial; each row represents the time–varying stimulation pattern of the electrode with the preferred direction (PD) indicated at left (black arrow). Vermillion tick marks denote biphasic stimulation pulses, which are shown subsampled for clarity. (
<bold>e</bold>
) Inset: the instantaneous movement vectors encoded at two time–points during the reach are shown as solid and dashed black arrows. Below, the pattern of stimulation encoding each movement vector is shown across electrodes; arrowheads indicate the PD of each electrode.</p>
</caption>
<graphic xlink:href="nihms639101f1"></graphic>
</fig>
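Note: the caption above describes the ICMS encoding only qualitatively; the exact mapping is defined in the article's Online Methods, which this record does not reproduce. As a hedged sketch only, a standard cosine–tuning scheme consistent with the caption would set the instantaneous pulse rate of electrode i from the angle between the movement vector and that electrode's preferred direction:

$$ f_i(t) = g\big(\lVert \mathbf{v}(t) \rVert\big)\,\frac{1 + \cos\big(\theta_{v}(t) - \mathrm{PD}_i\big)}{2} $$

Here v(t) is the instantaneous hand–to–target movement vector, theta_v(t) its direction, PD_i the preferred direction assigned to electrode i, and g(.) an assumed monotonic gain scaling overall stimulation with distance; f_i, g, and the cosine tuning curve itself are illustrative assumptions, not the paper's stated equations.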
<fig id="F2" orientation="portrait" position="float">
<label>Figure 2</label>
<caption>
<p>Comparison of task performance across sensory feedback conditions. Behavioral performance measures are averaged across the last seven testing sessions for each monkey, shown for each sensory feedback type and as a function of visual coherence (for VIS and VIS+ICMS trials). Error bars denote bootstrapped standard error of the mean. The ICMS data points, which are independent of visual coherence, are extended across the plot to aid visual comparison. (
<bold>a</bold>
) number of movement sub–segments; (
<bold>b</bold>
) movement path length, normalized by the initial distance to the reach target; (
<bold>c</bold>
) movement time, normalized as in
<bold>b</bold>
. See online methods for a detailed description of task performance measures.</p>
</caption>
<graphic xlink:href="nihms639101f2"></graphic>
</fig>
<fig id="F3" orientation="portrait" position="float">
<label>Figure 3</label>
<caption>
<p>Evolution of performance over training (Monkey F). Behavioral performance measures are shown as a function of the cumulative number of VIS+ICMS trials performed (training and testing). The data, collected during testing sessions, were smoothed for clarity (Gaussian window with standard deviation of 2.8 training sessions, translating to approximately 2,800 training trials for Monkey F). The visual coherence on training trials was decreased across training sessions (indicated by gray bars at the bottom of the figure and vertical gray lines at the transitions). The left, thin green line denotes the onset of ICMS–only trials, where target sizes were temporarily larger than in the other trial conditions; the right, thick green line denotes the beginning of ICMS trials with targets of standard size. (
<bold>a</bold>
) percent correct trials; (
<bold>b</bold>
) number of movement sub–segments, a measure of online error corrections; (
<bold>c</bold>
) movement time, normalized by the initial distance to the reach target; (
<bold>d</bold>
) path length, normalized as in
<bold>c</bold>
. See
<xref rid="SD2" ref-type="supplementary-material">Supplementary Table 2</xref>
for additional details on the training and testing schedule.</p>
</caption>
<graphic xlink:href="nihms639101f3"></graphic>
</fig>
<fig id="F4" orientation="portrait" position="float">
<label>Figure 4</label>
<caption>
<p>Monkeys estimate both target distance and direction from sensory feedback. Vermillion points reflect performance with ICMS and purple points reflect performance with VIS. Solid black lines indicate unity, and the thick colored lines are linear fits between the movement and target variables. Fits were performed separately for the distal (target angle [0:π]) and proximal (target angle [−π:0]) halves of the workspace. (
<bold>a</bold>
,
<bold>b</bold>
) Initial movement angle versus target angle for monkeys D and F, respectively. (
<bold>c</bold>
,
<bold>d</bold>
) Initial movement distance versus target distance for monkeys D and F, respectively. The region within the dashed black lines falls within the diameter of the target.</p>
</caption>
<graphic xlink:href="nihms639101f4"></graphic>
</fig>
<fig id="F5" orientation="portrait" position="float">
<label>Figure 5</label>
<caption>
<p>Integration of vision and ICMS minimizes reach variance. (
<bold>a</bold>
) Standard deviation of initial angle relative to target angle as a function of visual coherence for different feedback conditions for each monkey. Standard deviation was calculated after subtracting a smoothed estimate of the mean initial angle (panel b); results were qualitatively unchanged with only the target angle subtracted (i.e., angle computed with respect to a straight–line reach;
<xref rid="SD1" ref-type="supplementary-material">Supplementary Fig. 5</xref>
). Error bars represent standard error of the mean. Dashed black lines indicate model predictions with no motor noise (Online Methods,
<xref rid="FD3" ref-type="disp-formula">Eqn. 3a</xref>
); dotted black lines indicate model predictions with maximal motor noise (Online Methods,
<xref rid="FD4" ref-type="disp-formula">Eqn. 3b</xref>
). (
<bold>b</bold>
) Mean initial angle, with respect to a straight–line reach. Smoothed values are shown on a polar plot as a function of target direction. Data are from Monkey D; for Monkey F, see
<xref rid="SD1" ref-type="supplementary-material">Supplementary Fig. 5b</xref>
. (
<bold>c</bold>
) Visual cue weighting (see Online Methods) for combined VIS+ICMS conditions was closer to zero (ICMS) for low coherence trials and closer to one (VIS) for high coherence trials. Blue filled circles: visual cue weighting estimated from data (Online Methods, Eqn. 6); black unfilled circles: minimum variance model prediction (Online Methods,
<xref rid="FD7" ref-type="disp-formula">Eqn. 5</xref>
). Data are from Monkey D; for Monkey F, see
<xref rid="SD1" ref-type="supplementary-material">Supplementary Figure 5c</xref>
.</p>
</caption>
<graphic xlink:href="nihms639101f5"></graphic>
</fig>
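Note: Figure 5 refers to Eqns. 3a, 3b, 5 and 6 of the article's Online Methods, which are not reproduced in this record. For orientation only, the generic minimum–variance (maximum–likelihood) cue–combination model from which such predictions are typically derived is

$$ \hat{x} = w\,x_{\mathrm{VIS}} + (1-w)\,x_{\mathrm{ICMS}}, \qquad w = \frac{\sigma_{\mathrm{ICMS}}^{2}}{\sigma_{\mathrm{VIS}}^{2} + \sigma_{\mathrm{ICMS}}^{2}}, \qquad \sigma_{\mathrm{comb}}^{2} = \frac{\sigma_{\mathrm{VIS}}^{2}\,\sigma_{\mathrm{ICMS}}^{2}}{\sigma_{\mathrm{VIS}}^{2} + \sigma_{\mathrm{ICMS}}^{2}} \le \min\big(\sigma_{\mathrm{VIS}}^{2}, \sigma_{\mathrm{ICMS}}^{2}\big) $$

Under this model the combined estimate is never more variable than the better single cue, which is the qualitative prediction tested in panel a; the paper's exact equations may differ in detail.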
<fig id="F6" orientation="portrait" position="float">
<label>Figure 6</label>
<caption>
<p>Directed error correction. (<bold>a</bold>) Angle of the second sub–movement as a function of instantaneous movement vector angle in trials that required error correction. Top: ICMS–only trials; Bottom: VIS–only trials at high visual coherence (100% for Monkey D, 50% for Monkey F). Black line: unity. (
<bold>b</bold>
) Error variance (rad
<sup>2</sup>
) in sub–movement angle estimation for ICMS and VIS. Dashed vermillion line denotes chance (random, undirected movement). Error bars represent standard error of the mean.</p>
</caption>
<graphic xlink:href="nihms639101f6"></graphic>
</fig>
</floats-group>
</pmc>
<affiliations>
<list>
<country>
<li>États-Unis</li>
</country>
<region>
<li>Californie</li>
</region>
<settlement>
<li>San Francisco</li>
</settlement>
</list>
<tree>
<noCountry>
<name sortKey="O Oherty, Joseph E" sort="O Oherty, Joseph E" uniqKey="O Oherty J" first="Joseph E." last="O Oherty">Joseph E. O Oherty</name>
</noCountry>
<country name="États-Unis">
<region name="Californie">
<name sortKey="Dadarlat, Maria C" sort="Dadarlat, Maria C" uniqKey="Dadarlat M" first="Maria C." last="Dadarlat">Maria C. Dadarlat</name>
</region>
<name sortKey="Sabes, Philip N" sort="Sabes, Philip N" uniqKey="Sabes P" first="Philip N." last="Sabes">Philip N. Sabes</name>
</country>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Ncbi/Merge
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 003471 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd -nk 003471 | SxmlIndent | more
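
To capture the indented record in a file instead of paging through it, standard shell redirection can be used (the output file name is illustrative):

HfdSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd -nk 003471 | SxmlIndent > 003471.xml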

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Ncbi
   |étape=   Merge
   |type=    RBID
   |clé=     PMC:4282864
   |texte=   A learning–based approach to artificial sensory feedback leads to optimal integration
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/RBID.i   -Sk "pubmed:25420067" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 
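
Assuming NlmPubMed2Wicri writes the generated wiki text to standard output, the same pipeline can be redirected to a file (the file name is illustrative):

HfdIndexSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/RBID.i   -Sk "pubmed:25420067" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 > PMC4282864.wiki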

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024