Exploration server on haptic devices

Warning: this site is under development!
Warning: this site is generated by computational means from raw corpora.
The information is therefore not validated.

Adaptive reliance on the most stable sensory predictions enhances perceptual feature extraction of moving stimuli

Internal identifier: 000509 (Pmc/Curation); previous: 000508; next: 000510

Authors: Neeraj Kumar [India]; Pratik K. Mutha [India]

Source:

RBID: PMC:4808085

Abstract

The prediction of the sensory outcomes of action is thought to be useful for distinguishing self- vs. externally generated sensations, correcting movements when sensory feedback is delayed, and learning predictive models for motor behavior. Here, we show that aspects of another fundamental function—perception—are enhanced when they entail the contribution of predicted sensory outcomes and that this enhancement relies on the adaptive use of the most stable predictions available. We combined a motor-learning paradigm that imposes new sensory predictions with a dynamic visual search task to first show that perceptual feature extraction of a moving stimulus is poorer when it is based on sensory feedback that is misaligned with those predictions. This was possible because our novel experimental design allowed us to override the “natural” sensory predictions present when any action is performed and separately examine the influence of these two sources on perceptual feature extraction. We then show that if the new predictions induced via motor learning are unreliable, rather than just relying on sensory information for perceptual judgments, as is conventionally thought, then subjects adaptively transition to using other stable sensory predictions to maintain greater accuracy in their perceptual judgments. Finally, we show that when sensory predictions are not modified at all, these judgments are sharper when subjects combine their natural predictions with sensory feedback. Collectively, our results highlight the crucial contribution of sensory predictions to perception and also suggest that the brain intelligently integrates the most stable predictions available with sensory information to maintain high fidelity in perceptual decisions.


URL: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4808085
DOI: 10.1152/jn.00850.2015
PubMed: 26823516
PubMed Central: 4808085


The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Adaptive reliance on the most stable sensory predictions enhances perceptual feature extraction of moving stimuli</title>
<author>
<name sortKey="Kumar, Neeraj" sort="Kumar, Neeraj" uniqKey="Kumar N" first="Neeraj" last="Kumar">Neeraj Kumar</name>
<affiliation wicri:level="1">
<nlm:aff wicri:cut="; and" id="aff1">Centre for Cognitive Science, Indian Institute of Technology Gandhinagar, Ahmedabad, Gujarat, India</nlm:aff>
<country xml:lang="fr">Inde</country>
<wicri:regionArea>Centre for Cognitive Science, Indian Institute of Technology Gandhinagar, Ahmedabad, Gujarat</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Mutha, Pratik K" sort="Mutha, Pratik K" uniqKey="Mutha P" first="Pratik K." last="Mutha">Pratik K. Mutha</name>
<affiliation wicri:level="1">
<nlm:aff wicri:cut="; and" id="aff1">Centre for Cognitive Science, Indian Institute of Technology Gandhinagar, Ahmedabad, Gujarat, India</nlm:aff>
<country xml:lang="fr">Inde</country>
<wicri:regionArea>Centre for Cognitive Science, Indian Institute of Technology Gandhinagar, Ahmedabad, Gujarat</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff2">Department of Biological Engineering, Indian Institute of Technology Gandhinagar, Ahmedabad, Gujarat, India</nlm:aff>
<country xml:lang="fr">Inde</country>
<wicri:regionArea>Department of Biological Engineering, Indian Institute of Technology Gandhinagar, Ahmedabad, Gujarat</wicri:regionArea>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">26823516</idno>
<idno type="pmc">4808085</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4808085</idno>
<idno type="RBID">PMC:4808085</idno>
<idno type="doi">10.1152/jn.00850.2015</idno>
<date when="2016">2016</date>
<idno type="wicri:Area/Pmc/Corpus">000509</idno>
<idno type="wicri:Area/Pmc/Curation">000509</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Adaptive reliance on the most stable sensory predictions enhances perceptual feature extraction of moving stimuli</title>
<author>
<name sortKey="Kumar, Neeraj" sort="Kumar, Neeraj" uniqKey="Kumar N" first="Neeraj" last="Kumar">Neeraj Kumar</name>
<affiliation wicri:level="1">
<nlm:aff wicri:cut="; and" id="aff1">Centre for Cognitive Science, Indian Institute of Technology Gandhinagar, Ahmedabad, Gujarat, India</nlm:aff>
<country xml:lang="fr">Inde</country>
<wicri:regionArea>Centre for Cognitive Science, Indian Institute of Technology Gandhinagar, Ahmedabad, Gujarat</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Mutha, Pratik K" sort="Mutha, Pratik K" uniqKey="Mutha P" first="Pratik K." last="Mutha">Pratik K. Mutha</name>
<affiliation wicri:level="1">
<nlm:aff wicri:cut="; and" id="aff1">Centre for Cognitive Science, Indian Institute of Technology Gandhinagar, Ahmedabad, Gujarat, India</nlm:aff>
<country xml:lang="fr">Inde</country>
<wicri:regionArea>Centre for Cognitive Science, Indian Institute of Technology Gandhinagar, Ahmedabad, Gujarat</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff2">Department of Biological Engineering, Indian Institute of Technology Gandhinagar, Ahmedabad, Gujarat, India</nlm:aff>
<country xml:lang="fr">Inde</country>
<wicri:regionArea>Department of Biological Engineering, Indian Institute of Technology Gandhinagar, Ahmedabad, Gujarat</wicri:regionArea>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Journal of Neurophysiology</title>
<idno type="ISSN">0022-3077</idno>
<idno type="eISSN">1522-1598</idno>
<imprint>
<date when="2016">2016</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>The prediction of the sensory outcomes of action is thought to be useful for distinguishing self- vs. externally generated sensations, correcting movements when sensory feedback is delayed, and learning predictive models for motor behavior. Here, we show that aspects of another fundamental function—perception—are enhanced when they entail the contribution of predicted sensory outcomes and that this enhancement relies on the adaptive use of the most stable predictions available. We combined a motor-learning paradigm that imposes new sensory predictions with a dynamic visual search task to first show that perceptual feature extraction of a moving stimulus is poorer when it is based on sensory feedback that is misaligned with those predictions. This was possible because our novel experimental design allowed us to override the “natural” sensory predictions present when any action is performed and separately examine the influence of these two sources on perceptual feature extraction. We then show that if the new predictions induced via motor learning are unreliable, rather than just relying on sensory information for perceptual judgments, as is conventionally thought, then subjects adaptively transition to using other stable sensory predictions to maintain greater accuracy in their perceptual judgments. Finally, we show that when sensory predictions are not modified at all, these judgments are sharper when subjects combine their natural predictions with sensory feedback. Collectively, our results highlight the crucial contribution of sensory predictions to perception and also suggest that the brain intelligently integrates the most stable predictions available with sensory information to maintain high fidelity in perceptual decisions.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Abrams, Ra" uniqKey="Abrams R">RA Abrams</name>
</author>
<author>
<name sortKey="Davoli, Cc" uniqKey="Davoli C">CC Davoli</name>
</author>
<author>
<name sortKey="Du, F" uniqKey="Du F">F Du</name>
</author>
<author>
<name sortKey="Knapp, Wh" uniqKey="Knapp W">WH Knapp</name>
</author>
<author>
<name sortKey="Paull, D" uniqKey="Paull D">D Paull</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bar, M" uniqKey="Bar M">M Bar</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bastian, Aj" uniqKey="Bastian A">AJ Bastian</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Blakemore, Sj" uniqKey="Blakemore S">SJ Blakemore</name>
</author>
<author>
<name sortKey="Frith, Cd" uniqKey="Frith C">CD Frith</name>
</author>
<author>
<name sortKey="Wolpert, Dm" uniqKey="Wolpert D">DM Wolpert</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Blakemore, Sj" uniqKey="Blakemore S">SJ Blakemore</name>
</author>
<author>
<name sortKey="Wolpert, D" uniqKey="Wolpert D">D Wolpert</name>
</author>
<author>
<name sortKey="Frith, C" uniqKey="Frith C">C Frith</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brooks, Jx" uniqKey="Brooks J">JX Brooks</name>
</author>
<author>
<name sortKey="Cullen, Ke" uniqKey="Cullen K">KE Cullen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bubic, A" uniqKey="Bubic A">A Bubic</name>
</author>
<author>
<name sortKey="Von Cramon, Dy" uniqKey="Von Cramon D">DY von Cramon</name>
</author>
<author>
<name sortKey="Schubotz, Ri" uniqKey="Schubotz R">RI Schubotz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Crowell, Ja" uniqKey="Crowell J">JA Crowell</name>
</author>
<author>
<name sortKey="Banks, Ms" uniqKey="Banks M">MS Banks</name>
</author>
<author>
<name sortKey="Shenoy, Kv" uniqKey="Shenoy K">KV Shenoy</name>
</author>
<author>
<name sortKey="Andersen, Ra" uniqKey="Andersen R">RA Andersen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cullen, Ke" uniqKey="Cullen K">KE Cullen</name>
</author>
<author>
<name sortKey="Brooks, Jx" uniqKey="Brooks J">JX Brooks</name>
</author>
<author>
<name sortKey="Jamali, M" uniqKey="Jamali M">M Jamali</name>
</author>
<author>
<name sortKey="Carriot, J" uniqKey="Carriot J">J Carriot</name>
</author>
<author>
<name sortKey="Massot, C" uniqKey="Massot C">C Massot</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Deangelis, Gc" uniqKey="Deangelis G">GC DeAngelis</name>
</author>
<author>
<name sortKey="Angelaki, De" uniqKey="Angelaki D">DE Angelaki</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Desantis, A" uniqKey="Desantis A">A Desantis</name>
</author>
<author>
<name sortKey="Roussel, C" uniqKey="Roussel C">C Roussel</name>
</author>
<author>
<name sortKey="Waszak, F" uniqKey="Waszak F">F Waszak</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ernst, Mo" uniqKey="Ernst M">MO Ernst</name>
</author>
<author>
<name sortKey="Banks, Ms" uniqKey="Banks M">MS Banks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Girshick, Ar" uniqKey="Girshick A">AR Girshick</name>
</author>
<author>
<name sortKey="Landy, Ms" uniqKey="Landy M">MS Landy</name>
</author>
<author>
<name sortKey="Simoncelli, Ep" uniqKey="Simoncelli E">EP Simoncelli</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Graziano, Ms" uniqKey="Graziano M">MS Graziano</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Graziano, Ms" uniqKey="Graziano M">MS Graziano</name>
</author>
<author>
<name sortKey="Yap, Gs" uniqKey="Yap G">GS Yap</name>
</author>
<author>
<name sortKey="Gross, Cg" uniqKey="Gross C">CG Gross</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Haarmeier, T" uniqKey="Haarmeier T">T Haarmeier</name>
</author>
<author>
<name sortKey="Bunjes, F" uniqKey="Bunjes F">F Bunjes</name>
</author>
<author>
<name sortKey="Lindner, A" uniqKey="Lindner A">A Lindner</name>
</author>
<author>
<name sortKey="Berret, E" uniqKey="Berret E">E Berret</name>
</author>
<author>
<name sortKey="Thier, P" uniqKey="Thier P">P Thier</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hommel, B" uniqKey="Hommel B">B Hommel</name>
</author>
<author>
<name sortKey="Musseler, J" uniqKey="Musseler J">J Musseler</name>
</author>
<author>
<name sortKey="Aschersleben, G" uniqKey="Aschersleben G">G Aschersleben</name>
</author>
<author>
<name sortKey="Prinz, W" uniqKey="Prinz W">W Prinz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Izawa, J" uniqKey="Izawa J">J Izawa</name>
</author>
<author>
<name sortKey="Criscimagna Hemminger, Se" uniqKey="Criscimagna Hemminger S">SE Criscimagna-Hemminger</name>
</author>
<author>
<name sortKey="Shadmehr, R" uniqKey="Shadmehr R">R Shadmehr</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kok, P" uniqKey="Kok P">P Kok</name>
</author>
<author>
<name sortKey="Brouwer, Gj" uniqKey="Brouwer G">GJ Brouwer</name>
</author>
<author>
<name sortKey="Van Gerven, Ma" uniqKey="Van Gerven M">MA van Gerven</name>
</author>
<author>
<name sortKey="De Lange, Fp" uniqKey="De Lange F">FP de Lange</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kok, P" uniqKey="Kok P">P Kok</name>
</author>
<author>
<name sortKey="Jehee, Jf" uniqKey="Jehee J">JF Jehee</name>
</author>
<author>
<name sortKey="De Lange, Fp" uniqKey="De Lange F">FP de Lange</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kording, Kp" uniqKey="Kording K">KP Kording</name>
</author>
<author>
<name sortKey="Wolpert, Dm" uniqKey="Wolpert D">DM Wolpert</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Krakauer, Jw" uniqKey="Krakauer J">JW Krakauer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kwon, Os" uniqKey="Kwon O">OS Kwon</name>
</author>
<author>
<name sortKey="Knill, Dc" uniqKey="Knill D">DC Knill</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lindner, A" uniqKey="Lindner A">A Lindner</name>
</author>
<author>
<name sortKey="Schwarz, U" uniqKey="Schwarz U">U Schwarz</name>
</author>
<author>
<name sortKey="Ilg, Uj" uniqKey="Ilg U">UJ Ilg</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Miall, Rc" uniqKey="Miall R">RC Miall</name>
</author>
<author>
<name sortKey="Christensen, Lo" uniqKey="Christensen L">LO Christensen</name>
</author>
<author>
<name sortKey="Cain, O" uniqKey="Cain O">O Cain</name>
</author>
<author>
<name sortKey="Stanley, J" uniqKey="Stanley J">J Stanley</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mutha, Pk" uniqKey="Mutha P">PK Mutha</name>
</author>
<author>
<name sortKey="Sainburg, Rl" uniqKey="Sainburg R">RL Sainburg</name>
</author>
<author>
<name sortKey="Haaland, Ky" uniqKey="Haaland K">KY Haaland</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nagai, Y" uniqKey="Nagai Y">Y Nagai</name>
</author>
<author>
<name sortKey="Suzuki, M" uniqKey="Suzuki M">M Suzuki</name>
</author>
<author>
<name sortKey="Miyazaki, M" uniqKey="Miyazaki M">M Miyazaki</name>
</author>
<author>
<name sortKey="Kitazawa, S" uniqKey="Kitazawa S">S Kitazawa</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Phillips Silver, J" uniqKey="Phillips Silver J">J Phillips-Silver</name>
</author>
<author>
<name sortKey="Trainor, Lj" uniqKey="Trainor L">LJ Trainor</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Reed, Cl" uniqKey="Reed C">CL Reed</name>
</author>
<author>
<name sortKey="Betz, R" uniqKey="Betz R">R Betz</name>
</author>
<author>
<name sortKey="Garza, Jp" uniqKey="Garza J">JP Garza</name>
</author>
<author>
<name sortKey="Roberts, Rj" uniqKey="Roberts R">RJ Roberts</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Reed, Cl" uniqKey="Reed C">CL Reed</name>
</author>
<author>
<name sortKey="Grubb, Jd" uniqKey="Grubb J">JD Grubb</name>
</author>
<author>
<name sortKey="Steele, C" uniqKey="Steele C">C Steele</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Salomon, R" uniqKey="Salomon R">R Salomon</name>
</author>
<author>
<name sortKey="Szpiro Grinberg, S" uniqKey="Szpiro Grinberg S">S Szpiro-Grinberg</name>
</author>
<author>
<name sortKey="Lamy, D" uniqKey="Lamy D">D Lamy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schiffer, Am" uniqKey="Schiffer A">AM Schiffer</name>
</author>
<author>
<name sortKey="Waszak, F" uniqKey="Waszak F">F Waszak</name>
</author>
<author>
<name sortKey="Yeung, N" uniqKey="Yeung N">N Yeung</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Shadmehr, R" uniqKey="Shadmehr R">R Shadmehr</name>
</author>
<author>
<name sortKey="Smith, Ma" uniqKey="Smith M">MA Smith</name>
</author>
<author>
<name sortKey="Krakauer, Jw" uniqKey="Krakauer J">JW Krakauer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sperry, Rw" uniqKey="Sperry R">RW Sperry</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Steinbach, Mj" uniqKey="Steinbach M">MJ Steinbach</name>
</author>
<author>
<name sortKey="Held, R" uniqKey="Held R">R Held</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stocker, Aa" uniqKey="Stocker A">AA Stocker</name>
</author>
<author>
<name sortKey="Simoncelli, Ep" uniqKey="Simoncelli E">EP Simoncelli</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Synofzik, M" uniqKey="Synofzik M">M Synofzik</name>
</author>
<author>
<name sortKey="Lindner, A" uniqKey="Lindner A">A Lindner</name>
</author>
<author>
<name sortKey="Thier, P" uniqKey="Thier P">P Thier</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Synofzik, M" uniqKey="Synofzik M">M Synofzik</name>
</author>
<author>
<name sortKey="Thier, P" uniqKey="Thier P">P Thier</name>
</author>
<author>
<name sortKey="Lindner, A" uniqKey="Lindner A">A Lindner</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Taylor, Ja" uniqKey="Taylor J">JA Taylor</name>
</author>
<author>
<name sortKey="Thoroughman, Ka" uniqKey="Thoroughman K">KA Thoroughman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Beers, Rj" uniqKey="Van Beers R">RJ van Beers</name>
</author>
<author>
<name sortKey="Sittig, Ac" uniqKey="Sittig A">AC Sittig</name>
</author>
<author>
<name sortKey="Gon, Jj" uniqKey="Gon J">JJ Gon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vaziri, S" uniqKey="Vaziri S">S Vaziri</name>
</author>
<author>
<name sortKey="Diedrichsen, J" uniqKey="Diedrichsen J">J Diedrichsen</name>
</author>
<author>
<name sortKey="Shadmehr, R" uniqKey="Shadmehr R">R Shadmehr</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vercher, Jl" uniqKey="Vercher J">JL Vercher</name>
</author>
<author>
<name sortKey="Quaccia, D" uniqKey="Quaccia D">D Quaccia</name>
</author>
<author>
<name sortKey="Gauthier, Gm" uniqKey="Gauthier G">GM Gauthier</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Von Holst, E" uniqKey="Von Holst E">E von Holst</name>
</author>
<author>
<name sortKey="Mittelstaedt, H" uniqKey="Mittelstaedt H">H Mittelstaedt</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Waszak, F" uniqKey="Waszak F">F Waszak</name>
</author>
<author>
<name sortKey="Cardoso Leite, P" uniqKey="Cardoso Leite P">P Cardoso-Leite</name>
</author>
<author>
<name sortKey="Hughes, G" uniqKey="Hughes G">G Hughes</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wolpert, Dm" uniqKey="Wolpert D">DM Wolpert</name>
</author>
<author>
<name sortKey="Flanagan, Jr" uniqKey="Flanagan J">JR Flanagan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wolpert, Dm" uniqKey="Wolpert D">DM Wolpert</name>
</author>
<author>
<name sortKey="Miall, Rc" uniqKey="Miall R">RC Miall</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">J Neurophysiol</journal-id>
<journal-id journal-id-type="iso-abbrev">J. Neurophysiol</journal-id>
<journal-id journal-id-type="hwp">jn</journal-id>
<journal-id journal-id-type="pmc">jn</journal-id>
<journal-id journal-id-type="publisher-id">JN</journal-id>
<journal-title-group>
<journal-title>Journal of Neurophysiology</journal-title>
</journal-title-group>
<issn pub-type="ppub">0022-3077</issn>
<issn pub-type="epub">1522-1598</issn>
<publisher>
<publisher-name>American Physiological Society</publisher-name>
<publisher-loc>Bethesda, MD</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">26823516</article-id>
<article-id pub-id-type="pmc">4808085</article-id>
<article-id pub-id-type="publisher-id">JN-00850-2015</article-id>
<article-id pub-id-type="doi">10.1152/jn.00850.2015</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Higher Neural Functions and Behavior</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Adaptive reliance on the most stable sensory predictions enhances perceptual feature extraction of moving stimuli</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Kumar</surname>
<given-names>Neeraj</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Mutha</surname>
<given-names>Pratik K.</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
</contrib>
<aff id="aff1">
<sup>1</sup>
Centre for Cognitive Science, Indian Institute of Technology Gandhinagar, Ahmedabad, Gujarat, India; and</aff>
<aff id="aff2">
<sup>2</sup>
Department of Biological Engineering, Indian Institute of Technology Gandhinagar, Ahmedabad, Gujarat, India</aff>
</contrib-group>
<author-notes>
<corresp id="cor1">Address for reprint requests and other correspondence: P. K. Mutha,
<addr-line>Dept. of Biological Engineering, Indian Institute of Technology, Gandhinagar, Shed 5-214, VGEC Complex, Chandkheda, Ahmedabad 382424, India</addr-line>
(e-mail:
<email>pm@iitgn.ac.in</email>
).</corresp>
</author-notes>
<pub-date pub-type="epub">
<day>28</day>
<month>1</month>
<year>2016</year>
</pub-date>
<pub-date pub-type="ppub">
<day>1</day>
<month>3</month>
<year>2016</year>
</pub-date>
<pub-date pub-type="pmc-release">
<day>28</day>
<month>1</month>
<year>2016</year>
</pub-date>
<pmc-comment> PMC Release delay is 0 months and 0 days and was based on the . </pmc-comment>
<volume>115</volume>
<issue>3</issue>
<fpage>1654</fpage>
<lpage>1663</lpage>
<history>
<date date-type="received">
<day>1</day>
<month>9</month>
<year>2015</year>
</date>
<date date-type="accepted">
<day>26</day>
<month>1</month>
<year>2016</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright © 2016 the American Physiological Society</copyright-statement>
<copyright-year>2016</copyright-year>
<copyright-holder>American Physiological Society</copyright-holder>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/3.0/deed.en_US">
<license-p>Licensed under
<ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by/3.0/deed.en_US">Creative Commons Attribution CC-BY 3.0</ext-link>
: © the American Physiological Society.</license-p>
</license>
</permissions>
<self-uri content-type="pdf" xlink:href="z9k00316001654.pdf"></self-uri>
<abstract>
<p>The prediction of the sensory outcomes of action is thought to be useful for distinguishing self- vs. externally generated sensations, correcting movements when sensory feedback is delayed, and learning predictive models for motor behavior. Here, we show that aspects of another fundamental function—perception—are enhanced when they entail the contribution of predicted sensory outcomes and that this enhancement relies on the adaptive use of the most stable predictions available. We combined a motor-learning paradigm that imposes new sensory predictions with a dynamic visual search task to first show that perceptual feature extraction of a moving stimulus is poorer when it is based on sensory feedback that is misaligned with those predictions. This was possible because our novel experimental design allowed us to override the “natural” sensory predictions present when any action is performed and separately examine the influence of these two sources on perceptual feature extraction. We then show that if the new predictions induced via motor learning are unreliable, rather than just relying on sensory information for perceptual judgments, as is conventionally thought, then subjects adaptively transition to using other stable sensory predictions to maintain greater accuracy in their perceptual judgments. Finally, we show that when sensory predictions are not modified at all, these judgments are sharper when subjects combine their natural predictions with sensory feedback. Collectively, our results highlight the crucial contribution of sensory predictions to perception and also suggest that the brain intelligently integrates the most stable predictions available with sensory information to maintain high fidelity in perceptual decisions.</p>
</abstract>
<kwd-group>
<kwd>forward model</kwd>
<kwd>motor control</kwd>
<kwd>motor learning</kwd>
<kwd>perception</kwd>
<kwd>sensory predictions</kwd>
</kwd-group>
<funding-group>
<award-group id="award1">
<funding-source>
<named-content content-type="funder-id">501100001409</named-content>
Department of Science and Technology, Ministry of Science and Technology (DST)</funding-source>
</award-group>
<award-group id="award2">
<funding-source>Wellcome Trust - DBT India Alliance</funding-source>
</award-group>
</funding-group>
</article-meta>
</front>
<body>
<p>
<sc>it is widely believed that</sc>
when the brain generates motor commands to produce an action, it also predicts the sensory feedback that might be expected as a consequence of that action (
<xref rid="B2" ref-type="bibr">Bar 2007</xref>
;
<xref rid="B7" ref-type="bibr">Bubic et al. 2010</xref>
;
<xref rid="B17" ref-type="bibr">Hommel et al. 2001</xref>
;
<xref rid="B38" ref-type="bibr">Synofzik et al. 2006</xref>
;
<xref rid="B43" ref-type="bibr">von Holst and Mittelstaedt 1950</xref>
;
<xref rid="B44" ref-type="bibr">Waszak et al. 2012</xref>
). Such predicted action outcomes rely on prior knowledge of the properties of the body and the environment and are thought to be the output of an internal “forward model,” which uses a copy of the outgoing motor commands as its input (
<xref rid="B46" ref-type="bibr">Wolpert and Miall 1996</xref>
). Predicted action outcomes have been hypothesized to carry immense benefits for coordinated behavior (
<xref rid="B25" ref-type="bibr">Miall et al. 2007</xref>
;
<xref rid="B32" ref-type="bibr">Schiffer et al. 2015</xref>
;
<xref rid="B33" ref-type="bibr">Shadmehr et al. 2010</xref>
;
<xref rid="B45" ref-type="bibr">Wolpert and Flanagan 2001</xref>
). Fundamental among these is the notion that these signals are a source of rich sensory information that can be used in conjunction with actual sensory feedback to yield better perceptual estimates of the state of the body and the world. In other words, just as the integration of inputs from multiple sensory modalities, or multisensory integration, is thought to give rise to better perception (
<xref rid="B10" ref-type="bibr">DeAngelis and Angelaki 2012</xref>
;
<xref rid="B12" ref-type="bibr">Ernst and Banks 2002</xref>
;
<xref rid="B40" ref-type="bibr">van Beers et al. 1999</xref>
), it is postulated that the integration of predicted sensory outcomes with actual sensory feedback can yield perceptual estimates that are sharper than those possible by relying on either source alone. One plausible underpinning for this is that better perception arises because predicted action effects may reach the threshold of awareness faster, giving rise to a more detailed stimulus representation (
<xref rid="B20" ref-type="bibr">Kok et al. 2012</xref>
,
<xref rid="B19" ref-type="bibr">2013</xref>
). Nevertheless, compelling experimental evidence directly supporting this idea of enhanced perceptual estimates is unfortunately scarce.</p>
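To make the integration claim concrete: under the standard minimum-variance (maximum-likelihood) cue-combination rule of Ernst and Banks (2002), the combined estimate always has lower variance than either cue alone. The following is a minimal illustrative sketch in Python (not from the paper), assuming independent Gaussian noise on the prediction and on the feedback:

    def combine_cues(x_pred, var_pred, x_fb, var_fb):
        """Reliability-weighted combination of a predicted sensory
        outcome (x_pred) with actual sensory feedback (x_fb)."""
        w = var_fb / (var_pred + var_fb)        # weight on the prediction
        x_hat = w * x_pred + (1.0 - w) * x_fb   # combined estimate
        var_hat = (var_pred * var_fb) / (var_pred + var_fb)
        return x_hat, var_hat                   # var_hat <= min(var_pred, var_fb)

    # Example: a noisy prediction (variance 4) plus sharper feedback (variance 1)
    print(combine_cues(0.0, 4.0, 2.0, 1.0))  # about (1.6, 0.8): sharper than either cue alone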
<p>A convincing case substantiating this notion can be made if it can be demonstrated that decisions about perceptual features extracted from sensory observations alone are less accurate than those derived by integrating predicted action outcomes and sensory feedback. However, when an action is made, the isolation of perceptual estimates derived solely from sensory feedback is challenging, because motor commands associated with that action will always yield sensory predictions (through the operation of the forward model), which can then contribute to those estimates (
<xref rid="B13" ref-type="bibr">Girshick et al. 2011</xref>
). Furthermore, for the same action, these “default” or natural action outcome predictions can vary across individuals depending on their prior knowledge and/or individual beliefs about the body and the environment, which can then lead to differences in perceptual decisions (
<xref rid="B21" ref-type="bibr">Kording and Wolpert 2004</xref>
). In such cases, what may appear to be a suboptimal decision may actually be optimal from an individual standpoint. Here, we sought to overcome these challenges and first assess whether decisions about perceptual attributes are indeed better when action outcome predictions and sensory feedback are aligned and can be integrated and conversely, worse when they are based on sensory feedback alone.</p>
<p>To do so, we combined a motor-learning paradigm with a visual perceptual task. The motor-learning task was critical, since it allowed us to artificially impose similar, stable sensory predictions across subjects (
<xref rid="B38" ref-type="bibr">Synofzik et al. 2006</xref>
), thereby over-riding their natural predictions, which as stated above, could theoretically result in very different perceptual estimates across individuals. Motor learning was achieved via a visuomotor adaptation paradigm, in which subjects learned to adapt their movements to distorted visual feedback of their hand motion. Specifically, this visual feedback was rotated by 10° relative to actual hand motion. Adaptation to this rotation resulted in an update of subjects' sensory predictions. In the perceptual task, subjects searched for and reported the color of a predefined visual target that moved on a screen among other moving distractors while they also moved their unseen hand. Crucially, target motion in this dynamic visual search task was such that it was either consistent or inconsistent with the subjects' predictions that had been updated via visuomotor adaptation. In other words, in this task, the target moved randomly, congruent with the actual hand motion (hereafter referred to as “hand aligned”) or along a direction that was rotated by 10° relative to hand motion (hereafter referred to as “rotated”). We reasoned that when visual feedback of target motion was rotated and therefore, consistent with the subjects' modified predictions, these two streams of information would be combined to yield a more accurate perceptual decision about target color. Conversely, if motion of the target was hand aligned and therefore, inconsistent with their modified predictions, then subjects would rely on visual feedback alone for their perceptual judgments, and the accuracy of the color report would be poorer. We tested this in our first experiment.</p>
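For concreteness, imposing a 10° rotation on visual feedback amounts to rotating each hand-displacement vector by a fixed angle before drawing the cursor (or the target). A minimal sketch with hypothetical names (this is not the authors' code):

    import math

    def rotate(dx, dy, angle_deg):
        """Rotate a 2D hand displacement; negative angles are clockwise."""
        a = math.radians(angle_deg)
        return (dx * math.cos(a) - dy * math.sin(a),
                dx * math.sin(a) + dy * math.cos(a))

    # "hand-aligned" target motion: use (dx, dy) unchanged
    # "rotated" target motion: rotate(dx, dy, -10.0), i.e., 10 deg clockwise
    # "random" distractor motion: rotate(dx, dy, a random 30-70 deg value)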
<p>In our second experiment, we asked how the accuracy of perceptual feature extraction would be affected if the modified action outcome predictions were unreliable. Specifically, we examined whether subjects would continue to rely on their unreliable predictions, use sensory information alone, or transition to using different, more reliable predictions when making perceptual judgments. We made the novel prediction that in this case, subjects would adaptively switch to relying on their natural sensory predictions, available when performing any action, so that greater fidelity in their perceptual judgments is maintained. Finally, based on the findings of our first two experiments, we posited that if the predictions were not modified artificially at all, then subjects would simply rely on their natural predictions to make perceptual judgments, which would also be reflected in the accuracy of their color report. We tested this in our final experiment.</p>
<sec sec-type="methods" id="sec1">
<title>METHODS</title>
<sec id="sec1-1" sec-type="subjects">
<title>Subjects</title>
<p>A total of 36 young, healthy, right-handed individuals with normal or corrected-to-normal vision participated in the study. No subject reported any history of neurological disorders or orthopedic injuries. All subjects provided informed consent before participation. The study was approved by the Institute Ethics Committee of the Indian Institute of Technology Gandhinagar.</p>
</sec>
<sec id="sec1-2">
<title>Setup</title>
<p>The experimental setup was composed of a pseudo-virtual reality system, in which subjects sat facing a horizontally mounted computer screen that was positioned above a digitizing tablet. Subjects made planar movements by moving a hand-held stylus on the tablet. Direct visual feedback of the hand was not available, since it was blocked by the computer screen. Instead, visual feedback about stylus position (and thereby, hand position) was given by means of an on-screen cursor. The position of the cursor could either be veridical or distorted relative to hand motion. Start positions and randomly curved trajectories for tracing (for
<italic>experiments 1</italic>
and
<italic>2</italic>
) were also displayed on the computer screen (
<xref ref-type="fig" rid="F1">Fig. 1
<italic>A</italic>
</xref>
). The tracing task was not used in
<italic>experiment 3</italic>
. The same setup was used for the perceptual judgment task.</p>
<fig id="F1" orientation="portrait" position="float">
<label>Fig. 1.</label>
<caption>
<p>Experimental tasks.
<italic>A</italic>
: example trajectory to be traced during the visuomotor adaptation task. Subjects were asked to trace this trajectory (with the white circle as the start position) on a digitizing tablet using a hand-held stylus. Visual feedback was given by the means of a cursor.
<italic>B</italic>
: control reaching task. Subjects had to make a pointing movement (gray, dashed line) toward a given target (gray circle). No visual feedback was given.
<italic>C</italic>
: perceptual judgment task (modified visual search). Subjects were required to search for and identify the color [red (shown in
<italic>C</italic>
as gray) or green (shown in
<italic>C</italic>
as white)] of a target stimulus, defined before each trial by the position of the gap in its outline. In the example, the target is defined as the square with a gap on the right side.</p>
</caption>
<graphic xlink:href="z9k0041635630001"></graphic>
</fig>
</sec>
<sec id="sec1-3">
<title>Visuomotor Adaptation Task</title>
<p>Subjects were asked to trace a displayed trajectory from a defined start position located at one of its ends (
<xref ref-type="fig" rid="F1">Fig. 1
<italic>A</italic>
</xref>
). For each trial, the trajectory to be traced was randomly selected from a set of 30 curved trajectories that had been previously generated by the experimenter. This trajectory was shown for 20 s/trial. If subjects finished tracing the full path within 20 s, then the trajectory was automatically elongated. Thus subjects had to move for 20 s on each trial. Subjects received visual feedback about their hand position by means of the cursor, but cursor motion was rotated by 10° in the clockwise direction relative to hand motion. The magnitude of the cursor rotation remained fixed at 10° for every trial. Angular error between the hand and cursor positions was calculated every 1 s, and its mean value for each path was derived online. This direction error was computed as the angle between two lines: the line connecting the previous and the current locations of the cursor and the line connecting the corresponding perpendicular points on the displayed trajectory. Clockwise errors were considered positive. Subjects continued the trajectory-tracing task until the mean error between hand and cursor motion was smaller than 0.5°, averaged over the entire trial.</p>
</sec>
<sec id="sec1-4">
<title>Control Reaching Task</title>
<p>Upon exposure to the 10° rotation, subjects began modifying the direction of their hand motion and over a few trials, adapted to the rotation. To confirm that adaptation indeed occurred and that the subjects' predictions had been updated, we used a control task in which subjects were required to perform a simple point-to-point reaching movement. A white circle (diameter = 10°) was presented on the computer screen with a small circular start position located at its center (
<xref ref-type="fig" rid="F1">Fig. 1
<italic>B</italic>
</xref>
). A red pointing target (filled circular disc; diameter = 0.5°, shown in Fig. 1
<italic>B</italic>
as the gray circle) was presented in one of four different positions in the first quadrant of the larger white circle (at the 12-, 1-, 2-, or 3-o'clock positions). Upon receiving a go signal, subjects were asked to perform an out-and-back pointing movement from the start circle to the target. No visual feedback was provided on these trials. For these trials, we calculated movement error as the angle between the line connecting the start position to the reversal point of the actual movement and the line connecting the start position to the target.</p>
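The movement error for these control trials is the same kind of signed angle, now between the start-to-reversal-point line and the start-to-target line; a self-contained sketch:

    import math

    def pointing_error_deg(start, reversal, target):
        """Angle between the start->reversal and start->target lines."""
        v1 = (reversal[0] - start[0], reversal[1] - start[1])
        v2 = (target[0] - start[0], target[1] - start[1])
        cross = v1[0] * v2[1] - v1[1] * v2[0]
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        return math.degrees(math.atan2(cross, dot))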
</sec>
<sec id="sec1-5">
<title>Perceptual Judgment Task</title>
<p>Subjects also performed a dynamic equivalent of a visual search task modeled after
<xref rid="B31" ref-type="bibr">Salomon et al. (2011)</xref>
. Square stimuli were displayed against a black background, inside an invisible black rectangle subtending 89 × 112 mm, centered at the middle of the computer monitor (
<xref ref-type="fig" rid="F1">Fig. 1
<italic>C</italic>
</xref>
). In each search display, six outlined squares were presented. Each square had sides measuring 6.4 mm with a 1-mm gap on any one of the four sides. Before each search trial, a single white square, similar to the squares appearing during the search trial, was shown at the center of the screen. This served as the target for the upcoming search trial. The remaining squares served as distractors. For the target and one other square, the side containing the 1-mm gap was unique. However, for each set of two of the remaining four squares in the search display, the side containing the 1-mm gap was identical. Each square was either light red (RGB = 145,140,125) or light green (RGB = 125,145,131). These values yielded colors that could not be distinguished easily. Each search display consisted of an equal number of green and red items. Subjects were asked to report the color of the target (red or green) during the search trial by pressing designated keys (“Ctrl” for red and “Alt” for green, counterbalanced across subjects) with their left hand as quickly and accurately as possible.</p>
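The gap-side constraints above pin down the display almost completely: of the four sides, the target and one other square each take a unique side, and the remaining four squares form two identical pairs. A plausible generator, sketched with hypothetical names (the RGB values are from the text):

    import random

    RED, GREEN = (145, 140, 125), (125, 145, 131)  # light red / light green
    SIDES = ["top", "bottom", "left", "right"]

    def make_search_display():
        """Return six (gap_side, color) squares; index 0 is the target."""
        s = random.sample(SIDES, 4)
        gap_sides = [s[0],          # target: unique gap side
                     s[1],          # the one other unique square
                     s[2], s[2],    # identical pair
                     s[3], s[3]]    # identical pair
        colors = [RED] * 3 + [GREEN] * 3  # equal numbers of red and green
        random.shuffle(colors)
        return list(zip(gap_sides, colors))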
<p>While performing this search task, subjects also moved the stylus with their right hand on the digitizing tablet throughout the trial. They were instructed that these right-arm movements must be continuous, cover all four quadrants of the display, be neither too fast nor too slow, and start with the presentation of the “start moving” instruction. The motion of one of the stimulus squares of the search display, which could be the target, followed the motion of the subject's hand in real time, whereas the motion of the other stimuli (distractors) followed a path that was a distorted version of the motion of the hand. This distorted path was generated by rotating the hand motion by a random value between 30° and 70°. On any given trial, this value was different for each stimulus. Although subjects were informed that their hand controlled the movement of one of the stimuli, they were also told that this item was not more likely than another to be the target, and so they should not try to look for it. To avoid a situation in which the self-controlled stimulus was stationary and therefore obvious among the rest of the moving stimuli at trial onset, each trial began with a written “start moving” instruction that prompted subjects to begin their movements (without any visual feedback) before the onset of the search display. After 1,200 ms, the search display appeared with all stimuli, including the self-controlled stimulus, already in motion. Since the motion of the stimuli was either directly congruent with the hand or some rotationally transformed version of it, all stimuli moved at the same speed as the hand. The search display was presented until subjects made a response or until 6 s had elapsed. An individual square did not rotate during motion. Subjects performed a total of 192 search trials. The accuracy of the color report was recorded for further analysis.</p>
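Per-frame stimulus motion can then be driven directly from the hand: the self-controlled square takes the raw hand displacement, and each distractor takes that displacement rotated by its own angle, fixed for the trial and drawn from 30-70°; rotation preserves magnitude, which is why all squares move at the hand's speed. A minimal sketch (hypothetical names):

    import math, random

    def trial_rotations(n_stimuli=6, self_idx=0):
        """One fixed rotation per stimulus: 0 deg for the self-controlled
        square, a random 30-70 deg value for every other square."""
        return [0.0 if i == self_idx else random.uniform(30.0, 70.0)
                for i in range(n_stimuli)]

    def step(positions, rotations, hand_dx, hand_dy):
        """Advance each square by the hand displacement rotated by the
        square's own angle; speeds therefore all match the hand."""
        new_positions = []
        for (x, y), a in zip(positions, rotations):
            r = math.radians(a)
            new_positions.append((x + hand_dx * math.cos(r) - hand_dy * math.sin(r),
                                  y + hand_dx * math.sin(r) + hand_dy * math.cos(r)))
        return new_positions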
</sec>
<sec id="sec1-6">
<title>Tone-Discrimination Task</title>
<p>In
<italic>experiment 2</italic>
, subjects performed a tone-discrimination task as they adapted to the visuomotor rotation. This additional task was used to create fragility in the adaptation-induced update to the subjects' sensory predictions (
<xref rid="B39" ref-type="bibr">Taylor and Thoroughman 2007</xref>
). For this, subjects were first screened for their ability to discriminate the frequency of two tones. They performed 100 two-interval, two-alternative forced-choice frequency discriminations. The first tone was centered at 2,000 ± 100 Hz, whereas the frequency of the second tone was changed randomly between ±1 and ±150 Hz relative to the first tone. Both tones were presented for 100 ms at identical volumes with a gap that ranged from 150 to 450 ms. Subjects were instructed to determine quickly and accurately whether the second tone was of higher or lower pitch than the first tone. Subjects made their discrimination decisions by pressing one of two buttons (Ctrl or Alt) with their left hand that corresponded to higher or lower frequency. Each frequency change was repeated 10 times. After each discrimination, subjects were provided with feedback about the accuracy of their response. All tones were encoded to 16 bits at 16 kHz and played through headphones. Subjects were allowed to adjust the volume of the headphones to suit their comfort. Accuracy was recorded for each tone.</p>
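A sketch of how such stimuli could be synthesized (16-bit samples at 16 kHz and 100-ms tones, as in the text); treating the ±100-Hz placement of the reference tone as uniform jitter is an assumption:

    import numpy as np

    FS = 16000  # sampling rate, Hz

    def pure_tone(freq_hz, dur_s=0.1, amp=0.5):
        """A pure tone as 16-bit PCM samples."""
        t = np.arange(int(FS * dur_s)) / FS
        return (amp * np.sin(2 * np.pi * freq_hz * t) * 32767).astype(np.int16)

    def tone_pair(delta_hz):
        """One 2AFC trial: reference near 2,000 Hz, comparison shifted by delta_hz."""
        f1 = 2000 + np.random.uniform(-100, 100)
        return pure_tone(f1), pure_tone(f1 + delta_hz)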
<p>Immediately after the screening, subjects performed the frequency discrimination task without the adaptation task. In this task, the first tone was again centered at 2,000 ± 100 Hz. The specific change in frequency for the second tone was determined from each subject's performance on the prior screening task. The tone was such that it had resulted in 85–95% correct discriminations during screening. The time between two successive tones was set to 200 ms. These same conditions were integrated with the adaptation task to create the dual-task condition in
<italic>experiment 2</italic>
. Discrimination accuracy was recorded, and it was noted that accuracy decreased during the dual-task (tone discrimination plus adaptation) condition compared with when discrimination was tested alone.</p>
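The per-subject frequency change for the dual task is simply read off the screening results; a sketch (the tie-break toward the smallest |delta| is an assumption, since the text only requires 85-95% accuracy):

    def pick_delta(screening):
        """screening maps delta_hz -> proportion correct (10 reps each);
        return a delta whose screening accuracy fell in the 85-95% band."""
        candidates = [d for d, acc in screening.items() if 0.85 <= acc <= 0.95]
        return min(candidates, key=abs) if candidates else None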
</sec>
<sec id="sec1-7">
<title>Task Progression</title>
<sec id="sec1-7-1">
<title>Experiment 1.</title>
<p>In
<italic>experiment 1</italic>
, 12 subjects (mean age = 21.6 ± 0.5 yr; 8 men) first adapted to the 10° visuomotor rotation and then performed the visual search task. During the perceptual task, the square target, whose color had to be identified, moved in one of three different ways on any given trial: hand aligned, rotated 10° relative to the hand, and random. Random target motion was generated by rotating the actual hand motion by a value drawn randomly from 30° to 70°. The target was equally likely to follow all three motion conditions. Subjects also performed the control-reaching task before and after the motor adaptation task to confirm that adaptation had indeed occurred.</p>
</sec>
<sec id="sec1-7-2">
<title>Experiment 2.</title>
<p>In
<italic>experiment 2</italic>
, 12 subjects (mean age = 21.8 ± 0.5 yr; 10 men) performed the same adaptation task, where they were required to adapt to the 10° cursor rotation. However, during adaptation, they also performed the tone-discrimination task. Following this dual task, subjects performed the dynamic visual search task. The conditions for this task remained the same as in
<italic>experiment 1</italic>
. In addition, subjects performed the control-reaching task before and after the adaptation task.</p>
</sec>
<sec id="sec1-7-3">
<title>Experiment 3.</title>
<p>In
<italic>experiment 3</italic>
, 12 subjects (mean age = 22.4 ± 0.5 yr; 9 men) performed only the search task without the adaptation or the control reaching task. In the search task, only two target motion conditions were imposed: hand aligned and random.</p>
</sec>
</sec>
</sec>
<sec sec-type="results" id="sec2">
<title>RESULTS</title>
<sec id="sec2-1">
<title>Experiment 1</title>
<p>In our first experiment, subjects initially adapted to a 10° visuomotor rotation while they traced with their unseen hand a randomly curved trajectory displayed on a screen for a number of trials, each lasting 20 s. When the rotation was first introduced, subjects showed errors in movement direction, which were almost equal to the magnitude of the rotation (mean error on the first second of the first learning trial: 10.3 ± 0.30°;
<xref ref-type="fig" rid="F2">Fig. 2
<italic>B</italic>
</xref>
). However, over the course of the 20-s trial, these direction errors decreased (mean error on the 20th second of the first learning trial: 4.8 ± 0.40°;
<xref ref-type="fig" rid="F2">Fig. 2
<italic>B</italic>
</xref>
). This learning was sustained across trials, such that mean error on subsequent trials was smaller than that on the previous trial (
<xref ref-type="fig" rid="F2">Fig. 2
<italic>A</italic>
</xref>
). Mean error on the first second of the last learning trial was 2.4 ± 0.2°, whereas on the 20th second, it had reduced to 0.06 ± 0.006° (
<xref ref-type="fig" rid="F2">Fig. 2
<italic>C</italic>
</xref>
). Thus over a few trials (mean ± SE = 6.58 ± 0.66 trials), the rotation was almost completely compensated, and direction error was close to zero, a significant change from the error observed on the first trial [t(11) = 37.68,
<italic>P</italic>
< 0.001,
<xref ref-type="fig" rid="F2">Fig. 2
<italic>A</italic>
</xref>
]. Adaptation was confirmed by the presence of after-effects in the control reaching task (
<xref ref-type="fig" rid="F2">Fig. 2
<italic>D</italic>
</xref>
) in which subjects made out-and-back reaching movements to four targets. In this task, whereas subjects made pointing errors of 2.97 ± 2.13° before adaptation, they showed clear after-effects and direction errors of 9.43 ± 6.50° after adapting to the 10° rotation [t(11) = 2.66,
<italic>P</italic>
= 0.02]. Collectively, the reduction in error during adaptation and the presence of after-effects in the control task are indicators of updates to internal predictions about sensory action outcomes (
<xref rid="B22" ref-type="bibr">Krakauer 2009</xref>
;
<xref rid="B26" ref-type="bibr">Mutha et al. 2011</xref>
;
<xref rid="B38" ref-type="bibr">Synofzik et al. 2006</xref>
).</p>
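The after-effect comparison reported here [t(11) = 2.66, P = 0.02] is a paired test on each subject's pre- vs. postadaptation pointing error; a minimal sketch of that style of analysis:

    from scipy import stats

    def after_effect_test(pre_errors, post_errors):
        """Paired t-test across subjects; significantly larger
        postadaptation errors indicate after-effects, i.e., updated
        sensory predictions."""
        return stats.ttest_rel(post_errors, pre_errors)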
<fig id="F2" orientation="portrait" position="float">
<label>Fig. 2.</label>
<caption>
<p>Visuomotor adaptation during trajectory tracing in
<italic>experiment 1</italic>
.
<italic>A</italic>
: reduction in direction error across trials.
<italic>B</italic>
: reduction in direction error during the first 20-s tracing trial.
<italic>C</italic>
: reduction in direction error during the last 20-s tracing trial.
<italic>D</italic>
: mean direction error for each subject during the control reaching task in the pre- and postadaptation phases. Median values (bold lines) and quartiles (dotted lines) are shown. Error bars represent SE.</p>
</caption>
<graphic xlink:href="z9k0041635630002"></graphic>
</fig>
<p>After adaptation, subjects performed the search task in which the target could move in a random, hand-aligned, or 10°-rotated direction. The reaction time (RT) for reporting the color of the target was similar in all three conditions [mean RT for rotated motion = 3.65 ± 0.15 s, hand-aligned motion = 3.72 ± 0.12 s, and random motion = 3.67 ± 0.12 s; one-way ANOVA F(2,22) = 0.10,
<italic>P</italic>
= 0.90]. However, as shown in
<xref ref-type="fig" rid="F3">Fig. 3
<italic>A</italic>
</xref>
, the accuracy of the color report was greatest when the target moved along the rotated path compared with random or hand-aligned motion [mean accuracy for rotated motion = 82.75 ± 1.60%, hand-aligned motion = 66.08 ± 1.68%, and random motion = 62.41 ± 1.38%; one-way ANOVA F(2,22) = 71.76,
<italic>P</italic>
< 0.001]. We explored whether this accuracy showed any time-dependent trends by dividing the search trials into bins, each containing 10% of the trials (
<xref ref-type="fig" rid="F3">Fig. 3
<italic>B</italic>
</xref>
). We observed a significant bin (first, last) X target motion (random, hand aligned, rotated) interaction [two-way ANOVA F(2,55) = 5.31,
<italic>P</italic>
= 0.007]. Importantly, post hoc Tukey's tests revealed that accuracy was significantly higher (
<italic>P</italic>
< 0.001) when the target moved along the rotated path compared with the other target motion conditions during the first bin (rotated vs. random:
<italic>P</italic>
< 0.001; rotated vs. hand aligned:
<italic>P</italic>
< 0.001), as well as the last bin (rotated vs. random:
<italic>P</italic>
< 0.001; rotated vs. hand aligned:
<italic>P</italic>
= 0.04). Mean accuracy of the color report for the target moving along the rotated path was 86.50 ± 2.29% in the first bin, whereas it was 66.58 ± 1.20% and 64.41 ± 2.24% for targets moving in the random and hand-aligned directions, respectively. This difference was maintained even on the last bin of search trials, where mean accuracy for targets moving along the rotated path was 74.75 ± 4.24% compared with 64.66 ± 3.90% and 52.16 ± 2.03% in the hand-aligned and random motion conditions, respectively. The decline in accuracy from the first to the last bin was significant in the rotated (
<italic>P</italic>
= 0.01) and random (
<italic>P</italic>
= 0.001) conditions but not the hand-aligned direction (
<italic>P</italic>
= 0.99). Nonetheless, throughout the search task, accuracy of the perceptual decision was greatest if the target moved not with the hand but along the rotated path, i.e., when actual (feedback about) target motion was consistent with the subjects' newly imposed stable predictions.</p>
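The binning used throughout these results is a straightforward 10-way split of the trial-ordered outcomes; a sketch:

    import numpy as np

    def bin_accuracy(correct, n_bins=10):
        """Split a trial-ordered vector of 0/1 color-report outcomes into
        bins of ~10% of trials and return percent correct per bin."""
        bins = np.array_split(np.asarray(correct, dtype=float), n_bins)
        return [100.0 * b.mean() for b in bins]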
<fig id="F3" orientation="portrait" position="float">
<label>Fig. 3.</label>
<caption>
<p>Performance during the visual search task of
<italic>experiment 1</italic>
.
<italic>A</italic>
: mean percent accuracy of the color report for the hand-aligned (dark gray), rotated (black), and random (light gray) target motion conditions.
<italic>B</italic>
: mean percent accuracy divided into bins. Accuracy was always highest in the condition where the target moved along the rotated path. Error bars represent SE.</p>
</caption>
<graphic xlink:href="z9k0041635630003"></graphic>
</fig>
</sec>
<sec id="sec2-2">
<title>Experiment 2</title>
<p>In our second experiment, subjects were required to perform a tone-discrimination task simultaneously with the adaptation task. This tone-discrimination task did not prevent a reduction in errors within a 20-s trajectory-tracing trial. Mean error on the first second of the first trial was 10.51 ± 0.24°, which reduced to 4.29 ± 0.47° by the 20th second (
<xref ref-type="fig" rid="F4">Fig. 4
<italic>B</italic>
</xref>
), a reduction that was clearly statistically significant [t(11) = 12.83,
<italic>P</italic>
< 0.001]. This pattern was also noted on the 10th adaptation trial [mean error on first second = 9.08 ± 0.30°, mean error on 20th second = 3.60 ± 0.45°, significant decrease from 1st to 20th second, t(11) = 13.26,
<italic>P</italic>
< 0.001,
<xref ref-type="fig" rid="F4">Fig. 4
<italic>C</italic>
</xref>
]. However, the within-trial improvement was not sustained across trials. Mean error on the 10th trial of the adaptation block was not significantly different from that of the first trial [mean error on first trial = 7.5 ± 0.60°; last trial = 6.88 ± 0.57°, t(11) = 1.36,
<italic>P</italic>
= 0.20;
<xref ref-type="fig" rid="F4">Fig. 4
<italic>A</italic>
</xref>
]. Since subjects failed to show an improvement across the 10 trials (as opposed to ∼7 trials in
<italic>experiment 1</italic>
), the adaptation session was stopped. The lack of adaptation was confirmed in the targeted reaching control task (
<xref ref-type="fig" rid="F4">Fig. 4
<italic>D</italic>
</xref>
). Mean preadaptation direction error in the control task was 2.82 ± 2.11°, whereas the postadaptation direction error was 3.27 ± 3.84°. The lack of significant difference in pre- and postadaptation errors [t(11) = 0.40,
<italic>P</italic>
= 0.69] pointed to a clear lack of after-effects, confirming that subjects did not adapt to the rotation. Thus the tone-discrimination task prevented the formation of a stable memory of the rotation, thereby preventing a sustained update of the subject's sensory predictions.</p>
<fig id="F4" orientation="portrait" position="float">
<label>Fig. 4.</label>
<caption>
<p>Lack of adaptation during trajectory tracing task in
<italic>experiment 2</italic>
.
<italic>A</italic>
: mean direction error across trials.
<italic>B</italic>
: reduction in direction error during the first 20-s tracing trial.
<italic>C</italic>
: reduction in direction error during the last 20-s tracing trial. The first and last trials appear the same.
<italic>D</italic>
: mean direction error for each subject in the control reaching task during the pre- and postadaptation phases. Median values (bold lines) and quartiles (dotted lines) are shown. Error bars represent SE.</p>
</caption>
<graphic xlink:href="z9k0041635630004"></graphic>
</fig>
<p>In the dynamic visual search task that followed the adaptation task, RT for reporting the color of the target was again similar in all three conditions [mean RT for rotated motion = 3.73 ± 0.09 s, hand-aligned motion = 3.80 ± 0.08 s, and random motion = 3.98 ± 0.14 s; one-way ANOVA F(2,22) = 1.11,
<italic>P</italic>
= 0.34]. Overall, accuracy of the color report was greater for the target that moved in the hand-aligned direction rather than along the rotated path [
<xref ref-type="fig" rid="F5">Fig. 5
<italic>A</italic>
</xref>
; one-way ANOVA, F(2,22) = 32.66,
<italic>P</italic>
< 0.001]. Interestingly, however, when we split the search trials into bins containing 10% of the trials, a significant bin (first, last) X target motion (random, hand aligned, rotated) interaction [two-way ANOVA, F(2,55) = 20.32,
<italic>P</italic>
< 0.001;
<xref ref-type="fig" rid="F5">Fig. 5
<italic>B</italic>
</xref>
] was observed. Post hoc Tukey's tests revealed that accuracy of the color report was significantly higher for the rotated target compared with the other target motion conditions in the first bin (mean accuracy for rotated condition = 73.75 ± 1.91%, hand aligned condition = 62.16 ± 1.69%, random condition = 53.58 ± 1.37%; rotated vs. random:
<italic>P</italic>
< 0.001, rotated vs. hand aligned:
<italic>P</italic>
= 0.01). However, in the last bin, accuracy was greatest for targets moving in the hand aligned direction and not along the rotated direction (mean accuracy for rotated condition = 52.08 ± 3.43%, hand aligned condition = 70.5 ± 2.87%, random condition = 52.16 ± 2.02%; rotated vs. random:
<italic>P</italic>
= 0.99, rotated vs. hand aligned:
<italic>P</italic>
< 0.001). This suggested that subjects initially relied on their modified predictions, but since these predictions were short lived and therefore, unreliable, rather than just depending on sensory information alone, subjects dynamically transitioned to using other, more stable predictions to maintain greater accuracy in their perceptual decisions.</p>
<fig id="F5" orientation="portrait" position="float">
<label>Fig. 5.</label>
<caption>
<p>Performance during the visual search task of
<italic>experiment 2</italic>
.
<italic>A</italic>
: mean percent accuracy of the color report for the hand-aligned (dark gray), rotated (black), and random (light gray) target motion conditions.
<italic>B</italic>
: mean percent accuracy divided into bins. Accuracy was greater initially for the rotated targets. However, with time, accuracy became greater for the hand-aligned targets. Error bars represent SE.</p>
</caption>
<graphic xlink:href="z9k0041635630005"></graphic>
</fig>
</sec>
<sec id="sec2-3">
<title>Experiment 3</title>
<p>We wondered whether the pattern of results seen in
<italic>experiment 2</italic>
emerged because subjects actually switched from initially relying on their weakly updated predictions to later using their default, natural predictions. If so, then we predicted that in a “baseline” perceptual task, subjects should be more accurate in reporting the color of targets moving in the hand-aligned direction than in a random direction, for which perceptual information must be derived from sensory information alone. We tested this idea in
<italic>experiment 3</italic>
and found this to be the case. Accuracy of the color report was significantly greater [t(11) = 8.70,
<italic>P</italic>
< 0.001] for the hand-aligned targets than the randomly moving targets (
<xref ref-type="fig" rid="F6">Fig. 6
<italic>A</italic>
</xref>
). This was the case throughout the search task (
<xref ref-type="fig" rid="F6">Fig. 6
<italic>B</italic>
</xref>
). Our bin (first, last) × target motion (hand aligned, random) ANOVA did not reveal a significant interaction [F(1, 33) = 1.51,
<italic>P</italic>
= 0.23] but only a significant main effect of target motion [F(1, 33) = 109.23,
<italic>P</italic>
< 0.001]. Thus perceptual accuracy was indeed better when subjects could integrate their natural predictions with sensory feedback, and it suffered when these two sources were misaligned.</p>
<fig id="F6" orientation="portrait" position="float">
<label>Fig. 6.</label>
<caption>
<p>Performance during the perceptual task of
<italic>experiment 3</italic>
.
<italic>A</italic>
: mean percent accuracy of the color report for the hand-aligned (dark gray) and random (light gray) target motion conditions.
<italic>B</italic>
: mean percent accuracy divided into bins, with each bin containing 10% of the total trials. Accuracy was always greater for the hand-aligned targets. Error bars represent SE.</p>
</caption>
<graphic xlink:href="z9k0041635630006"></graphic>
</fig>
<p>In summary, across the three experiments, we noted that accuracy of perceptual judgments is enhanced when sensory predictions are integrated with actual sensory feedback. We also newly discovered that to maintain greater accuracy in the ability to extract perceptual information, subjects adaptively switch to relying on the most stable predictions available.</p>
</sec>
</sec>
<sec sec-type="discussion" id="sec3">
<title>DISCUSSION</title>
<p>For the current work, we used as a starting point the key suggestion made across a number of studies—that motor commands used to generate an action are also used to predict the sensory outcomes that might arise as a consequence of that action (
<xref rid="B2" ref-type="bibr">Bar 2007</xref>
;
<xref rid="B7" ref-type="bibr">Bubic et al. 2010</xref>
;
<xref rid="B17" ref-type="bibr">Hommel et al. 2001</xref>
;
<xref rid="B38" ref-type="bibr">Synofzik et al. 2006</xref>
;
<xref rid="B43" ref-type="bibr">von Holst and Mittelstaedt 1950</xref>
;
<xref rid="B44" ref-type="bibr">Waszak et al. 2012</xref>
). We demonstrate that use of these predicted action outcomes results in superior extraction of perceptual features of moving stimuli and that this enhancement relies on the adaptive use of the most stable predictions available. In this demonstration, two novel aspects of our study stand out. First, we show that perceptual decisions are poorer if they are based on sensory observations alone. During active motion, this is nontrivial, because default action outcome predictions, available whenever an action is made, can always contribute, along with sensory feedback, to perceptual decision-making. We overrode the influence of these predictions by imposing new, stable predictions via visuomotor adaptation and were then able to force perceptual estimates to rest on sensory feedback alone by making the motion of the target in the perceptual task misaligned with the subjects' sensory expectations. In this case, the accuracy of the perceptual judgment was clearly worse than when sensory feedback and predictions were aligned. Second, we show, perhaps for the first time, that when some sensory predictions are unreliable, subjects dynamically switch to using whatever other stable predictions might be available for perceptual judgments; in our case, these were likely the subjects' natural predictions. This finding reveals the tremendous degree of sophistication within the nervous system: when predictions are unstable, perceptual estimates and decisions are not derived simply from sensory information, as is typically suggested (
<xref rid="B21" ref-type="bibr">Kording and Wolpert 2004</xref>
), but rather, the most stable predictions available are co-opted to maintain higher fidelity of perceptual estimates and decisions. Finally, we also show that when the subjects' predictions are not modified, perceptual judgments are sharper when they entail the use of the subjects' natural predictions, along with sensory feedback, compared with conditions where these two sources of information are misaligned.</p>
<p>One of the ways in which a misalignment between the subjects' predictions and target motion was created in the perceptual judgment task was using a condition in which the target moved in a “random” direction relative to the hand (see
<sc>methods</sc>
). In this condition, however, one of the distractors moved congruent with the hand. Therefore, it could be argued that the accuracy of the perceptual judgment in the random condition was poor because the motion of a hand-aligned (and therefore prediction-aligned) distractor was more distracting than in the other conditions, thereby worsening performance. However, our data suggest that this is unlikely to be the case. In
<italic>experiment 1</italic>
, the accuracy of the perceptual judgment in the random condition was not different from that in the hand-aligned condition, in which target motion was congruent with the hand but inconsistent with the subjects' predictions (
<italic>P</italic>
= 0.13). Furthermore, in
<italic>experiment 2</italic>
, accuracy in the random condition was also not different from that in the rotated condition after the subjects had transitioned to using their stable, natural predictions for perceptual judgments (approximately the last 6 bins in
<xref ref-type="fig" rid="F5">Fig. 5
<italic>B</italic>
</xref>
); during this period, target motion was again inconsistent with the subjects' predictions. This implies that for judging and reporting the color of the target, all conditions in which target motion was inconsistent with the subjects' predictions were equivalent. Thus a condition in which a distractor moved consistent with the subjects' predictions (and therefore, the target did not) was not any more distracting than other conditions in which target motion was inconsistent with the subjects' predictions; had it been, accuracy in the random condition would have been much worse than in the other conditions. Thus, although we did not include a condition in which the motion of all stimuli in the perceptual task was truly random (there was always one stimulus that moved with the hand), this is unlikely to have impacted our results.</p>
<sec id="sec3-1">
<title>Influence of Sensory Predictions on Perceptual Estimates</title>
<p>Prior work, examining the role of predicted sensory action outcomes in coordinated behavior, has largely focused on the suggestion that such predictions are helpful in distinguishing whether incoming sensory information is the result of one's own action or due to changes in the environment (
<xref rid="B4" ref-type="bibr">Blakemore et al. 1999</xref>
;
<xref rid="B16" ref-type="bibr">Haarmeier et al. 2001</xref>
;
<xref rid="B34" ref-type="bibr">Sperry 1950</xref>
;
<xref rid="B38" ref-type="bibr">Synofzik et al. 2006</xref>
;
<xref rid="B43" ref-type="bibr">von Holst and Mittelstaedt 1950</xref>
). In this scheme, if actual sensory feedback is inconsistent with the predicted sensory outcomes, then the signals are attributed to an external, environmental source. In the case of a match between the predictions and actual sensory feedback, the cause of the signals is interpreted as being one's own action. This essentially results in enhancement of externally produced sensations and suppression of internally generated ones. This is reflected, for example, in the findings of
<xref rid="B4" ref-type="bibr">Blakemore and colleagues (1999</xref>
,
<xref rid="B5" ref-type="bibr">2000</xref>
), who demonstrated a notable decrease in the “ticklishness” of a stimulus with decreasing spatial and temporal separation between self-produced and externally produced tactile stimulation. Similarly, perceptual steadiness, observed despite changes in retinal images during eye motion (
<xref rid="B8" ref-type="bibr">Crowell et al. 1998</xref>
;
<xref rid="B24" ref-type="bibr">Lindner et al. 2001</xref>
) or changes in body posture (
<xref rid="B6" ref-type="bibr">Brooks and Cullen 2013</xref>
;
<xref rid="B9" ref-type="bibr">Cullen et al. 2011</xref>
), is thought to be dependent on a similar suppression of self-produced sensations. Thus collectively, these studies suggest that predicted sensory outcomes of self-motion, when aligned with actual sensory feedback, can ultimately be used to attenuate sensory afference. Our current results, however, suggest that sensory predictions need not always be used for suppressing the perception of sensations. Rather, they can be combined with sensory feedback to enhance perceptual estimates of the state of the body and the world; in our study, alignment of sensory feedback about target motion during the visual search task with the subjects' modified or natural default predictions always resulted in better perceptual decisions (when these predictions were stable). Our findings thus agree with newer studies that have suggested, for example, that sensory predictions generated as a consequence of saccadic eye movements can be used to provide a more reliable estimate of the position of the target of that movement (
<xref rid="B41" ref-type="bibr">Vaziri et al. 2006</xref>
). Our findings are also in line with the report of
<xref rid="B28" ref-type="bibr">Phillips-Silver and Trainor (2005)</xref>
, who demonstrated enhanced auditory perceptual encoding of particular rhythm patterns in infants who were moved congruent with those patterns.
<xref rid="B11" ref-type="bibr">Desantis et al. (2014)</xref>
have recently proposed an elegant model to explain the divergent findings of suppression and enhancement of perceptual estimates that are derived using sensory predictions. They suggest that perception is attenuated when the intensity of a stimulus needs to be reported; in contrast, perception is enhanced when stimulus identification is required. Our task could be viewed as requiring stimulus identification, in that subjects had to identify and report the color of a moving target. This may then help to explain the enhanced accuracy of the color report when the target moved consistent with the subjects' predictions. Importantly, however, our study significantly expands the scope of prior studies by demonstrating that perceptual judgments appear to be influenced most strongly by the most stable predictions available. The stability of the predictive mechanism or forward model can be modified by training and cognitive load, as we have shown here, or by conditions such as aging, fatigue, or disease (
<xref rid="B33" ref-type="bibr">Shadmehr et al. 2010</xref>
), thus tying perceptual judgments to these conditions.</p>
</sec>
<sec id="sec3-2">
<title>Implications for Computational Models</title>
<p>Our results are aligned with the suggestions of computational models that propose that “priors” and sensory feedback are integrated in a Bayesian manner for optimal perception (
<xref rid="B21" ref-type="bibr">Kording and Wolpert 2004</xref>
;
<xref rid="B23" ref-type="bibr">Kwon and Knill 2013</xref>
;
<xref rid="B36" ref-type="bibr">Stocker and Simoncelli 2006</xref>
). Indeed, these priors can be viewed as predictions (
<xref rid="B33" ref-type="bibr">Shadmehr et al. 2010</xref>
), thus effectively equating the suggestions of the Bayesian models and our experiments here. It must be pointed out, however, that these computational models typically suggest that if the priors are unreliable, then perception is governed by sensory information alone. This is partly due to the fact that these models only include a single prior, in which unreliability is reflected as a large variance in the prior distribution. However, there is evidence that humans can learn more than one prior (
<xref rid="B27" ref-type="bibr">Nagai et al. 2012</xref>
), and it is not unrealistic to imagine that one of these priors could be unreliable. Our results indicate that in such cases, rather than relying on sensory information alone, as is generally suggested by Bayesian models, the nervous system adaptively transitions to using other stable priors for perceptual judgments. This is reflected in the fact that despite actual visual feedback about target motion being completely informative and reliable, subjects did not rely on it alone when the newly adapted prior was unreliable. Had they done so, accuracy would have been similar regardless of whether the target moved along the rotated path, aligned with the hand, or randomly. Instead, the results of our
<italic>experiment 2</italic>
suggest that subjects demonstrate an early reliance on their (weakly) adapted priors, but since these are fragile, they switch to using their natural priors, which were likely developed over their lifetime and were plausibly more stable. Even in
<italic>experiment 1</italic>
, the late reduction in accuracy of the color report of the target moving in the rotated direction and the slight (nonsignificant) but consistent increase in accuracy for the hand-aligned target (bins 9 and 10;
<xref ref-type="fig" rid="F3">Fig. 3
<italic>B</italic>
</xref>
) could be consequences of the operation of the same mechanism, albeit at a much longer time scale, since the artificially imposed predictions were stable for much longer. Our findings thus also have implications for computational models of perception and suggest that these models should incorporate such flexible reliance on the most stable priors available when deriving perceptual estimates.</p>
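<p>The precision-weighted integration and stable-prior selection discussed above can be written down compactly. The following Python sketch is our illustration of the standard Gaussian fusion rule and of the selection behavior our results imply; the numerical values, the selection-by-lowest-variance rule, and the function names are assumptions chosen for demonstration, not a model fitted to the data.</p>
<preformat>
def fuse(prior_mean, prior_var, obs_mean, obs_var):
    """Precision-weighted (Bayesian) fusion of a Gaussian prior with a
    Gaussian observation; returns the posterior mean and variance."""
    w_prior, w_obs = 1.0 / prior_var, 1.0 / obs_var
    post_var = 1.0 / (w_prior + w_obs)
    post_mean = post_var * (w_prior * prior_mean + w_obs * obs_mean)
    return post_mean, post_var

# Two candidate priors about target direction (degrees): a weakly adapted
# one (unstable, large variance) and a natural, lifetime-learned one.
priors = {"adapted": (30.0, 400.0), "natural": (0.0, 25.0)}

obs_mean, obs_var = 2.0, 50.0  # noisy sensory feedback about target motion

# Our results suggest the system leans on the most stable prior available,
# modeled here as choosing the prior with the smallest variance.
stable = min(priors, key=lambda k: priors[k][1])
mean, var = fuse(*priors[stable], obs_mean, obs_var)
print(f"using '{stable}' prior -> estimate {mean:.1f} deg (var {var:.1f})")
</preformat>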
</sec>
<sec id="sec3-3">
<title>Role of Attention and Eye Movements</title>
<p>It could be argued that attentional factors contribute to the greater accuracy in the visual search task under some conditions. A number of studies suggest that targets presented near or congruent with the visible or invisible hand are detected more rapidly (
<xref rid="B30" ref-type="bibr">Reed et al. 2006</xref>
,
<xref rid="B29" ref-type="bibr">2010</xref>
), and people shift their attention away from such near-hand targets more slowly (
<xref rid="B1" ref-type="bibr">Abrams et al. 2008</xref>
). Such effects likely arise due to activation of bimodal sensory neurons in regions of the intraparietal sulcus, the supramarginal gyrus, and the dorsal and ventral premotor cortices (
<xref rid="B14" ref-type="bibr">Graziano 1999</xref>
;
<xref rid="B15" ref-type="bibr">Graziano et al. 1994</xref>
). Thus in the context of our results, it might appear that targets that move consistent with the subjects' predictions are perceptually more salient and are allocated greater attentional resources. This, in turn, would allow them to be identified faster or more easily than other targets. However, it is important to recognize that there are two components to our visual search task: first, the target must be identified, and second, its color must be reported. Even if targets moving consistent with the subjects' predictions are indeed identified earlier, because they enjoy greater attentional priority, it is unclear how this translates into greater accuracy of the color report. Another possibility, however, is that accuracy was greater for these targets because attentional priority to them allowed their features to be extracted within the allocated time. In contrast, there may simply not have been enough time to extract the required features of the target in the other conditions, since these conditions do not entail attentional priority, leading to poorer accuracy. However, subjects had enough time to search for and report the color of the target in all conditions; the subjects' RTs were much shorter than the maximum duration of a search trial. Furthermore, if time had been insufficient, we should have seen cases in which the target was simply not found; this was not the case either. Thus it appears that attentional factors did not specifically facilitate target identification. However, it is plausible that once a target was identified, attention was used to “lock in” on it so that if it moved consistent with the subjects' predictions, then tracking it among the moving distractors was less challenging compared with other target motion conditions (
<xref rid="B35" ref-type="bibr">Steinbach and Held 1968</xref>
;
<xref rid="B42" ref-type="bibr">Vercher et al. 1995</xref>
). This could allow subjects to spend more time with their eyes on the target, which could then enable the development of a better perceptual representation, ultimately enhancing the accuracy of the color report. However, if subjects spent more time on the target to allow a better representation to develop, then the time taken to report the color, manifested ultimately in the RT measure, should have been longer. We did not observe this; RTs across the different target motion conditions were indistinguishable. One could still argue that a better representation could be developed within the same (reaction) time as in the other target motion conditions, because tracking the target was easier. However, this raises the question of why, under the other target motion conditions, i.e., when the target did not move consistent with the predictions, subjects did not spend more time developing a better representation to increase their color report accuracy; more time was indeed available to do so. Future studies that integrate eye tracking with such paradigms can provide more insight into these issues.</p>
</sec>
<sec id="sec3-4">
<title>Potential Neural Substrates</title>
<p>Finally, it must be mentioned that previous research has suggested that the cerebellum is crucial for predicting the sensory consequences of action. Cerebellar lesions disrupt predictive control and also prevent recalibration of sensory predictions (
<xref rid="B3" ref-type="bibr">Bastian 2006</xref>
;
<xref rid="B18" ref-type="bibr">Izawa et al. 2012</xref>
;
<xref rid="B37" ref-type="bibr">Synofzik et al. 2008</xref>
). This suggests that in experiments such as ours, the advantage that sensory predictions provide for perceptual decisions should not be seen in individuals with cerebellar damage. More specifically, we expect that patients with cerebellar lesions will demonstrate similar accuracy across all target motion conditions in the perceptual task. This could be examined in future studies.</p>
</sec>
<sec id="sec3-5" sec-type="conclusions">
<title>Conclusion</title>
<p>In conclusion, we have shown that during active movement, the accuracy of decisions about perceptual attributes of moving stimuli is better if actual sensory feedback and sensory predictions are aligned and can be integrated. Importantly, in case these predictions are unstable or unreliable, the perceptual system appears to transition to using other stable predictions to enhance perceptual judgments rather than simply relying on sensory feedback alone. By uncovering this flexibility, our findings provide new insight into the organization of the perceptual system and also propose refinement of typical computational models of perception.</p>
</sec>
</sec>
<sec id="sec4">
<title>GRANTS</title>
<p content-type="financial-disclosure">Support for this work was provided by the
<funding-source id="gs1">Ramanujan Fellowship, Department of Science and Technology</funding-source>
, Government of India (to P. K. Mutha), and the Wellcome Trust-DBT India Alliance Early Career Fellowship (to N. Kumar).</p>
</sec>
<sec id="sec5">
<title>DISCLOSURES</title>
<p>No conflicts of interest, financial or otherwise, are declared by the authors.</p>
</sec>
<sec id="sec6">
<title>AUTHOR CONTRIBUTIONS</title>
<p>Author contributions: N.K. and P.K.M. conception and design of research; N.K. performed experiments; N.K. and P.K.M. analyzed data; N.K. and P.K.M. interpreted results of experiments; N.K. and P.K.M. prepared figures; N.K. and P.K.M. drafted manuscript; N.K. and P.K.M. edited and revised manuscript; N.K. and P.K.M. approved final version of manuscript.</p>
</sec>
</body>
<back>
<ack>
<title>ACKNOWLEDGMENTS</title>
<p>The authors thank Dr. Chris Miall (University of Birmingham) for helpful comments and suggestions. The authors also deeply acknowledge the research support provided by IIT Gandhinagar.</p>
</ack>
<ref-list>
<title>REFERENCES</title>
<ref id="B1">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Abrams</surname>
<given-names>RA</given-names>
</name>
,
<name>
<surname>Davoli</surname>
<given-names>CC</given-names>
</name>
,
<name>
<surname>Du</surname>
<given-names>F</given-names>
</name>
,
<name>
<surname>Knapp</surname>
<given-names>WH</given-names>
<suffix>3rd</suffix>
</name>
,
<name>
<surname>Paull</surname>
<given-names>D</given-names>
</name>
</person-group>
<article-title>Altered vision near the hands</article-title>
.
<source>Cognition</source>
<volume>107</volume>
:
<fpage>1035</fpage>
<lpage>1047</lpage>
,
<year>2008</year>
.
<pub-id pub-id-type="pmid">17977524</pub-id>
</mixed-citation>
</ref>
<ref id="B2">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bar</surname>
<given-names>M</given-names>
</name>
</person-group>
<article-title>The proactive brain: using analogies and associations to generate predictions</article-title>
.
<source>Trends Cogn Sci</source>
<volume>11</volume>
:
<fpage>280</fpage>
<lpage>289</lpage>
,
<year>2007</year>
.
<pub-id pub-id-type="pmid">17548232</pub-id>
</mixed-citation>
</ref>
<ref id="B3">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bastian</surname>
<given-names>AJ</given-names>
</name>
</person-group>
<article-title>Learning to predict the future: the cerebellum adapts feedforward movement control</article-title>
.
<source>Curr Opin Neurobiol</source>
<volume>16</volume>
:
<fpage>645</fpage>
<lpage>649</lpage>
,
<year>2006</year>
.
<pub-id pub-id-type="pmid">17071073</pub-id>
</mixed-citation>
</ref>
<ref id="B4">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Blakemore</surname>
<given-names>SJ</given-names>
</name>
,
<name>
<surname>Frith</surname>
<given-names>CD</given-names>
</name>
,
<name>
<surname>Wolpert</surname>
<given-names>DM</given-names>
</name>
</person-group>
<article-title>Spatio-temporal prediction modulates the perception of self-produced stimuli</article-title>
.
<source>J Cogn Neurosci</source>
<volume>11</volume>
:
<fpage>551</fpage>
<lpage>559</lpage>
,
<year>1999</year>
.
<pub-id pub-id-type="pmid">10511643</pub-id>
</mixed-citation>
</ref>
<ref id="B5">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Blakemore</surname>
<given-names>SJ</given-names>
</name>
,
<name>
<surname>Wolpert</surname>
<given-names>D</given-names>
</name>
,
<name>
<surname>Frith</surname>
<given-names>C</given-names>
</name>
</person-group>
<article-title>Why can't you tickle yourself?</article-title>
<source>Neuroreport</source>
<volume>11</volume>
:
<fpage>R11</fpage>
<lpage>R16</lpage>
,
<year>2000</year>
.
<pub-id pub-id-type="pmid">10943682</pub-id>
</mixed-citation>
</ref>
<ref id="B6">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Brooks</surname>
<given-names>JX</given-names>
</name>
,
<name>
<surname>Cullen</surname>
<given-names>KE</given-names>
</name>
</person-group>
<article-title>The primate cerebellum selectively encodes unexpected self-motion</article-title>
.
<source>Curr Biol</source>
<volume>23</volume>
:
<fpage>947</fpage>
<lpage>955</lpage>
,
<year>2013</year>
.
<pub-id pub-id-type="pmid">23684973</pub-id>
</mixed-citation>
</ref>
<ref id="B7">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bubic</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>von Cramon</surname>
<given-names>DY</given-names>
</name>
,
<name>
<surname>Schubotz</surname>
<given-names>RI</given-names>
</name>
</person-group>
<article-title>Prediction, cognition and the brain</article-title>
.
<source>Front Hum Neurosci</source>
<volume>4</volume>
:
<fpage>25</fpage>
,
<year>2010</year>
.
<pub-id pub-id-type="pmid">20631856</pub-id>
</mixed-citation>
</ref>
<ref id="B8">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Crowell</surname>
<given-names>JA</given-names>
</name>
,
<name>
<surname>Banks</surname>
<given-names>MS</given-names>
</name>
,
<name>
<surname>Shenoy</surname>
<given-names>KV</given-names>
</name>
,
<name>
<surname>Andersen</surname>
<given-names>RA</given-names>
</name>
</person-group>
<article-title>Visual self-motion perception during head turns</article-title>
.
<source>Nat Neurosci</source>
<volume>1</volume>
:
<fpage>732</fpage>
<lpage>737</lpage>
,
<year>1998</year>
.
<pub-id pub-id-type="pmid">10196591</pub-id>
</mixed-citation>
</ref>
<ref id="B9">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cullen</surname>
<given-names>KE</given-names>
</name>
,
<name>
<surname>Brooks</surname>
<given-names>JX</given-names>
</name>
,
<name>
<surname>Jamali</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Carriot</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Massot</surname>
<given-names>C</given-names>
</name>
</person-group>
<article-title>Internal models of self-motion: computations that suppress vestibular reafference in early vestibular processing</article-title>
.
<source>Exp Brain Res</source>
<volume>210</volume>
:
<fpage>377</fpage>
<lpage>388</lpage>
,
<year>2011</year>
.
<pub-id pub-id-type="pmid">21286693</pub-id>
</mixed-citation>
</ref>
<ref id="B10">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>DeAngelis</surname>
<given-names>GC</given-names>
</name>
,
<name>
<surname>Angelaki</surname>
<given-names>DE</given-names>
</name>
</person-group>
<article-title>Visual-vestibular integration for self-motion perception</article-title>
. In:
<source>The Neural Bases of Multisensory Processes</source>
, edited by
<person-group person-group-type="editor">
<name>
<surname>Murray</surname>
<given-names>MM</given-names>
</name>
and
<name>
<surname>Wallace</surname>
<given-names>MT</given-names>
</name>
</person-group>
<publisher-loc>Boca Raton, FL</publisher-loc>
:
<publisher-name>CRC Press/Taylor & Francis</publisher-name>
,
<year>2012</year>
,
<comment>chapt. 31</comment>
.</mixed-citation>
</ref>
<ref id="B11">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Desantis</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Roussel</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Waszak</surname>
<given-names>F</given-names>
</name>
</person-group>
<article-title>The temporal dynamics of the perceptual consequences of action-effect prediction</article-title>
.
<source>Cognition</source>
<volume>132</volume>
:
<fpage>243</fpage>
<lpage>250</lpage>
,
<year>2014</year>
.
<pub-id pub-id-type="pmid">24853627</pub-id>
</mixed-citation>
</ref>
<ref id="B12">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ernst</surname>
<given-names>MO</given-names>
</name>
,
<name>
<surname>Banks</surname>
<given-names>MS</given-names>
</name>
</person-group>
<article-title>Humans integrate visual and haptic information in a statistically optimal fashion</article-title>
.
<source>Nature</source>
<volume>415</volume>
:
<fpage>429</fpage>
<lpage>433</lpage>
,
<year>2002</year>
.
<pub-id pub-id-type="pmid">11807554</pub-id>
</mixed-citation>
</ref>
<ref id="B13">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Girshick</surname>
<given-names>AR</given-names>
</name>
,
<name>
<surname>Landy</surname>
<given-names>MS</given-names>
</name>
,
<name>
<surname>Simoncelli</surname>
<given-names>EP</given-names>
</name>
</person-group>
<article-title>Cardinal rules: visual orientation perception reflects knowledge of environmental statistics</article-title>
.
<source>Nat Neurosci</source>
<volume>14</volume>
:
<fpage>926</fpage>
<lpage>932</lpage>
,
<year>2011</year>
.
<pub-id pub-id-type="pmid">21642976</pub-id>
</mixed-citation>
</ref>
<ref id="B14">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Graziano</surname>
<given-names>MS</given-names>
</name>
</person-group>
<article-title>Where is my arm? The relative role of vision and proprioception in the neuronal representation of limb position</article-title>
.
<source>Proc Natl Acad Sci USA</source>
<volume>96</volume>
:
<fpage>10418</fpage>
<lpage>10421</lpage>
,
<year>1999</year>
.
<pub-id pub-id-type="pmid">10468623</pub-id>
</mixed-citation>
</ref>
<ref id="B15">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Graziano</surname>
<given-names>MS</given-names>
</name>
,
<name>
<surname>Yap</surname>
<given-names>GS</given-names>
</name>
,
<name>
<surname>Gross</surname>
<given-names>CG</given-names>
</name>
</person-group>
<article-title>Coding of visual space by premotor neurons</article-title>
.
<source>Science</source>
<volume>266</volume>
:
<fpage>1054</fpage>
<lpage>1057</lpage>
,
<year>1994</year>
.
<pub-id pub-id-type="pmid">7973661</pub-id>
</mixed-citation>
</ref>
<ref id="B16">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Haarmeier</surname>
<given-names>T</given-names>
</name>
,
<name>
<surname>Bunjes</surname>
<given-names>F</given-names>
</name>
,
<name>
<surname>Lindner</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Berret</surname>
<given-names>E</given-names>
</name>
,
<name>
<surname>Thier</surname>
<given-names>P</given-names>
</name>
</person-group>
<article-title>Optimizing visual motion perception during eye movements</article-title>
.
<source>Neuron</source>
<volume>32</volume>
:
<fpage>527</fpage>
<lpage>535</lpage>
,
<year>2001</year>
.
<pub-id pub-id-type="pmid">11709162</pub-id>
</mixed-citation>
</ref>
<ref id="B17">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hommel</surname>
<given-names>B</given-names>
</name>
,
<name>
<surname>Musseler</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Aschersleben</surname>
<given-names>G</given-names>
</name>
,
<name>
<surname>Prinz</surname>
<given-names>W</given-names>
</name>
</person-group>
<article-title>The theory of event coding (TEC): a framework for perception and action planning</article-title>
.
<source>Behav Brain Sci</source>
<volume>24</volume>
:
<fpage>849</fpage>
<lpage>878</lpage>
; discussion
<fpage>878</fpage>
<lpage>937</lpage>
,
<year>2001</year>
.
<pub-id pub-id-type="pmid">12239891</pub-id>
</mixed-citation>
</ref>
<ref id="B18">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Izawa</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Criscimagna-Hemminger</surname>
<given-names>SE</given-names>
</name>
,
<name>
<surname>Shadmehr</surname>
<given-names>R</given-names>
</name>
</person-group>
<article-title>Cerebellar contributions to reach adaptation and learning sensory consequences of action</article-title>
.
<source>J Neurosci</source>
<volume>32</volume>
:
<fpage>4230</fpage>
<lpage>4239</lpage>
,
<year>2012</year>
.
<pub-id pub-id-type="pmid">22442085</pub-id>
</mixed-citation>
</ref>
<ref id="B19">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kok</surname>
<given-names>P</given-names>
</name>
,
<name>
<surname>Brouwer</surname>
<given-names>GJ</given-names>
</name>
,
<name>
<surname>van Gerven</surname>
<given-names>MA</given-names>
</name>
,
<name>
<surname>de Lange</surname>
<given-names>FP</given-names>
</name>
</person-group>
<article-title>Prior expectations bias sensory representations in visual cortex</article-title>
.
<source>J Neurosci</source>
<volume>33</volume>
:
<fpage>16275</fpage>
<lpage>16284</lpage>
,
<year>2013</year>
.
<pub-id pub-id-type="pmid">24107959</pub-id>
</mixed-citation>
</ref>
<ref id="B20">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kok</surname>
<given-names>P</given-names>
</name>
,
<name>
<surname>Jehee</surname>
<given-names>JF</given-names>
</name>
,
<name>
<surname>de Lange</surname>
<given-names>FP</given-names>
</name>
</person-group>
<article-title>Less is more: expectation sharpens representations in the primary visual cortex</article-title>
.
<source>Neuron</source>
<volume>75</volume>
:
<fpage>265</fpage>
<lpage>270</lpage>
,
<year>2012</year>
.
<pub-id pub-id-type="pmid">22841311</pub-id>
</mixed-citation>
</ref>
<ref id="B21">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kording</surname>
<given-names>KP</given-names>
</name>
,
<name>
<surname>Wolpert</surname>
<given-names>DM</given-names>
</name>
</person-group>
<article-title>Bayesian integration in sensorimotor learning</article-title>
.
<source>Nature</source>
<volume>427</volume>
:
<fpage>244</fpage>
,
<year>2004</year>
.
<pub-id pub-id-type="pmid">14724638</pub-id>
</mixed-citation>
</ref>
<ref id="B22">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Krakauer</surname>
<given-names>JW</given-names>
</name>
</person-group>
<article-title>Motor learning and consolidation: the case of visuomotor rotation</article-title>
.
<source>Adv Exp Med Biol</source>
<volume>629</volume>
:
<fpage>405</fpage>
<lpage>421</lpage>
,
<year>2009</year>
.
<pub-id pub-id-type="pmid">19227512</pub-id>
</mixed-citation>
</ref>
<ref id="B23">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kwon</surname>
<given-names>OS</given-names>
</name>
,
<name>
<surname>Knill</surname>
<given-names>DC</given-names>
</name>
</person-group>
<article-title>The brain uses adaptive internal models of scene statistics for sensorimotor estimation and planning</article-title>
.
<source>Proc Natl Acad Sci USA</source>
<volume>110</volume>
:
<fpage>E1064</fpage>
<lpage>E1073</lpage>
,
<year>2013</year>
.
<pub-id pub-id-type="pmid">23440185</pub-id>
</mixed-citation>
</ref>
<ref id="B24">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lindner</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Schwarz</surname>
<given-names>U</given-names>
</name>
,
<name>
<surname>Ilg</surname>
<given-names>UJ</given-names>
</name>
</person-group>
<article-title>Cancellation of self-induced retinal image motion during smooth pursuit eye movements</article-title>
.
<source>Vision Res</source>
<volume>41</volume>
:
<fpage>1685</fpage>
<lpage>1694</lpage>
,
<year>2001</year>
.
<pub-id pub-id-type="pmid">11348650</pub-id>
</mixed-citation>
</ref>
<ref id="B25">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Miall</surname>
<given-names>RC</given-names>
</name>
,
<name>
<surname>Christensen</surname>
<given-names>LO</given-names>
</name>
,
<name>
<surname>Cain</surname>
<given-names>O</given-names>
</name>
,
<name>
<surname>Stanley</surname>
<given-names>J</given-names>
</name>
</person-group>
<article-title>Disruption of state estimation in the human lateral cerebellum</article-title>
.
<source>PLoS Biol</source>
<volume>5</volume>
:
<fpage>e316</fpage>
,
<year>2007</year>
.
<pub-id pub-id-type="pmid">18044990</pub-id>
</mixed-citation>
</ref>
<ref id="B26">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mutha</surname>
<given-names>PK</given-names>
</name>
,
<name>
<surname>Sainburg</surname>
<given-names>RL</given-names>
</name>
,
<name>
<surname>Haaland</surname>
<given-names>KY</given-names>
</name>
</person-group>
<article-title>Left parietal regions are critical for adaptive visuomotor control</article-title>
.
<source>J Neurosci</source>
<volume>31</volume>
:
<fpage>6972</fpage>
<lpage>6981</lpage>
,
<year>2011</year>
.
<pub-id pub-id-type="pmid">21562259</pub-id>
</mixed-citation>
</ref>
<ref id="B27">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nagai</surname>
<given-names>Y</given-names>
</name>
,
<name>
<surname>Suzuki</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Miyazaki</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Kitazawa</surname>
<given-names>S</given-names>
</name>
</person-group>
<article-title>Acquisition of multiple prior distributions in tactile temporal order judgment</article-title>
.
<source>Front Psychol</source>
<volume>3</volume>
:
<fpage>276</fpage>
,
<year>2012</year>
.
<pub-id pub-id-type="pmid">22912622</pub-id>
</mixed-citation>
</ref>
<ref id="B28">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Phillips-Silver</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Trainor</surname>
<given-names>LJ</given-names>
</name>
</person-group>
<article-title>Feeling the beat: movement influences infant rhythm perception</article-title>
.
<source>Science</source>
<volume>308</volume>
:
<fpage>1430</fpage>
,
<year>2005</year>
.
<pub-id pub-id-type="pmid">15933193</pub-id>
</mixed-citation>
</ref>
<ref id="B29">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Reed</surname>
<given-names>CL</given-names>
</name>
,
<name>
<surname>Betz</surname>
<given-names>R</given-names>
</name>
,
<name>
<surname>Garza</surname>
<given-names>JP</given-names>
</name>
,
<name>
<surname>Roberts</surname>
<given-names>RJ</given-names>
<suffix>Jr</suffix>
</name>
</person-group>
<article-title>Grab it! Biased attention in functional hand and tool space</article-title>
.
<source>Atten Percept Psychophys</source>
<volume>72</volume>
:
<fpage>236</fpage>
<lpage>245</lpage>
,
<year>2010</year>
.
<pub-id pub-id-type="pmid">20045892</pub-id>
</mixed-citation>
</ref>
<ref id="B30">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Reed</surname>
<given-names>CL</given-names>
</name>
,
<name>
<surname>Grubb</surname>
<given-names>JD</given-names>
</name>
,
<name>
<surname>Steele</surname>
<given-names>C</given-names>
</name>
</person-group>
<article-title>Hands up: attentional prioritization of space near the hand</article-title>
.
<source>J Exp Psychol Hum Percept Perform</source>
<volume>32</volume>
:
<fpage>166</fpage>
<lpage>177</lpage>
,
<year>2006</year>
.
<pub-id pub-id-type="pmid">16478334</pub-id>
</mixed-citation>
</ref>
<ref id="B31">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Salomon</surname>
<given-names>R</given-names>
</name>
,
<name>
<surname>Szpiro-Grinberg</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Lamy</surname>
<given-names>D</given-names>
</name>
</person-group>
<article-title>Self-motion holds a special status in visual processing</article-title>
.
<source>PLoS One</source>
<volume>6</volume>
:
<fpage>e24347</fpage>
,
<year>2011</year>
.
<pub-id pub-id-type="pmid">21998629</pub-id>
</mixed-citation>
</ref>
<ref id="B32">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Schiffer</surname>
<given-names>AM</given-names>
</name>
,
<name>
<surname>Waszak</surname>
<given-names>F</given-names>
</name>
,
<name>
<surname>Yeung</surname>
<given-names>N</given-names>
</name>
</person-group>
<article-title>The role of prediction and outcomes in adaptive cognitive control</article-title>
.
<source>J Physiol</source>
<volume>109</volume>
:
<fpage>38</fpage>
<lpage>52</lpage>
,
<year>2015</year>
.</mixed-citation>
</ref>
<ref id="B33">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Shadmehr</surname>
<given-names>R</given-names>
</name>
,
<name>
<surname>Smith</surname>
<given-names>MA</given-names>
</name>
,
<name>
<surname>Krakauer</surname>
<given-names>JW</given-names>
</name>
</person-group>
<article-title>Error correction, sensory prediction, and adaptation in motor control</article-title>
.
<source>Annu Rev Neurosci</source>
<volume>33</volume>
:
<fpage>89</fpage>
<lpage>108</lpage>
,
<year>2010</year>
.
<pub-id pub-id-type="pmid">20367317</pub-id>
</mixed-citation>
</ref>
<ref id="B34">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sperry</surname>
<given-names>RW</given-names>
</name>
</person-group>
<article-title>Neural basis of the spontaneous optokinetic response produced by visual inversion</article-title>
.
<source>J Comp Physiol Psychol</source>
<volume>43</volume>
:
<fpage>482</fpage>
<lpage>489</lpage>
,
<year>1950</year>
.
<pub-id pub-id-type="pmid">14794830</pub-id>
</mixed-citation>
</ref>
<ref id="B35">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Steinbach</surname>
<given-names>MJ</given-names>
</name>
,
<name>
<surname>Held</surname>
<given-names>R</given-names>
</name>
</person-group>
<article-title>Eye tracking of observer-generated target movements</article-title>
.
<source>Science</source>
<volume>161</volume>
:
<fpage>187</fpage>
<lpage>188</lpage>
,
<year>1968</year>
.
<pub-id pub-id-type="pmid">5657071</pub-id>
</mixed-citation>
</ref>
<ref id="B36">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Stocker</surname>
<given-names>AA</given-names>
</name>
,
<name>
<surname>Simoncelli</surname>
<given-names>EP</given-names>
</name>
</person-group>
<article-title>Noise characteristics and prior expectations in human visual speed perception</article-title>
.
<source>Nat Neurosci</source>
<volume>9</volume>
:
<fpage>578</fpage>
<lpage>585</lpage>
,
<year>2006</year>
.
<pub-id pub-id-type="pmid">16547513</pub-id>
</mixed-citation>
</ref>
<ref id="B37">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Synofzik</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Lindner</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Thier</surname>
<given-names>P</given-names>
</name>
</person-group>
<article-title>The cerebellum updates predictions about the visual consequences of one's behavior</article-title>
.
<source>Curr Biol</source>
<volume>18</volume>
:
<fpage>814</fpage>
<lpage>818</lpage>
,
<year>2008</year>
.
<pub-id pub-id-type="pmid">18514520</pub-id>
</mixed-citation>
</ref>
<ref id="B38">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Synofzik</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Thier</surname>
<given-names>P</given-names>
</name>
,
<name>
<surname>Lindner</surname>
<given-names>A</given-names>
</name>
</person-group>
<article-title>Internalizing agency of self-action: perception of one's own hand movements depends on an adaptable prediction about the sensory action outcome</article-title>
.
<source>J Neurophysiol</source>
<volume>96</volume>
:
<fpage>1592</fpage>
<lpage>1601</lpage>
,
<year>2006</year>
.
<pub-id pub-id-type="pmid">16738220</pub-id>
</mixed-citation>
</ref>
<ref id="B39">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Taylor</surname>
<given-names>JA</given-names>
</name>
,
<name>
<surname>Thoroughman</surname>
<given-names>KA</given-names>
</name>
</person-group>
<article-title>Divided attention impairs human motor adaptation but not feedback control</article-title>
.
<source>J Neurophysiol</source>
<volume>98</volume>
:
<fpage>317</fpage>
<lpage>326</lpage>
,
<year>2007</year>
.
<pub-id pub-id-type="pmid">17460104</pub-id>
</mixed-citation>
</ref>
<ref id="B40">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>van Beers</surname>
<given-names>RJ</given-names>
</name>
,
<name>
<surname>Sittig</surname>
<given-names>AC</given-names>
</name>
,
<name>
<surname>Gon</surname>
<given-names>JJ</given-names>
</name>
</person-group>
<article-title>Integration of proprioceptive and visual position-information: an experimentally supported model</article-title>
.
<source>J Neurophysiol</source>
<volume>81</volume>
:
<fpage>1355</fpage>
<lpage>1364</lpage>
,
<year>1999</year>
.
<pub-id pub-id-type="pmid">10085361</pub-id>
</mixed-citation>
</ref>
<ref id="B41">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Vaziri</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Diedrichsen</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Shadmehr</surname>
<given-names>R</given-names>
</name>
</person-group>
<article-title>Why does the brain predict sensory consequences of oculomotor commands? Optimal integration of the predicted and the actual sensory feedback</article-title>
.
<source>J Neurosci</source>
<volume>26</volume>
:
<fpage>4188</fpage>
<lpage>4197</lpage>
,
<year>2006</year>
.
<pub-id pub-id-type="pmid">16624939</pub-id>
</mixed-citation>
</ref>
<ref id="B42">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Vercher</surname>
<given-names>JL</given-names>
</name>
,
<name>
<surname>Quaccia</surname>
<given-names>D</given-names>
</name>
,
<name>
<surname>Gauthier</surname>
<given-names>GM</given-names>
</name>
</person-group>
<article-title>Oculo-manual coordination control: respective role of visual and non-visual information in ocular tracking of self-moved targets</article-title>
.
<source>Exp Brain Res</source>
<volume>103</volume>
:
<fpage>311</fpage>
<lpage>322</lpage>
,
<year>1995</year>
.
<pub-id pub-id-type="pmid">7789438</pub-id>
</mixed-citation>
</ref>
<ref id="B43">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>von Holst</surname>
<given-names>E</given-names>
</name>
,
<name>
<surname>Mittelstaedt</surname>
<given-names>H</given-names>
</name>
</person-group>
<article-title>Das reafferenzprinzip</article-title>
.
<source>Naturwissenschaften</source>
<volume>37</volume>
:
<fpage>464</fpage>
<lpage>476</lpage>
,
<year>1950</year>
.</mixed-citation>
</ref>
<ref id="B44">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Waszak</surname>
<given-names>F</given-names>
</name>
,
<name>
<surname>Cardoso-Leite</surname>
<given-names>P</given-names>
</name>
,
<name>
<surname>Hughes</surname>
<given-names>G</given-names>
</name>
</person-group>
<article-title>Action effect anticipation: neurophysiological basis and functional consequences</article-title>
.
<source>Neurosci Biobehav Rev</source>
<volume>36</volume>
:
<fpage>943</fpage>
<lpage>959</lpage>
,
<year>2012</year>
.
<pub-id pub-id-type="pmid">22108008</pub-id>
</mixed-citation>
</ref>
<ref id="B45">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wolpert</surname>
<given-names>DM</given-names>
</name>
,
<name>
<surname>Flanagan</surname>
<given-names>JR</given-names>
</name>
</person-group>
<article-title>Motor prediction</article-title>
.
<source>Curr Biol</source>
<volume>11</volume>
:
<fpage>R729</fpage>
<lpage>R732</lpage>
,
<year>2001</year>
.
<pub-id pub-id-type="pmid">11566114</pub-id>
</mixed-citation>
</ref>
<ref id="B46">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wolpert</surname>
<given-names>DM</given-names>
</name>
,
<name>
<surname>Miall</surname>
<given-names>RC</given-names>
</name>
</person-group>
<article-title>Forward models for physiological motor control</article-title>
.
<source>Neural Netw</source>
<volume>9</volume>
:
<fpage>1265</fpage>
<lpage>1279</lpage>
,
<year>1996</year>
.
<pub-id pub-id-type="pmid">12662535</pub-id>
</mixed-citation>
</ref>
</ref-list>
</back>
</pmc>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Pmc/Curation
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000509 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd -nk 000509 | SxmlIndent | more

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Pmc
   |étape=   Curation
   |type=    RBID
   |clé=     PMC:4808085
   |texte=   Adaptive reliance on the most stable sensory predictions enhances perceptual feature extraction of moving stimuli
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Curation/RBID.i   -Sk "pubmed:26823516" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024