Exploration server on haptic devices

Internal identifier: 002206 (Pmc/Curation); previous: 002205; next: 002207

Continuous Evolution of Statistical Estimators for Optimal Decision-Making

Authors: Ian Saunders; Sethu Vijayakumar

Source:
RBID: PMC:3382620

Abstract

In many everyday situations, humans must make precise decisions in the presence of uncertain sensory information. For example, when asked to combine information from multiple sources we often assign greater weight to the more reliable information. It has been proposed that statistical-optimality often observed in human perception and decision-making requires that humans have access to the uncertainty of both their senses and their decisions. However, the mechanisms underlying the processes of uncertainty estimation remain largely unexplored. In this paper we introduce a novel visual tracking experiment that requires subjects to continuously report their evolving perception of the mean and uncertainty of noisy visual cues over time. We show that subjects accumulate sensory information over the course of a trial to form a continuous estimate of the mean, hindered only by natural kinematic constraints (sensorimotor latency etc.). Furthermore, subjects have access to a measure of their continuous objective uncertainty, rapidly acquired from sensory information available within a trial, but limited by natural kinematic constraints and a conservative margin for error. Our results provide the first direct evidence of the continuous mean and uncertainty estimation mechanisms in humans that may underlie optimal decision making.


URL: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3382620
DOI: 10.1371/journal.pone.0037547
PubMed: 22761657
PubMed Central: 3382620

Links to Exploration step

PMC:3382620

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Continuous Evolution of Statistical Estimators for Optimal Decision-Making</title>
<author>
<name sortKey="Saunders, Ian" sort="Saunders, Ian" uniqKey="Saunders I" first="Ian" last="Saunders">Ian Saunders</name>
<affiliation>
<nlm:aff id="aff1"></nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Vijayakumar, Sethu" sort="Vijayakumar, Sethu" uniqKey="Vijayakumar S" first="Sethu" last="Vijayakumar">Sethu Vijayakumar</name>
<affiliation>
<nlm:aff id="aff1"></nlm:aff>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">22761657</idno>
<idno type="pmc">3382620</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3382620</idno>
<idno type="RBID">PMC:3382620</idno>
<idno type="doi">10.1371/journal.pone.0037547</idno>
<date when="2012">2012</date>
<idno type="wicri:Area/Pmc/Corpus">002206</idno>
<idno type="wicri:Area/Pmc/Curation">002206</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Continuous Evolution of Statistical Estimators for Optimal Decision-Making</title>
<author>
<name sortKey="Saunders, Ian" sort="Saunders, Ian" uniqKey="Saunders I" first="Ian" last="Saunders">Ian Saunders</name>
<affiliation>
<nlm:aff id="aff1"></nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Vijayakumar, Sethu" sort="Vijayakumar, Sethu" uniqKey="Vijayakumar S" first="Sethu" last="Vijayakumar">Sethu Vijayakumar</name>
<affiliation>
<nlm:aff id="aff1"></nlm:aff>
</affiliation>
</author>
</analytic>
<series>
<title level="j">PLoS ONE</title>
<idno type="eISSN">1932-6203</idno>
<imprint>
<date when="2012">2012</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>In many everyday situations, humans must make precise decisions in the presence of uncertain sensory information. For example, when asked to combine information from multiple sources we often assign greater weight to the more reliable information. It has been proposed that statistical-optimality often observed in human perception and decision-making requires that humans have access to the uncertainty of both their senses and their decisions. However, the mechanisms underlying the processes of uncertainty estimation remain largely unexplored. In this paper we introduce a novel visual tracking experiment that requires subjects to continuously report their evolving perception of the mean and uncertainty of noisy visual cues over time. We show that subjects accumulate sensory information over the course of a trial to form a continuous estimate of the mean, hindered only by natural kinematic constraints (sensorimotor latency etc.). Furthermore, subjects have access to a measure of their continuous objective uncertainty, rapidly acquired from sensory information available within a trial, but limited by natural kinematic constraints and a conservative margin for error. Our results provide the first direct evidence of the continuous mean and uncertainty estimation mechanisms in humans that may underlie optimal decision making.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Kording, Kp" uniqKey="Kording K">KP Körding</name>
</author>
<author>
<name sortKey="Wolpert, Dm" uniqKey="Wolpert D">DM Wolpert</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Faisal, Aa" uniqKey="Faisal A">AA Faisal</name>
</author>
<author>
<name sortKey="Wolpert, Dm" uniqKey="Wolpert D">DM Wolpert</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Landy, Ms" uniqKey="Landy M">MS Landy</name>
</author>
<author>
<name sortKey="Maloney, Lt" uniqKey="Maloney L">LT Maloney</name>
</author>
<author>
<name sortKey="Johnston, Eb" uniqKey="Johnston E">EB Johnston</name>
</author>
<author>
<name sortKey="Young, M" uniqKey="Young M">M Young</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jacobs, Ra" uniqKey="Jacobs R">RA Jacobs</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hillis, Jm" uniqKey="Hillis J">JM Hillis</name>
</author>
<author>
<name sortKey="Watt, Sj" uniqKey="Watt S">SJ Watt</name>
</author>
<author>
<name sortKey="Landy, Ms" uniqKey="Landy M">MS Landy</name>
</author>
<author>
<name sortKey="Banks, Ms" uniqKey="Banks M">MS Banks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ernst, Mo" uniqKey="Ernst M">MO Ernst</name>
</author>
<author>
<name sortKey="Banks, Ms" uniqKey="Banks M">MS Banks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Heron, J" uniqKey="Heron J">J Heron</name>
</author>
<author>
<name sortKey="Whitaker, D" uniqKey="Whitaker D">D Whitaker</name>
</author>
<author>
<name sortKey="Mcgraw, Pv" uniqKey="Mcgraw P">PV McGraw</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wallace, Mt" uniqKey="Wallace M">MT Wallace</name>
</author>
<author>
<name sortKey="Roberson, Ge" uniqKey="Roberson G">GE Roberson</name>
</author>
<author>
<name sortKey="Hairston, Wd" uniqKey="Hairston W">WD Hairston</name>
</author>
<author>
<name sortKey="Stein, Be" uniqKey="Stein B">BE Stein</name>
</author>
<author>
<name sortKey="Vaughan, Jw" uniqKey="Vaughan J">JW Vaughan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Helbig, Hb" uniqKey="Helbig H">HB Helbig</name>
</author>
<author>
<name sortKey="Ernst, Mo" uniqKey="Ernst M">MO Ernst</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Knill, Dc" uniqKey="Knill D">DC Knill</name>
</author>
<author>
<name sortKey="Saunders, Ja" uniqKey="Saunders J">JA Saunders</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rosas, P" uniqKey="Rosas P">P Rosas</name>
</author>
<author>
<name sortKey="Wagemans, J" uniqKey="Wagemans J">J Wagemans</name>
</author>
<author>
<name sortKey="Ernst, Mo" uniqKey="Ernst M">MO Ernst</name>
</author>
<author>
<name sortKey="Wichmann, Fa" uniqKey="Wichmann F">FA Wichmann</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tassinari, H" uniqKey="Tassinari H">H Tassinari</name>
</author>
<author>
<name sortKey="Hudson, Te" uniqKey="Hudson T">TE Hudson</name>
</author>
<author>
<name sortKey="Landy, Ms" uniqKey="Landy M">MS Landy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Burr, D" uniqKey="Burr D">D Burr</name>
</author>
<author>
<name sortKey="Banks, Ms" uniqKey="Banks M">MS Banks</name>
</author>
<author>
<name sortKey="Morrone, Mc" uniqKey="Morrone M">MC Morrone</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jacobs, Ra" uniqKey="Jacobs R">RA Jacobs</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Barthelme, S" uniqKey="Barthelme S">S Barthelmé</name>
</author>
<author>
<name sortKey="Mamassian, P" uniqKey="Mamassian P">P Mamassian</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Graf, Ew" uniqKey="Graf E">EW Graf</name>
</author>
<author>
<name sortKey="Warren, Pa" uniqKey="Warren P">PA Warren</name>
</author>
<author>
<name sortKey="Maloney, Lt" uniqKey="Maloney L">LT Maloney</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kepecs, A" uniqKey="Kepecs A">A Kepecs</name>
</author>
<author>
<name sortKey="Uchida, N" uniqKey="Uchida N">N Uchida</name>
</author>
<author>
<name sortKey="Zariwala, Ha" uniqKey="Zariwala H">HA Zariwala</name>
</author>
<author>
<name sortKey="Mainen, Zf" uniqKey="Mainen Z">ZF Mainen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Alais, D" uniqKey="Alais D">D Alais</name>
</author>
<author>
<name sortKey="Burr, D" uniqKey="Burr D">D Burr</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Helbig, Hb" uniqKey="Helbig H">HB Helbig</name>
</author>
<author>
<name sortKey="Ernst, Mo" uniqKey="Ernst M">MO Ernst</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Barthelme, S" uniqKey="Barthelme S">S Barthelmé</name>
</author>
<author>
<name sortKey="Mamassian, P" uniqKey="Mamassian P">P Mamassian</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Resulaj, A" uniqKey="Resulaj A">A Resulaj</name>
</author>
<author>
<name sortKey="Kiani, R" uniqKey="Kiani R">R Kiani</name>
</author>
<author>
<name sortKey="Wolpert, Dm" uniqKey="Wolpert D">DM Wolpert</name>
</author>
<author>
<name sortKey="Shadlen, Mn" uniqKey="Shadlen M">MN Shadlen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gepshtein, S" uniqKey="Gepshtein S">S Gepshtein</name>
</author>
<author>
<name sortKey="Banks, Ms" uniqKey="Banks M">MS Banks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gepshtein, S" uniqKey="Gepshtein S">S Gepshtein</name>
</author>
<author>
<name sortKey="Burge, J" uniqKey="Burge J">J Burge</name>
</author>
<author>
<name sortKey="Ernst, Mo" uniqKey="Ernst M">MO Ernst</name>
</author>
<author>
<name sortKey="Banks, Ms" uniqKey="Banks M">MS Banks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nassar, Mr" uniqKey="Nassar M">MR Nassar</name>
</author>
<author>
<name sortKey="Wilson, Rc" uniqKey="Wilson R">RC Wilson</name>
</author>
<author>
<name sortKey="Heasly, B" uniqKey="Heasly B">B Heasly</name>
</author>
<author>
<name sortKey="Gold, Ji" uniqKey="Gold J">JI Gold</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bjorkman, M" uniqKey="Bjorkman M">M Björkman</name>
</author>
<author>
<name sortKey="Juslin, P" uniqKey="Juslin P">P Juslin</name>
</author>
<author>
<name sortKey="Winman, A" uniqKey="Winman A">A Winman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brenner, E" uniqKey="Brenner E">E Brenner</name>
</author>
<author>
<name sortKey="Smeets, Jbj" uniqKey="Smeets J">JBJ Smeets</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Trommersh User, J" uniqKey="Trommersh User J">J Trommershäuser</name>
</author>
<author>
<name sortKey="Maloney, Lt" uniqKey="Maloney L">LT Maloney</name>
</author>
<author>
<name sortKey="Landy, Ms" uniqKey="Landy M">MS Landy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schot, Wd" uniqKey="Schot W">WD Schot</name>
</author>
<author>
<name sortKey="Brenner, E" uniqKey="Brenner E">E Brenner</name>
</author>
<author>
<name sortKey="Smeets, Jbj" uniqKey="Smeets J">JBJ Smeets</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Beers, Rj" uniqKey="Van Beers R">RJ van Beers</name>
</author>
<author>
<name sortKey="Sittig, Ac" uniqKey="Sittig A">AC Sittig</name>
</author>
<author>
<name sortKey="Gon, Jj" uniqKey="Gon J">JJ Gon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Oruc, I" uniqKey="Oruc I">I Oruç</name>
</author>
<author>
<name sortKey="Maloney, Lt" uniqKey="Maloney L">LT Maloney</name>
</author>
<author>
<name sortKey="Landy, Ms" uniqKey="Landy M">MS Landy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kiani, R" uniqKey="Kiani R">R Kiani</name>
</author>
<author>
<name sortKey="Shadlen, Mn" uniqKey="Shadlen M">MN Shadlen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pouget, A" uniqKey="Pouget A">A Pouget</name>
</author>
<author>
<name sortKey="Dayan, P" uniqKey="Dayan P">P Dayan</name>
</author>
<author>
<name sortKey="Zemel, R" uniqKey="Zemel R">R Zemel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Saunders, I" uniqKey="Saunders I">I Saunders</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lawson, C" uniqKey="Lawson C">C Lawson</name>
</author>
<author>
<name sortKey="Hanson, R" uniqKey="Hanson R">R Hanson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Coleman, T" uniqKey="Coleman T">T Coleman</name>
</author>
<author>
<name sortKey="Li, Y" uniqKey="Li Y">Y Li</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Coleman, T" uniqKey="Coleman T">T Coleman</name>
</author>
<author>
<name sortKey="Li, Y" uniqKey="Li Y">Y Li</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">PLoS One</journal-id>
<journal-id journal-id-type="iso-abbrev">PLoS ONE</journal-id>
<journal-id journal-id-type="publisher-id">plos</journal-id>
<journal-id journal-id-type="pmc">plosone</journal-id>
<journal-title-group>
<journal-title>PLoS ONE</journal-title>
</journal-title-group>
<issn pub-type="epub">1932-6203</issn>
<publisher>
<publisher-name>Public Library of Science</publisher-name>
<publisher-loc>San Francisco, USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">22761657</article-id>
<article-id pub-id-type="pmc">3382620</article-id>
<article-id pub-id-type="publisher-id">PONE-D-11-22744</article-id>
<article-id pub-id-type="doi">10.1371/journal.pone.0037547</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Research Article</subject>
</subj-group>
<subj-group subj-group-type="Discipline-v2">
<subject>Biology</subject>
<subj-group>
<subject>Neuroscience</subject>
<subj-group>
<subject>Cognitive Neuroscience</subject>
<subj-group>
<subject>Cognition</subject>
<subject>Decision Making</subject>
<subject>Motor Reactions</subject>
</subj-group>
</subj-group>
<subj-group>
<subject>Computational Neuroscience</subject>
<subj-group>
<subject>Sensory Systems</subject>
</subj-group>
</subj-group>
<subj-group>
<subject>Sensory Perception</subject>
<subj-group>
<subject>Psychophysics</subject>
</subj-group>
</subj-group>
<subj-group>
<subject>Sensory Systems</subject>
<subj-group>
<subject>Visual System</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v2">
<subject>Mathematics</subject>
<subj-group>
<subject>Probability Theory</subject>
</subj-group>
<subj-group>
<subject>Statistics</subject>
<subj-group>
<subject>Confidence Intervals</subject>
</subj-group>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Continuous Evolution of Statistical Estimators for Optimal Decision-Making</article-title>
<alt-title alt-title-type="running-head">Continuous Estimation for Optimal Decision-Making</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Saunders</surname>
<given-names>Ian</given-names>
</name>
<xref ref-type="aff" rid="aff1"></xref>
<xref ref-type="corresp" rid="cor1">
<sup>*</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Vijayakumar</surname>
<given-names>Sethu</given-names>
</name>
<xref ref-type="aff" rid="aff1"></xref>
</contrib>
</contrib-group>
<aff id="aff1">
<addr-line>Institute of Perception, Action and Behaviour, School of Informatics, University of Edinburgh, Edinburgh, United Kingdom</addr-line>
</aff>
<contrib-group>
<contrib contrib-type="editor">
<name>
<surname>Ernst</surname>
<given-names>Marc O.</given-names>
</name>
<role>Editor</role>
<xref ref-type="aff" rid="edit1"></xref>
</contrib>
</contrib-group>
<aff id="edit1">Bielefeld University, Germany</aff>
<author-notes>
<corresp id="cor1">* E-mail:
<email>i.saunders@sms.ed.ac.uk</email>
</corresp>
<fn fn-type="con">
<p>Drafting the article and revising it critically for important intellectual content: IS SV. Conceived and designed the experiments: IS SV. Performed the experiments: IS. Analyzed the data: IS. Wrote the paper: IS.</p>
</fn>
</author-notes>
<pub-date pub-type="collection">
<year>2012</year>
</pub-date>
<pub-date pub-type="epub">
<day>25</day>
<month>6</month>
<year>2012</year>
</pub-date>
<volume>7</volume>
<issue>6</issue>
<elocation-id>e37547</elocation-id>
<history>
<date date-type="received">
<day>14</day>
<month>11</month>
<year>2011</year>
</date>
<date date-type="accepted">
<day>25</day>
<month>4</month>
<year>2012</year>
</date>
</history>
<permissions>
<copyright-statement>Saunders, Vijayakumar. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.</copyright-statement>
<copyright-year>2012</copyright-year>
</permissions>
<abstract>
<p>In many everyday situations, humans must make precise decisions in the presence of uncertain sensory information. For example, when asked to combine information from multiple sources we often assign greater weight to the more reliable information. It has been proposed that statistical-optimality often observed in human perception and decision-making requires that humans have access to the uncertainty of both their senses and their decisions. However, the mechanisms underlying the processes of uncertainty estimation remain largely unexplored. In this paper we introduce a novel visual tracking experiment that requires subjects to continuously report their evolving perception of the mean and uncertainty of noisy visual cues over time. We show that subjects accumulate sensory information over the course of a trial to form a continuous estimate of the mean, hindered only by natural kinematic constraints (sensorimotor latency etc.). Furthermore, subjects have access to a measure of their continuous objective uncertainty, rapidly acquired from sensory information available within a trial, but limited by natural kinematic constraints and a conservative margin for error. Our results provide the first direct evidence of the continuous mean and uncertainty estimation mechanisms in humans that may underlie optimal decision making.</p>
</abstract>
<counts>
<page-count count="14"></page-count>
</counts>
</article-meta>
</front>
<body>
<sec id="s1">
<title>Introduction</title>
<p>Uncertainty is a fundamental property of the world, as any avid butterfly collector will attest. To anticipate the fluttering flight of
<italic>papilionoidea</italic>
, one must wait patiently, accumulating evidence about the underlying statistics of its rapid and unpredictable movements. Success is only achieved when one is prepared with a large enough net to accommodate the variability in both the butterfly’s trajectory and the movement of one’s arm.</p>
<p>To handle the inevitable uncertainty in the world, people make decisions based on previous experience, as well as statistical information acquired directly from stimuli. For example, the statistics of the environment govern our perceptions and our decision making processes when we reach for targets
<xref ref-type="bibr" rid="pone.0037547-Krding1">[1]</xref>
,
<xref ref-type="bibr" rid="pone.0037547-Faisal1">[2]</xref>
, interpret visual scenes
<xref ref-type="bibr" rid="pone.0037547-Landy1">[3]</xref>
<xref ref-type="bibr" rid="pone.0037547-Hillis1">[5]</xref>
and combine multiple sensory modalities
<xref ref-type="bibr" rid="pone.0037547-Ernst1">[6]</xref>
<xref ref-type="bibr" rid="pone.0037547-Helbig1">[9]</xref>
. This growing body of psychophysical experiments supports the proposition that some aspects of perception are statistically-optimal, in the sense that decisions made are often quantitatively indistinguishable from a
<italic>maximum-likelihood</italic>
ideal observer (although some studies are inconsistent with this theory
<xref ref-type="bibr" rid="pone.0037547-Knill1">[10]</xref>
<xref ref-type="bibr" rid="pone.0037547-Burr1">[13]</xref>
). To achieve optimality when combining multiple sensory cues, the nervous system requires an estimate of the reliability of the sensory information
<xref ref-type="bibr" rid="pone.0037547-Landy1">[3]</xref>
,
<xref ref-type="bibr" rid="pone.0037547-Jacobs2">[14]</xref>
. However, despite its fundamental importance to the theory, the question of
<italic>how</italic>
humans gather the relevant statistical information to make their optimal decisions remains largely unexplored
<xref ref-type="bibr" rid="pone.0037547-Barthelm1">[15]</xref>
.</p>
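The maximum-likelihood integration invoked here reduces, for independent Gaussian cues, to weighting each cue by its reliability (inverse variance). A minimal sketch in Python; the function name and example values are ours, not from the paper:

```python
import numpy as np

def ml_combine(means, variances):
    """Maximum-likelihood combination of independent Gaussian cues:
    each cue is weighted by its reliability (1/variance)."""
    means = np.asarray(means, dtype=float)
    reliabilities = 1.0 / np.asarray(variances, dtype=float)
    weights = reliabilities / reliabilities.sum()  # normalised cue weights
    mu_hat = weights @ means                       # combined mean estimate
    var_hat = 1.0 / reliabilities.sum()            # variance of the estimate
    return mu_hat, var_hat

# A reliable cue (variance 1) dominates an unreliable one (variance 4):
print(ml_combine([0.0, 2.0], [1.0, 4.0]))  # -> (0.4, 0.8)
```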
<p>The theory of statistical optimality in the brain relies crucially on the fact that humans must somehow accumulate statistical information from unpredictable stimuli. For example they may need to estimate not only the
<italic>mean</italic>
, but the expected variability in this estimate of the mean (or their
<italic>confidence</italic>
). Recently, it was shown that humans are not only able to predict the position of objects moving along random or noisy trajectories, but also that they are able to report a level of confidence in this prediction
<xref ref-type="bibr" rid="pone.0037547-Graf1">[16]</xref>
. This is not a uniquely human capacity: rats are also capable of uncertainty-based decisions
<xref ref-type="bibr" rid="pone.0037547-Kepecs1">[17]</xref>
. It has been shown that
<italic>subjective</italic>
perception of uncertainty is closely related to the
<italic>objective</italic>
uncertainty (the measured variability in performance)
<xref ref-type="bibr" rid="pone.0037547-Barthelm1">[15]</xref>
, indicating that subjects are, indeed, acutely aware of the uncertainty in their decisions.</p>
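Both quantities discussed here (the mean and the expected variability of the estimate of the mean) can be accumulated online as samples arrive. A sketch using Welford's algorithm; the paper does not commit to any particular accumulation scheme, so this is one plausible instantiation:

```python
import math

class RunningEstimator:
    """Accumulates noisy samples, tracking the running mean and the
    uncertainty of that mean (its standard error)."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n          # running mean
        self.m2 += delta * (x - self.mean)   # running sum of squared deviations

    @property
    def sem(self):
        """Standard error of the mean: shrinks as more evidence arrives."""
        if self.n < 2:
            return float('inf')
        return math.sqrt(self.m2 / (self.n - 1) / self.n)
```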
<p>The forced-choice paradigm is classically used to compare decisions under uncertainty (e.g.
<xref ref-type="bibr" rid="pone.0037547-Ernst1">[6]</xref>
,
<xref ref-type="bibr" rid="pone.0037547-Alais1">[18]</xref>
). However, it has been argued that uncertainty may indirectly modulate behaviour in such designs (see
<xref ref-type="bibr" rid="pone.0037547-Helbig2">[19]</xref>
and discussion), and a direct approach is preferred
<xref ref-type="bibr" rid="pone.0037547-Graf1">[16]</xref>
. In this study we focus on a continuous decision-making task in which we require subjects to actively report their estimates of the mean and confidence of uncertain visual stimuli. We will ask the question of how these estimates are formed from the evidence provided, specifically addressing how the visual cues that comprise the stimulus are integrated to form a robust percept of its mean and variance.</p>
<p>To achieve these aims we present a novel experimental paradigm that requires subjects to explicitly track the mean and variance of noise-perturbed visual cues. We control the arrival of noisy visual stimuli over time, allowing us to monitor the behavioural consequences as sensory evidence accumulates. In two variants of our “butterfly catching” task we ask subjects to (i) track the
<italic>mean</italic>
of “fluttering” visual cues (
<italic>viz.</italic>
localising a butterfly); and (ii) indicate the
<italic>range</italic>
in which they believe the mean of the cues to lie (
<italic>viz.</italic>
choosing an appropriate size of net).</p>
<p>From trial-to-trial we modulate the underlying distribution of the cues, allowing us to observe the evolution of mean and confidence estimates with respect to the visual cues responsible for their formation. Using a sensorimotor model we show the extent to which the observed trajectories are statistically-optimal under the kinematic limitations of human motion, while computation of the weights allocated to each visual cue over time allows us to expose the mechanisms of sensory integration underlying the processes of continuous estimation.</p>
</sec>
<sec id="s2">
<title>Results</title>
<sec id="s2a">
<title>Experimental Paradigm</title>
<p>In this paper we introduce the “butterfly catching” paradigm, illustrated in
<xref ref-type="fig" rid="pone-0037547-g001">figure 1</xref>
. Subjects are required to judge the statistical properties of a “fluttering” temporal sequence of visual stimuli which are projected onto the line of their left forearm. Subjects localise the stimuli with a variable-sized “net”, indicated by lines projected from the forefinger and thumb of their right hand. A trial is successful if the mean of the stimuli lies within the aperture of the net, in which case points are awarded at the end of the trial. For a complete description of these details see
<italic>
<xref ref-type="sec" rid="s4">Materials and Methods</xref>
</italic>
.</p>
<fig id="pone-0037547-g001" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0037547.g001</object-id>
<label>Figure 1</label>
<caption>
<title>Experiment Setup.</title>
<p>Illustration of the
<italic>butterfly-catching experiment</italic>
setup. (
<bold>A</bold>
)
<bold>Projection Rig.</bold>
Subjects placed their left forearm under a mirror, and used their right hand to localise 2D visual stimuli that appeared at a random
<italic>target location,</italic>
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e001.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, along the forearm. (
<bold>B</bold>
)
<bold>Cursor Control.</bold>
Using a mirror aligned with a rear-projection screen we presented visual feedback onto the horizontal plane of the arm. We used a 3D magnetic tracking system to record forearm and finger positions. Finger positions were represented by a 2D visual cursor and the arm by a target line. Visual cues (top half of figure) were aligned veridically with tactile and proprioceptive cues (bottom half of figure). (
<bold>C</bold>
)
<bold>Manipulations.</bold>
A total of 15 visual cues were presented in each trial. Each cue, lasting 250 ms, was chosen from an underlying distribution with mean
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e002.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and variance
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e003.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. On each trial we randomly varied
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e004.jpg" mimetype="image"></inline-graphic>
</inline-formula>
to manipulate the uncertainty of the cue distribution. On each trial we randomly perturbed the mean of one-third of the cues by
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e005.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(and shifted the remaining cues by
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e006.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, preserving the overall mean). In the figure we show a negative perturbation of the second block, exaggerated in magnitude for illustrative purposes. (
<bold>D</bold>
)
<bold>Tasks.</bold>
Subjects performed two tasks: (i) In Task 1, subjects were asked to estimate the
<italic>mean</italic>
of the stimuli with the position of their right hand, indicated by a fixed-aperture visual cursor; (ii) in Task 2 they were asked to indicate the
<italic>range</italic>
in which they believed the mean to lie with the spacing of their thumb and forefinger, indicated by a variable aperture visual cursor. (
<bold>E</bold>
)
<bold>Visual Cues.</bold>
Each visual cue is composed of a sequence of 5 random dot clouds, one of which is shown for illustration.</p>
</caption>
<graphic xlink:href="pone.0037547.g001"></graphic>
</fig>
<p>The fluttering visual cues are a sequence of blurry dot-clouds, with cloud locations distributed in time according to a pseudo-Normal distribution with mean
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e007.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and variance
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e008.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(see
<xref ref-type="fig" rid="pone-0037547-g001">figure 1C</xref>
and
<italic>
<xref ref-type="sec" rid="s4">Materials and Methods</xref>
</italic>
). The perceived uncertainty of clusters of noisy visual samples changes as a predictable function of their number
<xref ref-type="bibr" rid="pone.0037547-Tassinari1">[12]</xref>
, but in the present study the noisy clusters are distributed in time rather than space so that we can examine the continuously evolving perception of the mean and uncertainty of the stimuli as evidence arrives over time.</p>
<p>In Task 1 we examine subjects’ ability to estimate the mean,
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e009.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, of the visual stimuli using a cursor with small fixed aperture (
<xref ref-type="fig" rid="pone-0037547-g001">figure 1D</xref>
,
<italic>left</italic>
). We modulate the variance of the visual cues,
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e010.jpg" mimetype="image"></inline-graphic>
</inline-formula>
from trial-to-trial. The maximum score is attained when subjects navigate to the true mean of the stimuli.</p>
<p>In Task 2 subjects must instead indicate the range of values in which they believe the mean to lie, using a variable cursor aperture (
<xref ref-type="fig" rid="pone-0037547-g001">figure 1D</xref>
,
<italic>right</italic>
), with width determined by the distance between the thumb and forefinger. From Task 1 we establish a linear mapping from
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e011.jpg" mimetype="image"></inline-graphic>
</inline-formula>
to mean endpoint error to provide performance feedback in Task 2 that forces subjects to report their
<italic>objective uncertainty</italic>
(by optimising the trade-off between accuracy and point-scoring). We assume subjects can acquire this mapping during the 450 trials preceding Task 2.</p>
<p>Task 2 requires subjects to report their mean and confidence estimates simultaneously, providing a unified paradigm to evaluate the mechanisms underlying the formation of these statistical estimators. To expose these mechanisms we manipulate the distributions of the stimuli from trial-to-trial in two ways: (i) we modulate the variance of the visual cues,
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e012.jpg" mimetype="image"></inline-graphic>
</inline-formula>
; and (ii) we add perturbations to subsets of the cues, (block
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e013.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, direction
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e014.jpg" mimetype="image"></inline-graphic>
</inline-formula>
).
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e015.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e016.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e017.jpg" mimetype="image"></inline-graphic>
</inline-formula>
are chosen randomly from trial-to-trial.</p>
<p>In manipulating the cue variance (
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e018.jpg" mimetype="image"></inline-graphic>
</inline-formula>
<italic>low</italic>
,
<italic>medium</italic>
and
<italic>high</italic>
) we hypothesised that subjects would estimate the mean and (based on
<xref ref-type="bibr" rid="pone.0037547-Barthelm1">[15]</xref>
,
<xref ref-type="bibr" rid="pone.0037547-Barthelm2">[20]</xref>
) report the objective variability in their performance. An increase in cue variance should be reflected both in a broader distribution of errors in localising the mean and in decreased confidence.</p>
<p>To induce perturbations we divided the sequence of cues on a given trial into three blocks (
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e019.jpg" mimetype="image"></inline-graphic>
</inline-formula>
<italic>early</italic>
,
<italic>middle</italic>
,
<italic>late</italic>
) and shifted cues in a given block by
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e020.jpg" mimetype="image"></inline-graphic>
</inline-formula>
in a chosen direction (
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e021.jpg" mimetype="image"></inline-graphic>
</inline-formula>
<italic>negative</italic>
,
<italic>positive, neutral</italic>
). All other cues were shifted in the opposite direction by
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e022.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, so that the overall mean remained the same. We hypothesised that subjects would integrate the cues over time to compute mean and confidence estimates. By inducing within-trial cue perturbations we can infer the contribution of each cue in the sequence to the final decision.</p>
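Concretely, the balancing works because one block of 5 cues is shifted by Δ while the remaining 10 cues are shifted by Δ/2 in the opposite direction, so the shifts cancel (5Δ − 10·Δ/2 = 0). A sketch of such a trial generator, assuming a plain Normal sampler (the paper says “pseudo-Normal”) and our own parameter names:

```python
import numpy as np

def make_trial(mu, sigma, block, direction, delta, rng=None):
    """15 cues around mu with spread sigma; the 5 cues of one block
    (0=early, 1=middle, 2=late) are shifted by direction*delta and the
    other 10 cues by -direction*delta/2, preserving the overall mean.
    direction is -1, 0 or +1 (negative, neutral, positive)."""
    rng = rng or np.random.default_rng()
    cues = rng.normal(mu, sigma, 15)
    shift = np.full(15, -direction * delta / 2.0)
    shift[5 * block:5 * block + 5] = direction * delta
    return cues + shift
```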
<p>We found that subjects were equally good at mean estimation in both tasks, shown in
<italic>
<xref ref-type="supplementary-material" rid="pone.0037547.s001">Figure S1</xref>
</italic>
. To compare the two tasks (excluding trials with perturbations) we conducted a within-subjects analysis of variance (ANOVA) on the mean endpoint error (the mean absolute deviation of the final mean estimate from the target), with a two-level factor of task (
<italic>Task 1</italic>
and
<italic>Task 2</italic>
) and three-level factor of
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e023.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(
<italic>low</italic>
,
<italic>medium</italic>
,
<italic>high</italic>
). This revealed a significant main effect of
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e024.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e025.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e026.jpg" mimetype="image"></inline-graphic>
</inline-formula>
) but no main effect of task (
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e027.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e028.jpg" mimetype="image"></inline-graphic>
</inline-formula>
) and no interaction (
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e029.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e030.jpg" mimetype="image"></inline-graphic>
</inline-formula>
). The significant effect of
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e031.jpg" mimetype="image"></inline-graphic>
</inline-formula>
confirms that the variance manipulation increases the task difficulty as expected. The absence of a task effect indicates that Task 1 performance variability is a reliable predictor of Task 2 performance variability, justifying the score function used in Task 2.</p>
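The within-subjects ANOVA reported here can be reproduced on long-format trial data, for example with statsmodels; the column names and data file below are hypothetical:

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# One row per trial: subject id, task (1 or 2), sigma (low/medium/high)
# and the endpoint error; aggregate_func='mean' collapses repeated
# trials to one value per subject/cell, as repeated-measures ANOVA requires.
df = pd.read_csv('endpoint_errors.csv')  # hypothetical data file
res = AnovaRM(df, depvar='endpoint_error', subject='subject',
              within=['task', 'sigma'], aggregate_func='mean').fit()
print(res.anova_table)  # F and p for task, sigma, and their interaction
```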
</sec>
<sec id="s2b">
<title>Continuous Estimation of the Mean</title>
<p>In
<xref ref-type="fig" rid="pone-0037547-g002">figure 2</xref>
we present the resulting trajectories for a typical subject performing Task 2.
<xref ref-type="fig" rid="pone-0037547-g002">Figure 2A</xref>
shows four example trajectories which illustrate the consequence of early, middle and late-onset perturbations on decisions. From the smooth trajectories it appears that subjects gradually accumulate sensory evidence, responding (after a delay) to perturbations. Though there is high variability across trials (
<xref ref-type="fig" rid="pone-0037547-g002">figure 2C</xref>
) we observe distinct trajectories for the different experimental manipulations (
<xref ref-type="fig" rid="pone-0037547-g002">figure 2B</xref>
).</p>
<fig id="pone-0037547-g002" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0037547.g002</object-id>
<label>Figure 2</label>
<caption>
<title>Data for a single subject in Task 2.</title>
<p>(
<bold>A</bold>
)
<bold>Typical trajectories</bold>
for four experimental conditions. On each trial the subject’s estimates of the mean (solid line) and confidence (dotted line) are affected by the sequence of cues (black dots). From left to right we plot the no perturbation, early, middle and late-onset perturbation conditions. Perturbation of different blocks (shaded and with arrow) results in corresponding trajectory deviations. (
<bold>B</bold>
)
<bold>Average trajectories for one subject.</bold>
We plot the average trajectories for one subject for negative (blue), zero (purple) and positive (red) perturbations, for each
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e032.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. The averages for each condition (darker lines) highlight the main trends. (
<bold>C</bold>
)
<bold>Endpoint Variability.</bold>
There is a high level of variability in the trajectories in B, though much of this may be explained by the added variance and perturbations. We plot the mean (solid line)
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e033.jpg" mimetype="image"></inline-graphic>
</inline-formula>
the variance (dotted line) of the
<italic>endpoint of the trajectory</italic>
for each experimental condition to illustrate this. Late-onset perturbations result in greater endpoint errors and endpoint variability scales with
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e034.jpg" mimetype="image"></inline-graphic>
</inline-formula>
.</p>
</caption>
<graphic xlink:href="pone.0037547.g002"></graphic>
</fig>
<p>In
<xref ref-type="fig" rid="pone-0037547-g003">figure 3</xref>
we present the results averaged across subjects. The distinguishing features of the empirical trajectories (
<xref ref-type="fig" rid="pone-0037547-g003">figure 3A</xref>
) are (i) high initial variability (arrow
<italic>a</italic>
); (ii) trajectory deviations shortly after the onset of the perturbation (arrows
<italic>b</italic>
,
<italic>d</italic>
and
<italic>f</italic>
); (iii) spontaneous changes in direction (i.e. inflexions, arrows
<italic>c</italic>
and
<italic>e</italic>
); and (iv) endpoint errors (deviations of the final estimate from the target,
<xref ref-type="fig" rid="pone-0037547-g003">figures 3B, 3C and 3D</xref>
). From the interval of the standard error across subjects it is apparent that these phenomena are robust. Note that in
<xref ref-type="fig" rid="pone-0037547-g003">figure 3A</xref>
the trajectories are centred on the true target location (the average of all cues in the sequence, including those which are perturbed). Recall that the perturbation of a given block is balanced by perturbations of half-magnitude of the remaining blocks in order to preserve the overall mean. This results in deviations that oppose the larger perturbation prior to its onset and follow the larger perturbation after its onset (for example, note that a rightward perturbation in block 3 is balanced by a leftward perturbation of blocks 1 and 2). Responses to perturbations demonstrate the within-trial contribution of cues to perception.</p>
<fig id="pone-0037547-g003" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0037547.g003</object-id>
<label>Figure 3</label>
<caption>
<title>Continuous mean estimation data grouped across subjects.</title>
<p>(
<bold>A</bold>
)
<bold>Average Trajectories.</bold>
We show the average empirical trajectories across subjects compared to our model predictions. Trajectories are computed for each subject by averaging over all trials for each condition. From left to right we plot the no perturbation, early, middle and late-onset perturbation conditions (shaded). The empirical trajectories for negative (blue), zero (purple) and positive (red) perturbations are plotted for each value of
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e035.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(labelled). Each trajectory shows the mean across subjects
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e036.jpg" mimetype="image"></inline-graphic>
</inline-formula>
the standard error of the mean (SEM). Key features of the empirical data include cue-induced deviations (arrows b, d and f) and subsequent corrections as further evidence arrives (arrows c and e). Note the qualitative and quantitative nature of the model fit to the data (dashed line). (
<bold>B</bold>
)
<bold>Endpoint mean and variability.</bold>
At the end of each trial the position of the cursor represents subjects’ final estimate of the
<italic>mean</italic>
, and the width of the cursor represents subjects’ final estimate of the
<italic>confidence</italic>
. For each of the experimental conditions we plot the mean across subjects
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e037.jpg" mimetype="image"></inline-graphic>
</inline-formula>
SEM of the left bound of the confidence estimate, the mean estimate and the right bound of the confidence estimate. Subjects show increasing confidence windows for larger values of
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e038.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(from top to bottom) and show deviations from the target as a result of the perturbations (red and blue). (
<bold>C</bold>
)
<bold>Endpoint Error.</bold>
For each of the experimental conditions we show how the final deviation of the mean from the target is a predictable function of variance
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e039.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, perturbation magnitude
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e040.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and block
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e041.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. The model makes a reasonable quantitative fit for all conditions, though note that it does not capture the asymmetry in the empirical data (which is slightly positively biased). (
<bold>D</bold>
)
<bold>Absolute Endpoint Error.</bold>
The final absolute deviation of the mean from the target captures the average error in the task. This error increases with
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e042.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and with perturbations, the magnitude of which is also explained by the model.</p>
</caption>
<graphic xlink:href="pone.0037547.g003"></graphic>
</fig>
<p>We devised a model of motor behaviour to account for the latencies observed in decisions (see
<italic>
<xref ref-type="sec" rid="s4">Materials and Methods</xref>
</italic>
). The model observer integrates the visual cues in a statistically optimal fashion (by computing the maximum likelihood mean estimate). This estimate manifests itself through the movement of the cursor, which we constrain by introducing three parameters, namely sensorimotor latency,
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e043.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, maximum speed,
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e044.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and momentum
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e045.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(see
<italic>
<xref ref-type="sec" rid="s4">Materials and Methods</xref>
</italic>
). This model accounts both qualitatively and quantitatively for the key features of the empirical data, such as the magnitude and timing of direction changes, and the magnitude of endpoint deviation and endpoint error (
<xref ref-type="fig" rid="pone-0037547-g003">figure 3C and 3D</xref>
). The model parameters were optimised per-subject to ensure the best possible fit to the data (see
<italic>
<xref ref-type="sec" rid="s4">Materials and Methods</xref>
</italic>
), but these parameters are global to all conditions and have no capacity to explain the role of individual cues on decisions, nor the effects of cue perturbations or variance (see
<italic>
<xref ref-type="sec" rid="s3">Discussion</xref>
</italic>
). In
<xref ref-type="fig" rid="pone-0037547-g003">figure 3C</xref>
we see that the empirical data is biased in the positive direction. For the unperturbed condition, the model predicts an average deviation of zero but the empirical data shows a +6 pixel deviation. It is unlikely that this small systematic error is due to an alignment issue between the visual stimuli and the hand, as the apparatus was carefully calibrated and the effects of visual-spatial mismatch on task performance are expected to be minimal
<xref ref-type="bibr" rid="pone.0037547-Helbig2">[19]</xref>
. We suspect that the systematic error may be due to subjects’ preference for certain limb configurations and is an unavoidable consequence of the task. Nevertheless, the timing and magnitude of the key features of the empirical data are accurately predicted by our model. This indicates that subjects can form a continuous estimate of the mean which evolves over time as evidence arrives.</p>
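A sketch of such a model observer: the maximum-likelihood mean estimate (the running mean of the cues seen so far) is tracked by a cursor subject to a sensorimotor latency, a maximum speed and momentum. The paper names these three parameters but not the exact update rule, so the dynamics below are our assumption:

```python
import numpy as np

def simulate_cursor(cue_times, cues, dt=0.01, t_end=3.75,
                    latency=0.3, v_max=2.0, momentum=0.8):
    """Cursor tracks the running mean of cues seen up to t - latency,
    with momentum-smoothed velocity clipped at v_max (15 cues x 250 ms
    gives the 3.75 s trial; parameter values are illustrative)."""
    t = np.arange(0.0, t_end, dt)
    x = np.zeros_like(t)  # cursor position over time
    v = 0.0               # cursor velocity
    for i in range(1, len(t)):
        seen = cues[cue_times <= t[i] - latency]
        target = seen.mean() if seen.size else x[i - 1]
        desired = (target - x[i - 1]) / dt           # velocity toward target
        v = momentum * v + (1 - momentum) * desired  # momentum smoothing
        v = np.clip(v, -v_max, v_max)                # speed limit
        x[i] = x[i - 1] + v * dt
    return t, x
```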
</sec>
<sec id="s2c">
<title>Mechanisms of Temporal Cue Integration</title>
<p>To understand the mechanisms by which subjects estimate the mean we can infer the contribution of each of the visual cues to the evolving estimates. These are computed per-subject by linearly regressing the cue locations to the decision made at each time-step, over all trials (for full details see
<italic>
<xref ref-type="sec" rid="s4">Materials and Methods</xref>
</italic>
).</p>
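In essence, this regression solves, at each time-step, a least-squares problem mapping the trial's cue locations (plus a constant term for the systematic component discussed below) to the decision at that time-step, pooled over trials. A sketch with our own array names; the paper's estimator may include constraints or regularisation omitted here:

```python
import numpy as np

def cue_weights(cues, trajectories):
    """cues: (n_trials, n_cues) cue locations per trial;
    trajectories: (n_trials, n_steps) decisions over time.
    Returns W (n_steps, n_cues) cue weights and bias (n_steps,),
    the systematic component, fitted jointly by least squares."""
    n_trials = cues.shape[0]
    X = np.hstack([cues, np.ones((n_trials, 1))])  # cues + constant column
    coef, *_ = np.linalg.lstsq(X, trajectories, rcond=None)
    return coef[:-1].T, coef[-1]
```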
<p>
<xref ref-type="fig" rid="pone-0037547-g004">Figure 4</xref>
shows the resultant cue weights for the empirical trajectories (
<xref ref-type="fig" rid="pone-0037547-g004">figures 4A–4D</xref>
) and the model trajectories (4E–4H). Our regression method assigns a weight to each cue (including cues that have not yet been observed), quantifying its contribution to the decision at each time step. The weight assigned to future cues provides useful validation that the regression method is successfully discriminating the contributions of each cue and not fitting noise. During the initial 0.5 s of the trajectory we see that causality can
<italic>not</italic>
be reliably discerned, and therefore all cues (including future ones) are equally weighted (
<xref ref-type="fig" rid="pone-0037547-g004">figure 4C</xref>
). However, after this brief initial stage we see that the weight assigned to future cues declines, indicating that empirical decisions are correctly attributed to only the observed cues.</p>
<fig id="pone-0037547-g004" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0037547.g004</object-id>
<label>Figure 4</label>
<caption>
<title>Mean Estimation Cue Weight Evolution.</title>
<p>To measure the evolution of weights assigned to each visual cue we perform a linear regression of the position of each cue in the sequence to the measured trajectory, using data over all trajectories for each subject (see
<italic>
<xref ref-type="sec" rid="s4">Materials and Methods</xref>
</italic>
). In this figure we illustrate the match between the empirically observed weights and the model predictions. (
<bold>A</bold>
)
<bold>Empirical Data Integration Window.</bold>
At each time-step in the trial we infer the weight assigned to each cue in the sequence. These weights define a window of cue integration which changes over time as evidence arrives. We plot the weights assigned to the cues seen so far (solid lines)
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e046.jpg" mimetype="image"></inline-graphic>
</inline-formula>
SEM across subjects (shaded), omitting weights assigned to future cues for clarity (but see C and main text). Coloured arrows indicate the time-step at which the corresponding integration window applies. At all time steps we see that the observed cues are given approximately
<italic>equal</italic>
weight, with the exception of a 0.5 s time lag. This weight equality is indicative of optimal integration (as we see in E). (
<bold>B</bold>
)
<bold>Empirical Data Cue Evolution.</bold>
In an alternative visualisation of A we plot the weight allocated to each cue (solid line)
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e047.jpg" mimetype="image"></inline-graphic>
</inline-formula>
SEM (shaded) as it evolves over the time-course of a trial. Each curve corresponds to the cue arising at the time marked by the corresponding coloured arrow. For clarity we do not show the weight allocated to the cue prior to it being seen (but see C and main text). This plot reveals that shortly after being seen, each cue’s weight suddenly increases as it contributes to the estimate, settling at a weight that is the same across all cues. These weight profiles are indicative of optimal integration (as we see in F). (
<bold>C</bold>
)
<bold>Empirical Weights.</bold>
The weight matrix
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e048.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, excluding the systematic component, captures the evolution of cue weights over time (see
<italic>
<xref ref-type="sec" rid="s4">Materials and Methods</xref>
</italic>
). When visualised in this way, using colour to represent cue weight, we can see the initial response delay and the evolution of cue combination, as summarised in A and B. The regression method cannot establish the cause of the initial 0.5 seconds of the trajectory, indicated by equal weights assigned to all cues (including future cues). This weight matrix is indicative of optimal integration (as we see for the optimal matrix
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e049.jpg" mimetype="image"></inline-graphic>
</inline-formula>
in G). (
<bold>D</bold>
)
<bold>Empirical Systematic Bias.</bold>
In computing the regression of cue to decision we allow for a systematic component to capture the variability in the trajectory that is not explained by the cue weights. We observe empirically a non-zero systematic bias in the positive direction, especially for early time steps. Our optimal model predicts the initial bias (as we see in H), but the overall bias observed is sub-optimal. We believe this to be an unavoidable consequence of the configuration of the experiment (see text). (
<bold>E-F</bold>
)
<bold>Model Predictions</bold>
for comparison, with three parameters (
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e050.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e051.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e052.jpg" mimetype="image"></inline-graphic>
</inline-formula>
) optimised to minimise the difference between
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e053.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e054.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(plots C and G).</p>
</caption>
<graphic xlink:href="pone.0037547.g004"></graphic>
</fig>
<p>In
<xref ref-type="fig" rid="pone-0037547-g004">figure 4A</xref>
we plot the “integration window” at different times within the trial: this illustrates the weights assigned to all of the observed cues at each time-step. We notice that each line is approximately horizontal, indicating that each cue contributes equal weight to the decision at each time step. In
<xref ref-type="fig" rid="pone-0037547-g004">figure 4B</xref>
we plot a curve for each cue to show how each cue’s weight rises after it has been seen, then gradually decays as more evidence arrives to share equal weight with the other cues. This can be visualised in
<italic>
<xref ref-type="supplementary-material" rid="pone.0037547.s006">Video S1</xref>
</italic>
.</p>
<p>The systematic component of the weight regression (
<xref ref-type="fig" rid="pone-0037547-g004">figure 4D</xref>
) reveals an initial bias of +20 pixels, but this subsides after 1 second. The large initial variability is due to the randomisation of the target location
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e055.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, which subjects quickly navigate towards. A slight positive bias of around +6 pixels remains for the entire trajectory, which is also observed in trajectory data (
<xref ref-type="fig" rid="pone-0037547-g003">figure 3C</xref>
). The weight regression confirms that this is not a cue-driven error but indeed a systematic error.</p>
<p>We use the same regression method to plot the weight matrix for the ideal-observer model subject to kinematic constraints (see
<italic>
<xref ref-type="sec" rid="s4">Materials and Methods</xref>
</italic>
). We find a close qualitative and quantitative match (
<xref ref-type="fig" rid="pone-0037547-g004">figure 4E–4H</xref>
), except that the model does not reveal an overall systematic bias.</p>
</sec>
<sec id="s2d">
<title>Continuous Estimation of the Uncertainty</title>
<p>Thus far we have analysed continuous mean estimation behaviour. In this section we analyse subjects’ ability to estimate sensory
<italic>uncertainty</italic>
. In
<xref ref-type="fig" rid="pone-0037547-g005">figure 5</xref>
we compare the
<italic>objective error range</italic>
, equal to twice the mean absolute error (equation 3, in
<italic>
<xref ref-type="sec" rid="s4">Materials and Methods</xref>
</italic>
), to the reported (
<italic>subjective</italic>
) confidence window. These quantities are identical for the ideal-observer.</p>
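The objective error range of equation 3 is simply twice the mean absolute deviation of the final mean estimate from the target; a sketch, with hypothetical array names:

```python
import numpy as np

def objective_error_range(endpoints, targets):
    """Twice the mean absolute error of the final mean estimates
    (equation 3); for an ideal observer this equals the reported
    confidence window."""
    return 2.0 * np.mean(np.abs(np.asarray(endpoints) - np.asarray(targets)))

# Calibration check against the subjective report (hypothetical arrays):
# ratio > 1 means subjects report wider windows than their objective error.
# ratio = np.mean(window_widths) / objective_error_range(endpoints, targets)
```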
<fig id="pone-0037547-g005" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0037547.g005</object-id>
<label>Figure 5</label>
<caption>
<title>Uncertainty Estimation Performance.</title>
<p>In this figure we show that subjects are able to discern the different levels of uncertainty added to the cues. (
<bold>A</bold>
)
<bold>Objective Uncertainty.</bold>
We plot the mean error range (twice the mean absolute deviation of the final mean estimate)
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e056.jpg" mimetype="image"></inline-graphic>
</inline-formula>
SEM, for different levels of
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e057.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(solid blobs and error bars), for perturbed (red) and unperturbed (blue) trials. In addition we overlay the average results for each subject (faded lines). Subjects show statistically significantly increased errors as a result of both cue uncertainty and the presence of perturbations. Between-subject variability is low, as indicated by the distinct separation between red and blue lines and the consistency of the gradient. (
<bold>B</bold>
)
<bold>Subjective Uncertainty.</bold>
We plot the average width of subjects’ confidence windows at the end of the trial for each
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e058.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and perturbation, similar to A. Subjects show a statistically significantly increased confidence window as a result of both cue uncertainty and the presence of perturbations, mimicking the objective uncertainty. However, between-subject variability is high, indicating that different subjects have widely differing abilities at estimating uncertainty. (
<bold>C</bold>
)
<bold>Subjective-Objective Mapping.</bold>
We combine per-subject data from A and B, plotting the mean error for each condition versus the confidence reported. The ideal mapping is shown by the dotted line. Subjects consistently over-estimate the objective uncertainty. (
<bold>D</bold>
)
<bold>Grouped Subjective-Objective Mapping.</bold>
We plot the average mapping across subjects
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e059.jpg" mimetype="image"></inline-graphic>
</inline-formula>
the SEM in each direction. This demonstrates the consistency with which subjects over-estimate their objective uncertainty.</p>
</caption>
<graphic xlink:href="pone.0037547.g005"></graphic>
</fig>
<p>To assess the effect of task manipulations (objective uncertainty), we conducted an ANOVA on the objective error range with within-subject factors of perturbation (
<italic>unperturbed</italic>
vs
<italic>perturbed</italic>
, grouping over the perturbation conditions) and
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e060.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(
<italic>low</italic>
,
<italic>medium</italic>
and
<italic>high</italic>
). This revealed a significant main effect of
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e061.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e062.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e063.jpg" mimetype="image"></inline-graphic>
</inline-formula>
), a significant main effect of perturbation (
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e064.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e065.jpg" mimetype="image"></inline-graphic>
</inline-formula>
), as well as a significant interaction between
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e066.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and perturbation (
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e067.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e068.jpg" mimetype="image"></inline-graphic>
</inline-formula>
). The interaction was expected since the perturbation magnitude is a fraction of
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e069.jpg" mimetype="image"></inline-graphic>
</inline-formula>
.</p>
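<p>For reference, an analysis of this form can be sketched with a standard repeated-measures ANOVA routine. The snippet below uses Python with statsmodels; the table layout and column names are hypothetical stand-ins for the per-subject condition means, not the original analysis pipeline.</p>
<preformat>
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format table: one row per subject-by-condition
# cell, averaged over trials, with columns: subject,
# sigma ('low'/'medium'/'high'), perturbed ('yes'/'no'), error_range.
df = pd.read_csv("error_range_by_condition.csv")

# Two within-subject factors, as in the analysis reported above.
res = AnovaRM(df, depvar="error_range", subject="subject",
              within=["sigma", "perturbed"]).fit()
print(res)
</preformat>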
<p>To assess the subjective effect of task manipulations (perception of uncertainty), we also conducted an ANOVA on the confidence window range, with within-subject factors of perturbation (
<italic>unperturbed</italic>
vs
<italic>perturbed</italic>
, grouping over the perturbation conditions) and
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e070.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(
<italic>low</italic>
,
<italic>medium</italic>
and
<italic>high</italic>
). This revealed a significant main effect of
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e071.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e072.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e073.jpg" mimetype="image"></inline-graphic>
</inline-formula>
), a significant main effect of perturbation (
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e074.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e075.jpg" mimetype="image"></inline-graphic>
</inline-formula>
), as well as a near-significant interaction between
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e076.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and perturbation (
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e077.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e078.jpg" mimetype="image"></inline-graphic>
</inline-formula>
). The magnitude of the interaction was less than expected.</p>
<p>The ANOVA results above indicate that the task manipulations have significant behavioural consequences, modulating both the objective uncertainty and the perception of this uncertainty. We conducted t-tests to assess the differences between conditions, and found that unperturbed trials resulted in smaller errors than perturbed trials (measure:
<italic>objective error range</italic>
,
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e079.jpg" mimetype="image"></inline-graphic>
</inline-formula>
for all
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e080.jpg" mimetype="image"></inline-graphic>
</inline-formula>
) which was reflected in increased confidence (measure:
<italic>confidence window</italic>
,
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e081.jpg" mimetype="image"></inline-graphic>
</inline-formula>
for all
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e082.jpg" mimetype="image"></inline-graphic>
</inline-formula>
). Likewise, the increase in error for
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e083.jpg" mimetype="image"></inline-graphic>
</inline-formula>
from low to medium and from medium to high conditions (measure:
<italic>mean error range</italic>
,
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e084.jpg" mimetype="image"></inline-graphic>
</inline-formula>
for both perturbation conditions) was reflected in reduced confidence (measure:
<italic>confidence window</italic>
,
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e085.jpg" mimetype="image"></inline-graphic>
</inline-formula>
for both perturbation conditions).
<xref ref-type="fig" rid="pone-0037547-g005">Figures 5A and 5B</xref>
provide a graphical representation of these findings.</p>
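<p>Pairwise comparisons of this kind reduce to paired t-tests over per-subject means; a minimal sketch with scipy (the values below are invented for illustration):</p>
<preformat>
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical per-subject mean error ranges (pixels) at a fixed
# sigma level, for unperturbed and perturbed trials respectively.
unperturbed = np.array([18.2, 21.5, 17.9, 24.1, 19.8])
perturbed   = np.array([25.4, 28.0, 23.7, 31.2, 26.9])

t, p = ttest_rel(unperturbed, perturbed)  # paired, within-subject
print(t, p)
</preformat>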
<p>We consolidated
<xref ref-type="fig" rid="pone-0037547-g005">figures 5A and 5B</xref>
to examine the relationship between objective variability and subjective perception. In
<xref ref-type="fig" rid="pone-0037547-g005">figure 5C</xref>
we show the results per subject; the positive slope of each line shows that subjects were able to discriminate the level of sensory uncertainty in each condition, albeit with considerable variability across subjects. Overall, 96% of the data lie above the line
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e086.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, indicating that subjects’ confidence windows consistently over-estimate the objective variability. In
<xref ref-type="fig" rid="pone-0037547-g005">figure 5D</xref>
we show the average data across subjects.</p>
<p>We have seen above that subjective confidence can reliably discriminate perturbation-induced and variance-induced objective uncertainty at the end of the trial. This behaviour also holds for continuous confidence perception. In
<xref ref-type="fig" rid="pone-0037547-g006">figure 6A</xref>
we plot the average confidence estimate trajectories across subjects. The distinguishing features of the empirical trajectories are: (i) trajectories are indistinguishable for the first 0.5 seconds but then diverge; (ii) low, medium and high variance result in correspondingly scaled confidence windows after divergence; (iii) sudden increases (inflexions) in the confidence window occur as a result of early-, middle- and late-onset perturbations (arrows
<italic>b, c</italic>
and
<italic>d</italic>
); and (iv) final decisions vary with variance and perturbation onset (figures 6B and 6C). From the standard-error interval across subjects it is apparent that these phenomena are robust.</p>
<fig id="pone-0037547-g006" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0037547.g006</object-id>
<label>Figure 6</label>
<caption>
<title>Continuous uncertainty estimation data grouped across subjects.</title>
<p>This figure illustrates the quantitative match between the model and the data. (
<bold>A</bold>
)
<bold>Average Trajectories.</bold>
In this figure we show the average empirical trajectories across subjects compared to model predictions. Trajectories are computed for each subject by averaging over the trials for each condition. From top to bottom we plot the early, middle and late-onset perturbation conditions (indicated by the shaded region), and from left to right we plot negative (blue), zero (purple) and positive (red) perturbation directions. The resultant trajectory for each
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e087.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(labelled) shows the mean across subjects
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e088.jpg" mimetype="image"></inline-graphic>
</inline-formula>
SEM. The model fit to the data is shown using a dashed line. Note that the model does not explain the initial part of the trajectory (arrow
<italic>a</italic>
), but does reasonably well at explaining the timing of deviations in uncertainty perception that arise as a consequence of perturbations (arrows
<italic>b</italic>
,
<italic>c</italic>
and
<italic>d</italic>
). (
<bold>B and C</bold>
)
<bold>Confidence Reported.</bold>
For each of the experiment conditions we show how the endpoint subjective uncertainty is a predictable function of variance
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e089.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, perturbation magnitude
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e090.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and block
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e091.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. We plot the same results grouped in different ways for comparison. The model makes a good quantitative fit for all conditions, but note that the model contains a systematic
<italic>safety margin</italic>
parameter
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e092.jpg" mimetype="image"></inline-graphic>
</inline-formula>
which may explain some aspects of the data fit (see text).</p>
</caption>
<graphic xlink:href="pone.0037547.g006"></graphic>
</fig>
<p>We devised a kinematic model to account for these observations (see
<italic>
<xref ref-type="sec" rid="s4">Materials and Methods</xref>
</italic>
and
<xref ref-type="fig" rid="pone-0037547-g007">figure 7</xref>
). The modelled observer optimally integrates the deviations of cues from the current mean estimate so as to maximise the expected reward (which is achieved when the confidence window equals one standard deviation of the objective uncertainty either side of the sample mean). Similar to the previous analysis for mean estimation, we maintain the three parameters of sensorimotor latency,
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e093.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, maximum speed,
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e094.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and momentum
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e095.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. Owing to the consistent over-estimation of uncertainty discussed previously, we include an additional safety margin parameter,
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e096.jpg" mimetype="image"></inline-graphic>
</inline-formula>
.</p>
<fig id="pone-0037547-g007" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0037547.g007</object-id>
<label>Figure 7</label>
<caption>
<title>Model of Sensorimotor Kinematics.</title>
<p>In order to explain subjects’ evolving trajectories over time we model the inevitable kinematic constraints on movement. In the model we assume that, other than these limitations, subjects will behave as ideal observers. We discretise movement into 50 ms time-steps. At time-step
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e097.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, an observer aiming to reach a target
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e098.jpg" mimetype="image"></inline-graphic>
</inline-formula>
makes a displacement of
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e099.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, moving from position
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e100.jpg" mimetype="image"></inline-graphic>
</inline-formula>
to
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e101.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. This figure illustrates the parameters of the model. (
<bold>A</bold>
)
<bold>Bias and Delay.</bold>
We assume that there is some delay,
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e102.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, before subjects initiate their movement. This captures sensory, processing and motor delays. Subjects may also have some inherent bias in one direction or another, due to the configuration of the experiment or otherwise, so we introduce a bias parameter
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e103.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. (
<bold>B</bold>
)
<bold>Speed Constraint.</bold>
We assign a maximum speed,
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e104.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, to limit the displacement in a given time step. (
<bold>C</bold>
)
<bold>Momentum Constraint.</bold>
We capture the assumption that subjects cannot accelerate instantaneously by introducing a smoothing parameter
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e105.jpg" mimetype="image"></inline-graphic>
</inline-formula>
on
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e106.jpg" mimetype="image"></inline-graphic>
</inline-formula>
.</p>
</caption>
<graphic xlink:href="pone.0037547.g007"></graphic>
</fig>
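<p>To make the model of figure 7 concrete, the sketch below applies the three constraints (latency, bias, speed limit, momentum) to an ideal-observer target sequence sampled at 50 ms time-steps. The function name and all parameter values are illustrative assumptions, not the fitted per-subject values.</p>
<preformat>
import numpy as np

def constrained_trajectory(target, dt=0.05, delay=0.25,
                           bias=0.0, vmax=400.0, alpha=0.5):
    # target: ideal-observer estimate at each 50 ms time-step (pixels)
    lag = int(round(delay / dt))   # (A) sensorimotor latency in steps
    x, d = 0.0, 0.0                # current position and displacement
    out = []
    for t in range(len(target)):
        if t >= lag:
            goal = target[t - lag] + bias  # (A) delayed, biased goal
        else:
            goal = x                       # no information yet: hold
        # (B) speed constraint: clip the desired step to +/- vmax*dt
        step = np.clip(goal - x, -vmax * dt, vmax * dt)
        # (C) momentum: smooth displacements so acceleration is gradual
        d = alpha * d + (1.0 - alpha) * step
        x = x + d
        out.append(x)
    return np.array(out)
</preformat>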
<p>This model accounts both qualitatively and quantitatively for the key features of the empirical data, such as the magnitude and shape of variance-induced differences, the magnitude and timing of perturbation-induced inflexions, and the magnitude of the final decision for each condition. The per-subject model parameters were optimised to ensure the best possible fit to the data, but nevertheless have no capacity to explain the within-condition effects of perturbations or variance. While the safety margin parameter
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e107.jpg" mimetype="image"></inline-graphic>
</inline-formula>
does have the capacity to explain the overall magnitude of decisions, it is simply a per-subject constant and cannot explain the differences between the trajectories (see
<italic>
<xref ref-type="sec" rid="s3">Discussion</xref>
</italic>
).</p>
<p>It is interesting to note that the increase in perceived uncertainty resulting from cue perturbations (
<xref ref-type="fig" rid="pone-0037547-g006">figure 6A</xref>
, arrows
<italic>b</italic>
,
<italic>c</italic>
and
<italic>d</italic>
) occurs at the same time as changes of direction in the mean estimate (
<xref ref-type="fig" rid="pone-0037547-g003">figure 3A</xref>
arrows
<italic>c</italic>
,
<italic>d</italic>
and
<italic>g</italic>
). The mean estimation and uncertainty estimation tasks appear to be coupled, though our model treats them separately. This could explain the initial discrepancy between our model and the data (
<xref ref-type="fig" rid="pone-0037547-g006">figure 6A</xref>
, arrow
<italic>a</italic>
): presumably subjects do not adjust their confidence window until they have first navigated toward the target (after about 1 second). The model provides a good fit to the remainder of the trajectory.</p>
<p>In computing the weight matrix to explain the evolution of cue weights (see
<italic>
<xref ref-type="sec" rid="s4">Materials and Methods</xref>
</italic>
) we find that the empirical weights do not reflect optimal performance (see
<italic>
<xref ref-type="supplementary-material" rid="pone.0037547.s003">Figure S3</xref>
</italic>
). We see empirically that each cue deviation contributes to the final decision, but the resultant weight profiles are noisy and difficult to interpret. This may indicate that subjects are sub-optimal at estimating uncertainty from time-evolving visual cues (but see
<italic>
<xref ref-type="sec" rid="s3">Discussion</xref>
</italic>
for alternative interpretations).</p>
</sec>
</sec>
<sec id="s3">
<title>Discussion</title>
<p>We have shown that subjects estimate the mean of time-varying stimuli in a predictable manner. By manipulating the variance as well as the onset and direction of perturbations we have demonstrated that this estimate is computed in a statistically-principled way that assigns equal weight to all observed cues to form a final estimate. We devised an ideal-observer model that is subject to kinematic constraints. We find a close match between the empirical data and our statistically-optimal model, suggesting that subjects can accumulate evidence over time to form
<italic>optimal continuous estimates</italic>
of the mean of noisy visual stimuli.</p>
<p>By manipulating the variance of the underlying stimuli we examined the relationship between
<italic>objective uncertainty</italic>
and
<italic>subjective uncertainty</italic>
, showing that the two are closely, but not directly, coupled. By manipulating subsets of the cues through perturbations we also evaluated the respective weighting given to each cue for confidence estimation, and showed that, with the addition of a conservative
<italic>safety-margin,</italic>
we can reliably predict responses to cue variance and perturbations. While the evolution of cue weights was not well explained by our model, possibly indicative of sub-optimal integration, subjects were clearly capable of accumulating evidence over time to continuously discriminate different levels of uncertainty due to cue variance and cue perturbations.</p>
<p>In making decisions, subjects must make a trade-off between allocating time to perception, and time to action
<xref ref-type="bibr" rid="pone.0037547-Faisal1">[2]</xref>
. Since there is a considerable time delay between sensing the world and initiating motor actions, subjects often make decisions while sensory information is arriving. Discrete events (such as subjects “changing their mind”) may be based on the time-delayed accumulation of evidence
<xref ref-type="bibr" rid="pone.0037547-Resulaj1">[21]</xref>
. In this paper we show how subjects form decisions based on visual cues and update their estimate as evidence arrives, as indicated by deviations in trajectories under different levels of perturbation. In our continuous task these inflexions are not discrete “changes of mind” but in fact continuous decisions related to subjects’ evolving perception of uncertainty.</p>
<p>The approach presented in this paper utilises a continuous time-varying task, providing a window into the processes of mean and uncertainty acquisition. The modulation of uncertainty in alternative designs, such as the two-interval forced-choice paradigm, may induce “apprehension” in proportion to the imposed uncertainty
<xref ref-type="bibr" rid="pone.0037547-Helbig2">[19]</xref>
, which may indirectly provide a measure of stimulus uncertainty that does not require an explicit representation of uncertainty
<xref ref-type="bibr" rid="pone.0037547-Helbig2">[19]</xref>
. Experimental manipulations to increase uncertainty, such as decreasing stimulus contrast or adding uncorrelated noise, may increase the latency with which subjects can react to stimuli, again providing interpretations absent of explicit uncertainty awareness. Even our method of time-varying jittering cues may trigger mechanisms that could indirectly account for uncertainty judgements. It has therefore been argued that much research on statistical optimality includes situations in which an
<italic>implicit</italic>
internal representation of uncertainty may explain task performance (see
<xref ref-type="bibr" rid="pone.0037547-Helbig2">[19]</xref>
and e.g.
<xref ref-type="bibr" rid="pone.0037547-Hillis1">[5]</xref>
,
<xref ref-type="bibr" rid="pone.0037547-Ernst1">[6]</xref>
,
<xref ref-type="bibr" rid="pone.0037547-Knill1">[10]</xref>
,
<xref ref-type="bibr" rid="pone.0037547-Alais1">[18]</xref>
,
<xref ref-type="bibr" rid="pone.0037547-Gepshtein1">[22]</xref>
,
<xref ref-type="bibr" rid="pone.0037547-Gepshtein2">[23]</xref>
). However, by asking subjects to report their uncertainty one can directly tackle the question of whether subjects can
<italic>explicitly</italic>
acquire representations of sensory uncertainty, applicable to reaching tasks
<xref ref-type="bibr" rid="pone.0037547-Graf1">[16]</xref>
, numerical estimation tasks
<xref ref-type="bibr" rid="pone.0037547-Nassar1">[24]</xref>
and visual perception tasks
<xref ref-type="bibr" rid="pone.0037547-Barthelm1">[15]</xref>
. In this paper we have extended this idea further to consider the
<italic>continuous</italic>
estimation of uncertainty as evidence arrives.</p>
<p>To what extent are the observed continuous trajectories optimal? The global parameters of the model are optimised to achieve the best fit for each subject, but as these parameters are fixed across all trials they cannot explain the differences in the trajectories observed for each condition - these can only be explained by the contribution of individual cues to the decisions (although the parameters can explain the general shape of the trajectories and the latency after which cues contribute to the trajectories). In the mean estimation model the cue contributions are chosen optimally (i.e. according to the ML estimate of the mean). The resultant close match between the empirical and model trajectories for each of the conditions indicates optimal cue weighting. In contrast, in the confidence estimation model a suboptimal “safety margin” is used to explain the magnitude of the estimate and thus a match between empirical and model trajectories does not indicate optimality. This safety margin causes subjects to significantly over-estimate uncertainty, resulting in less than optimal performance in the task.</p>
<p>Could the finding of optimal mean estimation and suboptimal confidence estimation be explained by subjects relying on a simpler heuristic? For example, subjects may position their thumb and forefinger on the extremes of the cues seen so far, or choose an aperture size proportional to this range. This was our primary motivation for computing the weights assigned to each cue in the sequence, which revealed that each cue was approximately equally weighted for the mean-estimation task. This would not be the case for subjects relying on subsets of the cues: as the mean of the cues is not equal to the median due to perturbations, the suboptimal heuristic strategies would result in different endpoint decisions, different trajectories and different weight profiles. We therefore posit that mean estimation trajectories are indeed based on optimal cue weighting. In contrast, uncertainty estimation empirical weights do not match the optimal model weights. The presence of a consistent overestimation of uncertainty indicates that subjects may be relying on a subset of the observed cues to form their estimate. Nonetheless, subjects still increase their aperture in response to uncertainty increases and perturbations, indicating that subjects do have access to some measure of their objective uncertainty.</p>
<p>A number of studies have observed underconfidence in forced-choice tasks (e.g. see
<xref ref-type="bibr" rid="pone.0037547-Barthelm1">[15]</xref>
,
<xref ref-type="bibr" rid="pone.0037547-Bjrkman1">[25]</xref>
), consistent with the present finding of subjective overestimation of objective uncertainty. In a recent study in which subjects were asked to report a confidence window when predicting the magnitude of random samples from a time-varying distribution, subjects showed perceptual biases when estimating the uncertainty
<xref ref-type="bibr" rid="pone.0037547-Nassar1">[24]</xref>
. This was attributed to a pre-learned bias and was otherwise consistent with a Bayesian observer model, although it could equally be explained by an inability to accurately gauge the magnitude of the uncertainty, as in the present study.</p>
<p>In addition to the possibility of suboptimal uncertainty estimation, the present results admit a number of alternative potential causes of over-estimated uncertainty: (i) It is not known whether subjects fixate on the jittering stimuli or on the cursor, which may affect their ability to accurately judge (or anticipate) the stimulus location (see
<xref ref-type="bibr" rid="pone.0037547-Brenner1">[26]</xref>
); (ii) Subjects may not have been able to maximise their expected gain (in contrast to
<xref ref-type="bibr" rid="pone.0037547-Trommershuser1">[27]</xref>
), due to differences in experimental design; (iii) The kinematic model fit to the data may be insufficient to describe behaviour; (iv) The data collected may have been too noisy for reliable model fitting. To address points (i) and (ii) further research is needed to decouple the factors that determine objective variability and performance maximisation. For example, subjects were not aware of the exact functional form of the score function (in contrast to
<xref ref-type="bibr" rid="pone.0037547-Trommershuser1">[27]</xref>
), adding additional learning demands. Whilst the effects of learning were not observed in the data, these potential limitations of the scoring system should be noted. To address points (iii) and (iv) we must evaluate the viability of our kinematic model (see
<italic>
<xref ref-type="sec" rid="s4">Materials and Methods</xref>
</italic>
, and
<xref ref-type="fig" rid="pone-0037547-g007">figure 7</xref>
). In our model the delay parameter captures the combined effect of sensory and motor latency, while kinematic limitations are captured by the speed and momentum parameters, which affect the overall shape of the trajectories. These three parameters were found to be sufficient to explain the average empirical data for mean estimation. Alternative models may introduce additional parameters to explain different aspects of the data, such as sensory and motor noise or separate sensory and motor delays. Further experiments would be required to test such models.</p>
<p>Our experiment design utilised a grasping task within a fixed plane. As the task does not abstract the cursor or targets to a computer screen, it maintains many aspects of ordinary grasping (visual feedback, proprioceptive feedback, feedforward control etc.), keeping the task as natural as possible. As detailed in the methods, feedback of the fingers was aligned with the true finger locations (see
<xref ref-type="bibr" rid="pone.0037547-Gepshtein1">[22]</xref>
). The design relied on the fact that subjects could independently control their grasp aperture and hand position, which we felt was likely (although see
<xref ref-type="bibr" rid="pone.0037547-Schot1">[28]</xref>
; independent finger and thumb control has not been conclusively demonstrated). Target stimuli were presented along the line of the left forearm, though the task could equally have been realised by presenting stimuli along any fixed line in the plane. We chose to use the arm as a reference because (i) this design lends itself to a number of follow-up experiments in which the cues may be tactile rather than visual; and (ii) it allows subjects to position both the target line (with their left arm) and the cursor (with their right arm) in any comfortable configuration of their choosing.</p>
<p>Our results are consistent with a number of studies that report optimal multisensory integration (e.g. audio-visual
<xref ref-type="bibr" rid="pone.0037547-Heron1">[7]</xref>
,
<xref ref-type="bibr" rid="pone.0037547-Alais1">[18]</xref>
, visuo-haptic
<xref ref-type="bibr" rid="pone.0037547-Ernst1">[6]</xref>
,
<xref ref-type="bibr" rid="pone.0037547-Helbig2">[19]</xref>
,
<xref ref-type="bibr" rid="pone.0037547-Gepshtein1">[22]</xref>
, visuo-proprioceptive
<xref ref-type="bibr" rid="pone.0037547-vanBeers1">[29]</xref>
and visual
<xref ref-type="bibr" rid="pone.0037547-Jacobs1">[4]</xref>
,
<xref ref-type="bibr" rid="pone.0037547-Oru1">[30]</xref>
integration). However, these results provide
<italic>indirect</italic>
evidence of subjective representation of objective uncertainty
<xref ref-type="bibr" rid="pone.0037547-Barthelm1">[15]</xref>
. In the present study we find that subjects are able to form an optimal estimate of the mean and an overestimate of the uncertainty, providing
<italic>direct</italic>
evidence of continuous mean- and confidence-estimation mechanisms that may underlie the observation of optimal integration. In contrast, there are a number of studies in which optimal behaviour was not observed. Multisensory integration studies have demonstrated a significant under-weighting of sensory uncertainty for texture information
<xref ref-type="bibr" rid="pone.0037547-Knill1">[10]</xref>
and auditory information
<xref ref-type="bibr" rid="pone.0037547-Burr1">[13]</xref>
, and a third study found that visuo-haptic integration performance was inconsistent with maximum likelihood estimation in more than 80% of the data
<xref ref-type="bibr" rid="pone.0037547-Rosas1">[11]</xref>
. However, the authors conceded that subjects may have attempted to combine cues optimally but did not have an accurate estimate of the variance of the individual cues. Consistent with this finding, in the present study we have observed a suboptimal
<italic>safety-margin</italic>
in subjects estimating their uncertainty. By extending our experimental paradigm to multiple sensory modalities we would predict different integration weights for subjects using either
<italic>subjective</italic>
or
<italic>objective</italic>
uncertainty to form multimodal estimates. By allowing for simultaneous measurement of mean and confidence our experimental paradigm readily lends itself to the testing of such hypotheses.</p>
<p>There is a growing body of research which aims to understand the neural substrate of uncertainty representation. For example, neural firing activity in orbitofrontal cortex in rats is an accurate predictor of olfactory discrimination uncertainty
<xref ref-type="bibr" rid="pone.0037547-Kepecs1">[17]</xref>
, and neurons in parietal cortex encode information about the degree of decision-making uncertainty in monkeys
<xref ref-type="bibr" rid="pone.0037547-Kiani1">[31]</xref>
. The presence of confidence-estimation mechanisms in the brain is supported by biologically plausible computational models (such as reviewed in
<xref ref-type="bibr" rid="pone.0037547-Pouget1">[32]</xref>
) in which neural populations readily encode sensory uncertainty and allow networks to compute posterior probability distributions. The results presented in this paper provide direct evidence that humans have rapid and reliable access to statistical information available from stimuli, which could presumably be attained from such neural representations.</p>
<sec id="s3a">
<title>Conclusion</title>
<p>Our quantitative paradigm allows us to measure mean and confidence estimation ability simultaneously, and to observe these processes over time as we control the arrival of evidence. We are able to make qualitative and quantitative predictions of subjects’ performance based on a statistically optimal model constrained only by elementary kinematic limitations. The paradigm naturally lends itself to a wide variety of future experimental manipulations, for example in understanding the methods deployed when integrating cues from multiple modalities, in understanding the time-courses of decisions, and in decoupling the roles of objective and subjective uncertainty perception for decision-making.</p>
</sec>
</sec>
<sec sec-type="materials|methods" id="s4">
<title>Materials and Methods</title>
<sec id="s4a">
<title>Experimental Methodology</title>
<sec id="s4a1">
<title>Subjects and ethics</title>
<p>Fourteen volunteers participated in this experimental study. All subjects were healthy, right-handed and aged between 21 and 30. All of the subjects were naive to the experimental manipulations and the experiment apparatus. The experimental protocols were in accordance with the University of Edinburgh School of Informatics policy statement on the use of humans in experiments. Subjects gave informed consent before participation in the study and received financial compensation for their time (approximately 90 minutes per subject).</p>
</sec>
<sec id="s4a2">
<title>Apparatus</title>
<p>Subjects were instructed to place their left forearm under a horizontal mirror onto an array of tactile markers, serving as a tactile reference frame consistent and veridical with the visual display. Using the rear-projection mirror setup as illustrated in
<xref ref-type="fig" rid="pone-0037547-g001">figure 1A</xref>
, visual feedback was given in the plane of the arm so that feedback of the arm and finger locations aligned with the true finger and arm locations, removing any confounding effects of mismatch between visual and proprioceptive cues (as discussed in
<xref ref-type="bibr" rid="pone.0037547-Gepshtein2">[23]</xref>
). The use of the left arm as a reference frame allowed subjects to position themselves comfortably. Further, this setup lends itself naturally to an alternative version of the task in which stimuli are tactile rather than visual (see
<italic>
<xref ref-type="sec" rid="s3">Discussion</xref>
</italic>
).</p>
<p>Stimuli were anti-aliased and projected using a high resolution video projector with latency
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e108.jpg" mimetype="image"></inline-graphic>
</inline-formula>
ms. One projected pixel corresponded to approximately 0.3 mm on the arm.</p>
<p>To enable accurate 3-D tracking of the arm and fingertips we used a Polhemus Liberty 240 Hz 8-sensor motion tracking system (POLHEMUS, USA). Every 50 ms we sampled the arm and fingertip positions and logged data using custom personal computer (PC) software. The same PC software was responsible for displaying and logging the stimuli, ensuring that our data and stimuli were temporally calibrated.</p>
</sec>
<sec id="s4a3">
<title>Task 1: mean estimation</title>
<p>In Task 1 subjects were instructed to indicate the
<italic>mean</italic>
of a sequence of visual stimuli, using a fixed-aperture cursor (
<xref ref-type="fig" rid="pone-0037547-g001">figure 1D</xref>
,
<italic>left</italic>
). The cursor location was computed as the mean of the orthogonal projections of the thumb and forefinger position vectors onto the forearm.</p>
<p>Each subject underwent an initial training period to become familiar with the task (phase
<italic>1A</italic>
), followed by a block of trials to assess mean estimation performance as we varied the visual uncertainty,
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e109.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, from trial-to-trial (phase
<italic>1B</italic>
).</p>
</sec>
<sec id="s4a4">
<title>Task 2: mean and confidence estimation</title>
<p>In Task 2 subjects were instructed to indicate the
<italic>range</italic>
in which they believed the mean to lie, using a variable aperture cursor (
<xref ref-type="fig" rid="pone-0037547-g001">figure 1D</xref>
,
<italic>right</italic>
) determined by orthogonal projections onto the arm of their thumb and forefinger. The average position of the projections was interpreted as their mean estimate and the range as their confidence in this estimate.</p>
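<p>In both tasks, then, the reported quantities reduce to scalar positions of the digits along the forearm axis. A minimal sketch of the projection arithmetic follows (coordinates and names are hypothetical):</p>
<preformat>
import numpy as np

def along_arm(p, origin, axis):
    # Scalar coordinate of point p along the (unit-length) forearm axis.
    return np.dot(p - origin, axis)

origin = np.array([0.0, 0.0, 0.0])     # a point on the forearm line
axis   = np.array([1.0, 0.0, 0.0])     # forearm direction, unit norm
thumb  = np.array([0.12, 0.03, 0.01])  # tracked fingertip positions (m)
finger = np.array([0.18, -0.02, 0.02])

s_t = along_arm(thumb, origin, axis)
s_f = along_arm(finger, origin, axis)
cursor   = 0.5 * (s_t + s_f)   # mean of the projections (both tasks)
aperture = abs(s_f - s_t)      # Task 2 only: confidence window width
</preformat>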
<p>Again, each subject underwent an initial training period to familiarise them with the task (phase
<italic>2A</italic>
), followed by a larger block of trials to assess their combined mean and uncertainty estimation performance as we varied
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e110.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e111.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e112.jpg" mimetype="image"></inline-graphic>
</inline-formula>
from trial-to-trial (phase
<italic>2B</italic>
).</p>
</sec>
<sec id="s4a5">
<title>Task manipulations</title>
<p>We manipulated the distributions of the stimuli from trial-to-trial in two ways: (i) we modulated the variance of the visual cues (
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e113.jpg" mimetype="image"></inline-graphic>
</inline-formula>
pixels, which we term
<italic>low</italic>
,
<italic>medium</italic>
and
<italic>high</italic>
uncertainty respectively); and (ii) we added perturbations (
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e114.jpg" mimetype="image"></inline-graphic>
</inline-formula>
termed
<italic>negative</italic>
,
<italic>neutral</italic>
and
<italic>positive</italic>
respectively) to subsets of the cues (
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e115.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, termed
<italic>early</italic>
,
<italic>middle</italic>
and
<italic>late</italic>
respectively).
<xref ref-type="table" rid="pone-0037547-t001">Table 1</xref>
summarises the use of these manipulations.</p>
<table-wrap id="pone-0037547-t001" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0037547.t001</object-id>
<label>Table 1</label>
<caption>
<title>Experiment Structure.</title>
</caption>
<alternatives>
<graphic id="pone-0037547-t001-1" xlink:href="pone.0037547.t001"></graphic>
<table frame="hsides" rules="groups">
<colgroup span="1">
<col align="left" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
</colgroup>
<thead>
<tr>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td colspan="2" align="left" rowspan="1">Structure</td>
<td colspan="4" align="left" rowspan="1">Configuration</td>
<td align="left" rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Task</td>
<td align="left" rowspan="1" colspan="1">Phase</td>
<td align="left" rowspan="1" colspan="1">Sessions</td>
<td align="left" rowspan="1" colspan="1">Trials</td>
<td align="left" rowspan="1" colspan="1">
<italic>N
<sub>σ</sub>
</italic>
</td>
<td align="left" rowspan="1" colspan="1">
<italic>N
<sub>b</sub>
</italic>
</td>
<td align="left" rowspan="1" colspan="1">
<italic>N
<sub>p</sub>
</italic>
</td>
<td align="left" rowspan="1" colspan="1">
<italic>N
<sub>r</sub>
</italic>
</td>
<td align="left" rowspan="1" colspan="1">Total</td>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">1</td>
<td align="left" rowspan="1" colspan="1">A</td>
<td align="left" rowspan="1" colspan="1">3</td>
<td align="left" rowspan="1" colspan="1">15</td>
<td align="left" rowspan="1" colspan="1">3</td>
<td align="left" rowspan="1" colspan="1">55</td>
<td align="left" rowspan="1" colspan="1">55</td>
<td align="left" rowspan="1" colspan="1">0</td>
<td align="left" rowspan="1" colspan="1">135</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">1</td>
<td align="left" rowspan="1" colspan="1">B</td>
<td align="left" rowspan="1" colspan="1">3</td>
<td align="left" rowspan="1" colspan="1">15</td>
<td align="left" rowspan="1" colspan="1">3</td>
<td align="left" rowspan="1" colspan="1">55</td>
<td align="left" rowspan="1" colspan="1">55</td>
<td align="left" rowspan="1" colspan="1">0</td>
<td align="left" rowspan="1" colspan="1">135</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">2</td>
<td align="left" rowspan="1" colspan="1">A</td>
<td align="left" rowspan="1" colspan="1">4</td>
<td align="left" rowspan="1" colspan="1">15</td>
<td align="left" rowspan="1" colspan="1">3</td>
<td align="left" rowspan="1" colspan="1">55</td>
<td align="left" rowspan="1" colspan="1">55</td>
<td align="left" rowspan="1" colspan="1">0</td>
<td align="left" rowspan="1" colspan="1">180</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">2</td>
<td align="left" rowspan="1" colspan="1">B</td>
<td align="left" rowspan="1" colspan="1">1</td>
<td align="left" rowspan="1" colspan="1">15</td>
<td align="left" rowspan="1" colspan="1">3</td>
<td align="left" rowspan="1" colspan="1">3</td>
<td align="left" rowspan="1" colspan="1">3</td>
<td align="left" rowspan="1" colspan="1">135</td>
<td align="left" rowspan="1" colspan="1">540</td>
</tr>
</tbody>
</table>
</alternatives>
<table-wrap-foot>
<fn id="nt101">
<label></label>
<p>Each subject performed 990 trials in total across four experimental phases. Task 1 examined subjects’ ability to estimate the
<italic>mean</italic>
of a jittering visual cursor, split into a training phase (
<italic>1A</italic>
) and a test phase (
<italic>1B</italic>
). In Task 2 we examined subjects’ ability to report their
<italic>confidence</italic>
in this estimate in addition to reporting the mean, again with a training phase (
<italic>2A</italic>
) and a test phase (
<italic>2B</italic>
). Subjects performed several sessions in each phase to improve data integrity. On each trial we presented 15 cues, distributed pseudo-randomly with variance
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e116.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, and split the trial into blocks, perturbing a given block
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e117.jpg" mimetype="image"></inline-graphic>
</inline-formula>
in direction
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e118.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. We examined
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e119.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e120.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e121.jpg" mimetype="image"></inline-graphic>
</inline-formula>
levels of each of these manipulations respectively, listed in the table. We also included
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e122.jpg" mimetype="image"></inline-graphic>
</inline-formula>
trials of random duration (between 5 and 15 cues in length). For each configuration subjects performed 15 trials. All sessions and trials were randomly shuffled within a phase.</p>
</fn>
</table-wrap-foot>
</table-wrap>
<p>In order to interpret the subtle effects of these manipulations robustly we used sets of pseudo-random cue sequences which were counterbalanced across 15 trials for each manipulation (see
<italic>Visual Stimuli</italic>
). Subjects completed several sessions each with different sets of cue sequences. The order of all trials and sessions were randomised, and on every trial the target location,
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e123.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, was chosen at random.</p>
<p>In Task 2 we also added trials of shorter duration (randomly chosen in the range of 5 to 15 cues). One-sixth of trials were of this nature, but these trials did not contribute to our analyses. They were included to ensure subjects could not predict when each trial was going to end, encouraging continuous behaviour.</p>
</sec>
<sec id="s4a6">
<title>Performance feedback</title>
<p>In Task 1, 10 points were awarded if the trial was successful. To motivate subjects in Task 2, fewer points were awarded if their chosen confidence interval was greater than the
<italic>expected objective uncertainty</italic>
determined in Task 1, encouraging them to estimate and report their objective uncertainty. We exploit the finding that subjects can learn to maximise expected reward
<xref ref-type="bibr" rid="pone.0037547-Trommershuser1">[27]</xref>
.</p>
<p>On a given trial
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e124.jpg" mimetype="image"></inline-graphic>
</inline-formula>
let the measured cursor position and width be given by
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e125.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e126.jpg" mimetype="image"></inline-graphic>
</inline-formula>
respectively, recorded over the trial duration (
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e127.jpg" mimetype="image"></inline-graphic>
</inline-formula>
). On completion of a trial we assume
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e128.jpg" mimetype="image"></inline-graphic>
</inline-formula>
represents a subject’s internal estimate of the mean,
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e129.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, and
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e130.jpg" mimetype="image"></inline-graphic>
</inline-formula>
their internal estimate of the confidence interval,
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e131.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. We use a score function,
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e132.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, which assigns a score according to success or failure:
<disp-formula>
<graphic xlink:href="pone.0037547.e133"></graphic>
<label>(1)</label>
</disp-formula>
where successful trials are rewarded according to</p>
<p>
<disp-formula>
<graphic xlink:href="pone.0037547.e134"></graphic>
<label>(2)</label>
</disp-formula>
The reward function penalises apertures larger than
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e135.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. In our experiment,
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e136.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is calculated for each subject based on the data empirically observed in experiment phase
<italic>1B</italic>
. We first compute the
<italic>objective error</italic>
as the mean absolute endpoint deviation for each
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e137.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, denoted by
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e138.jpg" mimetype="image"></inline-graphic>
</inline-formula>
:
<disp-formula>
<graphic xlink:href="pone.0037547.e139"></graphic>
<label>(3)</label>
</disp-formula>
</p>
<p>We then define an
<italic>objective error function</italic>
for each subject,
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e140.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, determined by the linear mapping between
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e141.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e142.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. On a given trial in Task 2 we compute the standard deviation of the cues,
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e143.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, and use the objective error function to determine
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e144.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, twice the expected objective error:
<disp-formula>
<graphic xlink:href="pone.0037547.e145"></graphic>
<label>(4)</label>
</disp-formula>
</p>
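<p>Equations 3 and 4 amount to a per-subject calibration step: measure the mean absolute endpoint error at each variance level in phase 1B, fit a line through these points, and use twice the predicted error as the rewarded aperture. A sketch follows; the numbers are invented for illustration, whereas the actual per-subject values were estimated from the data.</p>
<preformat>
import numpy as np

# Hypothetical phase-1B calibration: cue s.d. levels (pixels) and the
# corresponding mean absolute endpoint errors (equation 3).
sigmas       = np.array([10.0, 20.0, 40.0])
mean_abs_err = np.array([6.1, 10.8, 19.5])

# Linear objective error function (the mapping used in equation 4).
a, b = np.polyfit(sigmas, mean_abs_err, 1)

def target_aperture(sample_sd):
    # Twice the expected objective error for the observed cue s.d.
    return 2.0 * (a * sample_sd + b)

print(target_aperture(25.0))
</preformat>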
<p>The target aperture size for the confidence estimation task could have been chosen to be any quantity proportional to the objective variability. Regardless of the choice of target aperture size, subjects are required to learn the mapping from stimulus to confidence interval in order to succeed at the task. It was an assumption of our approach that this could be done so as to maximise the expected score (as per
<xref ref-type="bibr" rid="pone.0037547-Trommershuser1">[27]</xref>
). We chose the target aperture size to be the range of values spanning approximately one standard deviation of the objective variability on either side of the mean.</p>
<p>If subjects pick an aperture smaller than
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e146.jpg" mimetype="image"></inline-graphic>
</inline-formula>
this decreases the probability of success, while an aperture larger than
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e147.jpg" mimetype="image"></inline-graphic>
</inline-formula>
decreases the score. The reward function in equation 2 ensures that the overall maximum expected reward is achieved by choosing an aperture of exactly
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e148.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. This method, therefore, encourages subjects to estimate their own
<italic>objective error range.</italic>
Further details can be found in
<xref ref-type="bibr" rid="pone.0037547-Saunders1">[33]</xref>
.</p>
</sec>
<sec id="s4a7">
<title>Visual stimuli</title>
<p>
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e149.jpg" mimetype="image"></inline-graphic>
</inline-formula>
visual cues are presented in each trial. For mathematical convenience we describe the visual cues as a sequence of
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e150.jpg" mimetype="image"></inline-graphic>
</inline-formula>
locations
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e151.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, where each
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e152.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is drawn from an underlying distribution with mean
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e153.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and variance
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e154.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. Each visual cue is presented for 250 ms.</p>
<p>Each visual cue comprises 5 frames. On each frame for the
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e155.jpg" mimetype="image"></inline-graphic>
</inline-formula>
th cue we generate a cloud of ten random blobs distributed with a standard deviation of 10 pixels in horizontal and vertical directions and centred at
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e156.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. Each blob is a low-contrast 2-D Gaussian of radius 8 pixels (based on
<xref ref-type="bibr" rid="pone.0037547-Alais1">[18]</xref>
). Blob-clouds provide a way to modulate the underlying difficulty of the task, but in this experiment we did not modulate the cloud parameters.</p>
<p>Each
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e157.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is chosen according to a shuffled pseudo-Normal cue sequence
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e158.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(generated by taking uniformly-spaced samples from the inverse cumulative Normal distribution, then shuffled). We devised an algorithm (illustrated in
<italic>
<xref ref-type="supplementary-material" rid="pone.0037547.s002">Figure S2</xref>
</italic>
and described in further detail in
<italic>
<xref ref-type="supplementary-material" rid="pone.0037547.s004">Text S1</xref>
</italic>
) to generate a matrix of cue indices
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e159.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, with 15 columns (one for each trial,
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e160.jpg" mimetype="image"></inline-graphic>
</inline-formula>
) and 15 rows (one for each cue
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e161.jpg" mimetype="image"></inline-graphic>
</inline-formula>
). Each entry of the matrix
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e162.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is an index into
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e163.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, shuffled by our algorithm so as to maximise the unpredictability of each trial while removing uncontrolled sources of uncertainty.</p>
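<p>A minimal sketch of the pseudo-Normal construction (Python/scipy; not the original generator): because the quantiles are deterministic, every sequence of a given length has essentially identical sample statistics, and shuffling only randomises the order in which the evidence arrives.</p>
<preformat>
import numpy as np
from scipy.stats import norm

def pseudo_normal_sequence(n, rng):
    # Standard-Normal quantiles at uniformly spaced probabilities,
    # then shuffled: a unit-variance sequence with fixed sample stats.
    probs = (np.arange(n) + 0.5) / n
    z = norm.ppf(probs)          # inverse cumulative Normal
    rng.shuffle(z)
    return z

z = pseudo_normal_sequence(15, np.random.default_rng(1))
print(z.mean(), z.std(ddof=1))   # approx. 0 and approx. 1 every time
</preformat>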
<p>To generate each
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e164.jpg" mimetype="image"></inline-graphic>
</inline-formula>
on a given trial
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e165.jpg" mimetype="image"></inline-graphic>
</inline-formula>
we use
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e166.jpg" mimetype="image"></inline-graphic>
</inline-formula>
as an index into
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e167.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, then add spatial uncertainty by multiplying by
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e168.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and induce perturbations by shifting the mean of one third of the cues by
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e169.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and the remaining cues by
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e170.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. We vary
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e171.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e172.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e173.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, and the random target location,
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e174.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, for each trial. Hence, we have
<disp-formula>
<graphic xlink:href="pone.0037547.e175"></graphic>
<label>(5)</label>
</disp-formula>
</p>
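<p>In code, the construction of equation 5 for one trial might look as follows. This is a sketch under stated assumptions: the perturbation is applied as a shift of one contiguous third of the cues in units of the cue standard deviation, and the compensating shift applied to the remaining cues in the actual design is omitted for brevity.</p>
<preformat>
import numpy as np

def make_trial_cues(z_seq, sigma, target, perturb, block):
    # Scale the shuffled pseudo-Normal sequence by sigma, centre it on
    # the random target, and shift one third of the cues (equation 5).
    x = target + sigma * np.asarray(z_seq, dtype=float)
    n = len(x)
    start = {"early": 0, "middle": n // 3, "late": 2 * n // 3}[block]
    x[start:start + n // 3] += perturb * sigma
    return x

z_seq = np.random.default_rng(2).normal(size=15)  # stand-in for Z above
cues = make_trial_cues(z_seq, sigma=20.0, target=150.0,
                       perturb=0.5, block="middle")
</preformat>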
<p>This is repeated using the same
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e176.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e177.jpg" mimetype="image"></inline-graphic>
</inline-formula>
for all experimental configurations. Subjects complete multiple sessions for each phase of the experiment using some or all of the above manipulations as previously discussed. Each session uses different instantiations of
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e178.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e179.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, and all sessions within each phase of the experiment are shuffled. Each subject receives different instantiations of
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e180.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e181.jpg" mimetype="image"></inline-graphic>
</inline-formula>
.</p>
</sec>
</sec>
<sec id="s4b">
<title>Data Analysis</title>
<sec id="s4b1">
<title>The ideal observer</title>
<p>During a trial, as samples accumulate, we would expect an
<italic>ideal observer</italic>
to estimate the sample mean and sample variance of the cues seen so far, and to make decisions based on this available evidence. Given
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e182.jpg" mimetype="image"></inline-graphic>
</inline-formula>
cues
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e183.jpg" mimetype="image"></inline-graphic>
</inline-formula>
the unbiased sample mean and sample variance are given by equations 6 and 7:
<disp-formula>
<graphic xlink:href="pone.0037547.e184"></graphic>
<label>(6)</label>
</disp-formula>
<disp-formula>
<graphic xlink:href="pone.0037547.e185"></graphic>
<label>(7)</label>
</disp-formula>
</p>
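<p>In code these estimators are one-liners; the following sketch (ours, not the authors’) computes equations 6 and 7 from the cues seen so far.</p>
<preformat>
import numpy as np

def ideal_observer_estimates(cues_seen):
    """Sample mean (equation 6) and unbiased sample variance (equation 7).
    Requires two or more cues for the variance to be defined."""
    x = np.asarray(cues_seen, dtype=float)
    n = x.size
    mean = x.sum() / n                       # equation (6)
    var = ((x - mean) ** 2).sum() / (n - 1)  # equation (7)
    return mean, var
</preformat>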
<p>In Task 1 the observer’s ideal strategy is to select
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e186.jpg" mimetype="image"></inline-graphic>
</inline-formula>
at time
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e187.jpg" mimetype="image"></inline-graphic>
</inline-formula>
.</p>
<p>One can show that the variance of the sample mean estimator is given by
<disp-formula>
<graphic xlink:href="pone.0037547.e188"></graphic>
<label>(8)</label>
</disp-formula>
</p>
<p>Thus, in Task 2 the ideal observer strategy at time
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e189.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is to select a confidence interval equal to
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e190.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, which is equal to the ideal-observer objective error range (as described in
<italic>Performance Feedback</italic>
).</p>
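<p>A hedged sketch of the resulting Task 2 report: the variance of the sample mean shrinks with the number of cues (equation 8), and the reported confidence window scales with its square root. The scale factor <monospace>k</monospace> stands in for the mapping to the objective error range of equation 4 and is an assumption of this sketch.</p>
<preformat>
import numpy as np

def ideal_confidence_interval(mean, var, n, k=1.0):
    """Ideal-observer confidence window based on equation (8).
    k is a hypothetical scale factor standing in for the objective
    error function of equation (4)."""
    sem = np.sqrt(var / n)   # standard deviation of the sample-mean estimator
    return (mean - k * sem, mean + k * sem)
</preformat>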
</sec>
<sec id="s4b2">
<title>Sensorimotor delay model</title>
<p>The ideal observer can perform instantaneous computations and act on sensory information immediately, but human beings cannot. We therefore consider how the ideal observer would perform in the presence of inevitable sensory, processing and motor delays and noise. We define an ideal-observer model constrained by three global parameters,
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e191.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e192.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e193.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, capturing natural kinematic constraints on hand motion.</p>
<p>Suppose the observer has witnessed
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e194.jpg" mimetype="image"></inline-graphic>
</inline-formula>
cues by time
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e195.jpg" mimetype="image"></inline-graphic>
</inline-formula>
due to sensory delays. We introduce modified estimates of mean and variance from equations 6 and 7:
<disp-formula>
<graphic xlink:href="pone.0037547.e196"></graphic>
<label>(9)</label>
</disp-formula>
<disp-formula>
<graphic xlink:href="pone.0037547.e197"></graphic>
<label>(10)</label>
</disp-formula>
</p>
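<p>The delayed estimators differ from equations 6 and 7 only in which cues are counted as available. A minimal sketch, assuming cue onset times are known (the names are ours):</p>
<preformat>
import numpy as np

def delayed_estimates(cues, onsets, t, delta):
    """Equations (9) and (10): at time t the observer has access only to
    cues presented before t - delta (the sensory delay).  All names are
    illustrative."""
    mask = np.less_equal(np.asarray(onsets) + delta, t)
    seen = np.asarray(cues, dtype=float)[mask]
    if seen.size == 0:
        return None
    mean = seen.mean()
    var = 0.0 if seen.size == 1 else seen.var(ddof=1)
    return mean, var
</preformat>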
<p>In Task 1 subjects can compute
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e198.jpg" mimetype="image"></inline-graphic>
</inline-formula>
from equation 9 to form a time-delayed internal estimate of the mean.</p>
<p>In Task 2 we expect subjects to estimate their objective uncertainty. From equation 10 the ideal observer can calculate the time-delayed variance estimate
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e199.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, which is translated into an objective error range
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e200.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(using the linear
<italic>objective error function</italic>
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e201.jpg" mimetype="image"></inline-graphic>
</inline-formula>
defined previously; see equation 4) to achieve the maximum possible score.</p>
<p>In addition to sensory delays we introduce motion constraints. At time
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e202.jpg" mimetype="image"></inline-graphic>
</inline-formula>
let us define the
<italic>reported estimate</italic>
(i.e. the position of the cursor) as
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e203.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, and the
<italic>perceived estimate</italic>
(i.e. our time-delayed internal estimate of the mean) as
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e204.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. In our formulation we model the observer as making discrete steps of size
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e205.jpg" mimetype="image"></inline-graphic>
</inline-formula>
so that the reported estimate smoothly converges to the perceived estimate. The model constrains motion using two parameters: a maximum speed parameter,
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e206.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, which constrains the maximum displacement the observer can make in a given time-step; and a momentum parameter,
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e207.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, which prevents sudden speed changes by smoothing these displacements over time, i.e.
<disp-formula>
<graphic xlink:href="pone.0037547.e208"></graphic>
<label>(11)</label>
</disp-formula>
<disp-formula>
<graphic xlink:href="pone.0037547.e209"></graphic>
<label>(12)</label>
</disp-formula>
<disp-formula>
<graphic xlink:href="pone.0037547.e210"></graphic>
<label>(13)</label>
</disp-formula>
</p>
<p>Note that the model applies to both mean and confidence judgements: for Task 1 we set
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e211.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, and for Task 2 we replace
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e212.jpg" mimetype="image"></inline-graphic>
</inline-formula>
with
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e213.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(the width of the cursor at time
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e214.jpg" mimetype="image"></inline-graphic>
</inline-formula>
) and set
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e215.jpg" mimetype="image"></inline-graphic>
</inline-formula>
.</p>
<p>We add one additional parameter to the confidence estimation model, a
<italic>bias</italic>
term,
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e216.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. In equation 12 this replaces the term
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e217.jpg" mimetype="image"></inline-graphic>
</inline-formula>
with
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e218.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. This can be thought of as a safety margin or a constant systematic error. We consider it a suboptimal component of the model, whereas the other parameters capture natural kinematic limitations.</p>
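<p>To make the dynamics concrete, the following sketch simulates a cursor trajectory under these constraints. The exact update rule is our reading of equations 11-13 (with the bias term folded in), so treat the details as assumptions rather than the authors’ implementation.</p>
<preformat>
import numpy as np

def simulate_report(perceived, v_max, momentum, bias=0.0, dt=1.0):
    """Constrained-motion sketch of equations (11)-(13).

    perceived : internal estimate tracked at each time-step (the delayed
                mean for Task 1; the target width plus bias for Task 2)
    v_max     : maximum displacement per time-step
    momentum  : smoothing factor in [0, 1) preventing sudden speed changes
    """
    perceived = np.asarray(perceived, dtype=float)
    reported = np.empty_like(perceived)
    pos, step = perceived[0], 0.0
    for i, target in enumerate(perceived):
        desired = np.clip(target + bias - pos, -v_max * dt, v_max * dt)
        step = momentum * step + (1.0 - momentum) * desired  # smoothed step
        pos += step
        reported[i] = pos
    return reported
</preformat>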
<p>
<xref ref-type="fig" rid="pone-0037547-g007">Figure 7</xref>
illustrates the effect of each of these parameters on model trajectories.</p>
</sec>
<sec id="s4b3">
<title>Weight regression</title>
<p>To compute the contribution of each cue in the trial to the empirically observed trajectory, we perform a multiple linear regression at each time-step using the non-negative least-squares algorithm described in
<xref ref-type="bibr" rid="pone.0037547-Lawson1">[34]</xref>
. For full details see
<italic>
<xref ref-type="supplementary-material" rid="pone.0037547.s005">Text S2</xref>
</italic>
.</p>
<p>For Task 1 we regress the pixel
<italic>location</italic>
of the cues in the trial (including those not yet seen), plus an additional constant, to the
<italic>position</italic>
of the cursor at that time.</p>
<p>For Task 2 we regress the
<italic>absolute deviation</italic>
of the pixel locations of the cues in the trial from the sample mean, plus an additional constant, to the
<italic>width</italic>
of the cursor at that time.</p>
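<p>The Lawson and Hanson algorithm cited above is available in standard libraries; a minimal sketch of one per-time-step regression follows (variable names are ours, and whether the constant column is also constrained to be non-negative is an assumption of this sketch).</p>
<preformat>
import numpy as np
from scipy.optimize import nnls  # Lawson and Hanson non-negative least squares [34]

def cue_weights_at_time(cue_features, responses):
    """One per-time-step regression (see Text S2).

    cue_features : (trials x cues) matrix of cue pixel locations (Task 1)
                   or absolute deviations from the sample mean (Task 2)
    responses    : per-trial cursor position (Task 1) or width (Task 2)
                   at this time-step
    """
    A = np.column_stack([cue_features, np.ones(len(responses))])  # constant term
    weights, _ = nnls(A, np.asarray(responses, dtype=float))
    return weights[:-1], weights[-1]  # cue weights, systematic component
</preformat>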
</sec>
<sec id="s4b4">
<title>Model parameter learning</title>
<p>Our model (see
<italic>Sensorimotor Delay Model</italic>
) has relatively few parameters. We optimise these parameters to achieve the best fit to the data, but note that this process does not confound our claims: the model does not modify the magnitude of the weights assigned to each cue; it merely constrains the trajectory through which a decision manifests itself.</p>
<p>Using the same weight regression technique for the model data we compute a parametrised weight matrix
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e219.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. For full details see
<italic>
<xref ref-type="supplementary-material" rid="pone.0037547.s005">Text S2</xref>
</italic>
. We minimise the square of the difference between
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e220.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and the empirical
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e221.jpg" mimetype="image"></inline-graphic>
</inline-formula>
with respect to the model parameters using the constrained interior-reflective Newton minimisation method described in
<xref ref-type="bibr" rid="pone.0037547-Coleman1">[35]</xref>
,
<xref ref-type="bibr" rid="pone.0037547-Coleman2">[36]</xref>
, implemented in Matlab (Mathworks Inc., USA). To improve the rate of convergence we normalise the systematic weight terms
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e222.jpg" mimetype="image"></inline-graphic>
</inline-formula>
prior to minimisation, to compensate for their excessive magnitude relative to the cue weights.</p>
<p>For the mean estimation model we set
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e223.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and do not optimise it. This suboptimal term is not necessary to explain the gross features of the data.</p>
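<p>For readers without Matlab, SciPy exposes a trust-region-reflective solver in the same family as the methods of [35], [36]; a hedged sketch of the fitting step (function and parameter names are ours) is given below.</p>
<preformat>
import numpy as np
from scipy.optimize import least_squares

def fit_model_parameters(W_empirical, model_W, theta0, bounds):
    """Minimise the squared difference between the parametrised weight
    matrix W(theta) and the empirical weight matrix.  model_W is a
    hypothetical function returning the model weight matrix for a given
    parameter vector theta."""
    def residuals(theta):
        return (model_W(theta) - W_empirical).ravel()
    # method="trf" is SciPy's trust-region-reflective algorithm,
    # related to the interior-reflective Newton methods of [35], [36]
    return least_squares(residuals, theta0, bounds=bounds, method="trf")
</preformat>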
</sec>
</sec>
</sec>
<sec sec-type="supplementary-material" id="s5">
<title>Supporting Information</title>
<supplementary-material content-type="local-data" id="pone.0037547.s001">
<label>Figure S1</label>
<caption>
<p>
<bold>Overall Task Performance.</bold>
In this figure we show the final absolute deviation of the cursor from the target location for different levels of uncertainty in Task 1 and Task 2. Trials with perturbations are excluded. Note that both tasks yield indistinguishable mean-estimation performance, indicating that ability at Task 2 is not compromised by the additional demands of the task. We posit that Task 1 performance is a good indicator of Task 2 performance.</p>
<p>(TIF)</p>
</caption>
<media xlink:href="pone.0037547.s001.tif" mimetype="image" mime-subtype="tiff">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="pone.0037547.s002">
<label>Figure S2</label>
<caption>
<p>
<bold>Pseudo-Random Cue Sequence Generation.</bold>
In this figure we illustrate the Saundoku algorithm for generating pseudo-random cue sequences. The purpose of this method is to ensure that cues are counterbalanced across trials, minimising systematic biases in the data while providing subjects with no additional information that could aid their performance at the task.
<bold>(A) Cue generation.</bold>
The sequence of cues to be used for a trial is generated from a pseudo-Normal distribution, created by sampling the inverse cumulative Normal distribution function at equally spaced intervals (red blobs). The output (black blobs) is distributed
<italic>pseudo-Normally</italic>
, i.e. as the number of samples increases the histogram of the samples converges on the Normal probability density function. These samples are shuffled (blue blobs) to provide a cue sequence. The method of shuffling is illustrated in sub-plots B-E.
<bold>(B) Initial Cues.</bold>
We create a square
<italic>shuffle matrix</italic>
with rows for cue number (in time) and columns for trial number. Each matrix entry corresponds to a cue generated in sub-plot A. We initialise the matrix with diagonals as shown, ensuring that each cue appears exactly once in each trial, and in every trial. For clarity the figure shows 60 cues per trial and therefore 60 trials per condition, but in practice we have only 15 cues per trial and 15 trials per condition.
<bold>(C) Trial Shuffle.</bold>
We randomise the order of trials to reduce the correlation between neighbouring trials. This does not violate the constraint that each cue appears only once in each sequence, and in every trial.
<bold>(D) Partial Cue shuffle.</bold>
We then randomise the order of cues within each trial, but we limit the shuffling to within the first, second and final third of the sequence. This maintains the constraint that each cue appears only once in each sequence, and in every trial, and adds the additional constraint that each third contains all cues an equal number of times.
<bold>(E) Random Seed.</bold>
Finally, each entry of the matrix indexes into the shuffled pseudo-Normal sequence in sub-plot A. The resulting plot appears completely random, but we know the correlations between trials, and we know the average mean and variance of the first, second and final third of each cue sequence across all trials.</p>
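<p>A minimal sketch of the pseudo-Normal generation step in sub-plot A (our illustration, not the authors’ code): sample the inverse cumulative Normal at equally spaced quantiles, then shuffle.</p>
<preformat>
import numpy as np
from scipy.stats import norm

def pseudo_normal_sequence(n, rng=None):
    """Sample the inverse cumulative Normal at n equally spaced
    quantiles, then shuffle; as n grows the sample histogram
    converges on the Normal density (Figure S2A)."""
    quantiles = (np.arange(n) + 0.5) / n  # equally spaced in (0, 1)
    samples = norm.ppf(quantiles)
    rng = np.random.default_rng() if rng is None else rng
    rng.shuffle(samples)
    return samples
</preformat>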
<p>(TIF)</p>
</caption>
<media xlink:href="pone.0037547.s002.tif" mimetype="image" mime-subtype="tiff">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="pone.0037547.s003">
<label>Figure S3</label>
<caption>
<p>
<bold>Confidence-Estimation Model Weights.</bold>
To measure the evolution of cue weights we perform a linear regression relating the deviation of each cue in the sequence from the current mean estimate to the confidence window width, using data from all trajectories (see main text
<italic>
<xref ref-type="sec" rid="s4">Materials and Methods</xref>
</italic>
). In this figure we illustrate the poor match between the empirically observed weights and the model predictions.
<bold>(A) Empirical Data Integration Windows.</bold>
At different time-steps in the trial (indicated by coloured arrows) we compute the weight allocated to all cues in the sequence (coloured curves)
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e224.jpg" mimetype="image"></inline-graphic>
</inline-formula>
the SEM across subjects. The weights assigned to future cues are not shown. This plot reveals that the decision at each time step is a weighted average of the cue deviations observed up to that point. These weight profiles do not match the model (as we see in E).
<bold>(B) Empirical Data Cue Evolution.</bold>
An alternative visualisation of cue weight evolution shows how the weight allocated to each cue evolves over the time-course of a trial. We do not show the weight allocated to a cue before it has been seen. This plot reveals that, shortly after being seen, each cue’s weight increases as it contributes to the estimate, then gradually decays. These weight profiles do not match the model and rise much more slowly (as we see in F).
<bold>(C) Empirical Weights.</bold>
The weight matrix
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e225.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, excluding the systematic component, captures the evolution of cue weights over time (see main text
<italic>
<xref ref-type="sec" rid="s4">Materials and Methods</xref>
</italic>
). When visualised in this way, using colour to represent cue weight, we can see the initial response delay and the evolution of cue combination, as summarised in A and B. This weight matrix only roughly matches the model (as we see in the plot of
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e226.jpg" mimetype="image"></inline-graphic>
</inline-formula>
in G), but the high level of noise makes it difficult to reliably fit the model to the data.
<bold>(D) Empirical Systematic Bias.</bold>
In computing the regression of cue to decision we allow for a systematic component to capture the variability in the trajectory that is not explained by the cue weights. Our model roughly predicts the shape of this systematic component.
<bold>(E-F) Model Predictions</bold>
for comparison, with four parameters (
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e227.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e228.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e229.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e230.jpg" mimetype="image"></inline-graphic>
</inline-formula>
) optimised to minimise the difference between
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e231.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0037547.e232.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(plots C and G).</p>
<p>(TIF)</p>
</caption>
<media xlink:href="pone.0037547.s003.tif" mimetype="image" mime-subtype="tiff">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="pone.0037547.s004">
<label>Text S1</label>
<caption>
<p>
<bold>Shuffled pseudo-Normal cue sequence generation.</bold>
Further details of the cue sequence generation process.</p>
<p>(PDF)</p>
</caption>
<media xlink:href="pone.0037547.s004.pdf" mimetype="application" mime-subtype="pdf">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="pone.0037547.s005">
<label>Text S2</label>
<caption>
<p>
<bold>Cue weight regression algorithm.</bold>
Further details of the method used to compute the contribution of each cue in the trial to the empirically observed trajectory.</p>
<p>(PDF)</p>
</caption>
<media xlink:href="pone.0037547.s005.pdf" mimetype="application" mime-subtype="pdf">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="pone.0037547.s006">
<label>Video S1</label>
<caption>
<p>
<bold>Video showing the evolution of the weights contributing to the mean estimate in Task 2.</bold>
The contribution of each weight forms an integration window which changes as evidence arrives. Note that at the final time step the integration window is flat for both the empirical data and the model, indicative of optimal integration weights.</p>
<p>(AVI)</p>
</caption>
<media xlink:href="pone.0037547.s006.avi" mimetype="video" mime-subtype="x-msvideo">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
</sec>
</body>
<back>
<ack>
<p>We would like to thank Dr. O.R.O Oyebode, L. Acerbi and E. Overlingaite for their helpful advice. We would also like to thank the reviewers for their insightful and valuable comments.</p>
</ack>
<fn-group>
<fn fn-type="conflict">
<p>
<bold>Competing Interests: </bold>
Prof. S. Vijayakumar is supported through a Microsoft Research Royal Academy of Engineering senior research fellowship. This does not alter the authors’ adherence to all the PLoS ONE policies on sharing data and materials.</p>
</fn>
<fn fn-type="financial-disclosure">
<p>
<bold>Funding: </bold>
This work was supported by an Engineering and Physical Sciences Research Council / Medical Research Council scholarship from the Neuroinformatics and Computational Neuroscience Doctoral Training Centre at the University of Edinburgh. Prof. S. Vijayakumar is supported through a Microsoft Research Royal Academy of Engineering senior research fellowship. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.</p>
</fn>
</fn-group>
<ref-list>
<title>References</title>
<ref id="pone.0037547-Krding1">
<label>1</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Körding</surname>
<given-names>KP</given-names>
</name>
<name>
<surname>Wolpert</surname>
<given-names>DM</given-names>
</name>
</person-group>
<year>2004</year>
<article-title>Bayesian integration in sensorimotor learning.</article-title>
<source>Nature</source>
<volume>427</volume>
<fpage>244</fpage>
<lpage>247</lpage>
<pub-id pub-id-type="pmid">14724638</pub-id>
</element-citation>
</ref>
<ref id="pone.0037547-Faisal1">
<label>2</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Faisal</surname>
<given-names>AA</given-names>
</name>
<name>
<surname>Wolpert</surname>
<given-names>DM</given-names>
</name>
</person-group>
<year>2009</year>
<article-title>Near optimal combination of sensory and motor uncertainty in time during a naturalistic perception-action task.</article-title>
<source>J Neurophysiol</source>
<volume>101</volume>
<fpage>1901</fpage>
<lpage>1912</lpage>
<pub-id pub-id-type="pmid">19109455</pub-id>
</element-citation>
</ref>
<ref id="pone.0037547-Landy1">
<label>3</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Landy</surname>
<given-names>MS</given-names>
</name>
<name>
<surname>Maloney</surname>
<given-names>LT</given-names>
</name>
<name>
<surname>Johnston</surname>
<given-names>EB</given-names>
</name>
<name>
<surname>Young</surname>
<given-names>M</given-names>
</name>
</person-group>
<year>1995</year>
<article-title>Measurement and modeling of depth cue combination: in defense of weak fusion.</article-title>
<source>Vision Research</source>
<volume>35</volume>
<fpage>389</fpage>
<lpage>412</lpage>
<pub-id pub-id-type="pmid">7892735</pub-id>
</element-citation>
</ref>
<ref id="pone.0037547-Jacobs1">
<label>4</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jacobs</surname>
<given-names>RA</given-names>
</name>
</person-group>
<year>1999</year>
<article-title>Optimal integration of texture and motion cues to depth.</article-title>
<source>Vision Res</source>
<volume>39</volume>
<fpage>3621</fpage>
<lpage>3629</lpage>
<pub-id pub-id-type="pmid">10746132</pub-id>
</element-citation>
</ref>
<ref id="pone.0037547-Hillis1">
<label>5</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hillis</surname>
<given-names>JM</given-names>
</name>
<name>
<surname>Watt</surname>
<given-names>SJ</given-names>
</name>
<name>
<surname>Landy</surname>
<given-names>MS</given-names>
</name>
<name>
<surname>Banks</surname>
<given-names>MS</given-names>
</name>
</person-group>
<year>2004</year>
<article-title>Slant from texture and disparity cues: optimal cue combination.</article-title>
<source>J Vis</source>
<volume>4</volume>
<fpage>967</fpage>
<lpage>992</lpage>
<pub-id pub-id-type="pmid">15669906</pub-id>
</element-citation>
</ref>
<ref id="pone.0037547-Ernst1">
<label>6</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ernst</surname>
<given-names>MO</given-names>
</name>
<name>
<surname>Banks</surname>
<given-names>MS</given-names>
</name>
</person-group>
<year>2002</year>
<article-title>Humans integrate visual and haptic information in a statistically optimal fashion.</article-title>
<source>Nature</source>
<volume>415</volume>
<fpage>429</fpage>
<lpage>433</lpage>
<pub-id pub-id-type="pmid">11807554</pub-id>
</element-citation>
</ref>
<ref id="pone.0037547-Heron1">
<label>7</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Heron</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Whitaker</surname>
<given-names>D</given-names>
</name>
<name>
<surname>McGraw</surname>
<given-names>PV</given-names>
</name>
</person-group>
<year>2004</year>
<article-title>Sensory uncertainty governs the extent of audio-visual interaction.</article-title>
<source>Vision Res</source>
<volume>44</volume>
<fpage>2875</fpage>
<lpage>2884</lpage>
<pub-id pub-id-type="pmid">15380993</pub-id>
</element-citation>
</ref>
<ref id="pone.0037547-Wallace1">
<label>8</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wallace</surname>
<given-names>MT</given-names>
</name>
<name>
<surname>Roberson</surname>
<given-names>GE</given-names>
</name>
<name>
<surname>Hairston</surname>
<given-names>WD</given-names>
</name>
<name>
<surname>Stein</surname>
<given-names>BE</given-names>
</name>
<name>
<surname>Vaughan</surname>
<given-names>JW</given-names>
</name>
<etal></etal>
</person-group>
<year>2004</year>
<article-title>Unifying multisensory signals across time and space.</article-title>
<source>Exp Brain Res</source>
<volume>158</volume>
<fpage>252</fpage>
<lpage>258</lpage>
<pub-id pub-id-type="pmid">15112119</pub-id>
</element-citation>
</ref>
<ref id="pone.0037547-Helbig1">
<label>9</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Helbig</surname>
<given-names>HB</given-names>
</name>
<name>
<surname>Ernst</surname>
<given-names>MO</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>Visual-haptic cue weighting is independent of modality-specific attention.</article-title>
<source>J Vis</source>
<volume>8</volume>
<fpage>21.1</fpage>
<lpage>21.16</lpage>
<pub-id pub-id-type="pmid">18318624</pub-id>
</element-citation>
</ref>
<ref id="pone.0037547-Knill1">
<label>10</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Knill</surname>
<given-names>DC</given-names>
</name>
<name>
<surname>Saunders</surname>
<given-names>JA</given-names>
</name>
</person-group>
<year>2003</year>
<article-title>Do humans optimally integrate stereo and texture information for judgments of surface slant?</article-title>
<source>Vision Res</source>
<volume>43</volume>
<fpage>2539</fpage>
<lpage>2558</lpage>
<pub-id pub-id-type="pmid">13129541</pub-id>
</element-citation>
</ref>
<ref id="pone.0037547-Rosas1">
<label>11</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rosas</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Wagemans</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Ernst</surname>
<given-names>MO</given-names>
</name>
<name>
<surname>Wichmann</surname>
<given-names>FA</given-names>
</name>
</person-group>
<year>2005</year>
<article-title>Texture and haptic cues in slant discrimination: reliability-based cue weighting without statistically optimal cue combination.</article-title>
<source>J Opt Soc Am A Opt Image Sci Vis</source>
<volume>22</volume>
<fpage>801</fpage>
<lpage>809</lpage>
<pub-id pub-id-type="pmid">15898539</pub-id>
</element-citation>
</ref>
<ref id="pone.0037547-Tassinari1">
<label>12</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tassinari</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Hudson</surname>
<given-names>TE</given-names>
</name>
<name>
<surname>Landy</surname>
<given-names>MS</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>Combining priors and noisy visual cues in a rapid pointing task.</article-title>
<source>J Neurosci</source>
<volume>26</volume>
<fpage>10154</fpage>
<lpage>10163</lpage>
<pub-id pub-id-type="pmid">17021171</pub-id>
</element-citation>
</ref>
<ref id="pone.0037547-Burr1">
<label>13</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Burr</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Banks</surname>
<given-names>MS</given-names>
</name>
<name>
<surname>Morrone</surname>
<given-names>MC</given-names>
</name>
</person-group>
<year>2009</year>
<article-title>Auditory dominance over vision in the perception of interval duration.</article-title>
<source>Exp Brain Res</source>
<volume>198</volume>
<fpage>49</fpage>
<lpage>57</lpage>
<pub-id pub-id-type="pmid">19597804</pub-id>
</element-citation>
</ref>
<ref id="pone.0037547-Jacobs2">
<label>14</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jacobs</surname>
<given-names>RA</given-names>
</name>
</person-group>
<year>2002</year>
<article-title>What determines visual cue reliability?</article-title>
<source>Trends Cogn Sci</source>
<volume>6</volume>
<fpage>345</fpage>
<lpage>350</lpage>
<pub-id pub-id-type="pmid">12140085</pub-id>
</element-citation>
</ref>
<ref id="pone.0037547-Barthelm1">
<label>15</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Barthelmé</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Mamassian</surname>
<given-names>P</given-names>
</name>
</person-group>
<year>2009</year>
<article-title>Evaluation of objective uncertainty in the visual system.</article-title>
<source>PLoS Comput Biol</source>
<volume>5</volume>
<fpage>e1000504</fpage>
<pub-id pub-id-type="pmid">19750003</pub-id>
</element-citation>
</ref>
<ref id="pone.0037547-Graf1">
<label>16</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Graf</surname>
<given-names>EW</given-names>
</name>
<name>
<surname>Warren</surname>
<given-names>PA</given-names>
</name>
<name>
<surname>Maloney</surname>
<given-names>LT</given-names>
</name>
</person-group>
<year>2005</year>
<article-title>Explicit estimation of visual uncertainty in human motion processing.</article-title>
<source>Vision Res</source>
<volume>45</volume>
<fpage>3050</fpage>
<lpage>3059</lpage>
<pub-id pub-id-type="pmid">16182335</pub-id>
</element-citation>
</ref>
<ref id="pone.0037547-Kepecs1">
<label>17</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kepecs</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Uchida</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Zariwala</surname>
<given-names>HA</given-names>
</name>
<name>
<surname>Mainen</surname>
<given-names>ZF</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>Neural correlates, computation and behavioural impact of decision confidence.</article-title>
<source>Nature</source>
<volume>455</volume>
<fpage>227</fpage>
<lpage>231</lpage>
<pub-id pub-id-type="pmid">18690210</pub-id>
</element-citation>
</ref>
<ref id="pone.0037547-Alais1">
<label>18</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Alais</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Burr</surname>
<given-names>D</given-names>
</name>
</person-group>
<year>2004</year>
<article-title>The ventriloquist effect results from near-optimal bimodal integration.</article-title>
<source>Curr Biol</source>
<volume>14</volume>
<fpage>257</fpage>
<lpage>262</lpage>
<pub-id pub-id-type="pmid">14761661</pub-id>
</element-citation>
</ref>
<ref id="pone.0037547-Helbig2">
<label>19</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Helbig</surname>
<given-names>HB</given-names>
</name>
<name>
<surname>Ernst</surname>
<given-names>MO</given-names>
</name>
</person-group>
<year>2007</year>
<article-title>Optimal integration of shape information from vision and touch.</article-title>
<source>Exp Brain Res</source>
<volume>179</volume>
<fpage>595</fpage>
<lpage>606</lpage>
<pub-id pub-id-type="pmid">17225091</pub-id>
</element-citation>
</ref>
<ref id="pone.0037547-Barthelm2">
<label>20</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Barthelmé</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Mamassian</surname>
<given-names>P</given-names>
</name>
</person-group>
<year>2010</year>
<article-title>Flexible mechanisms underlie the evaluation of visual confidence.</article-title>
<source>Proc Natl Acad Sci U S A</source>
<volume>107</volume>
<fpage>20834</fpage>
<lpage>20839</lpage>
<pub-id pub-id-type="pmid">21076036</pub-id>
</element-citation>
</ref>
<ref id="pone.0037547-Resulaj1">
<label>21</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Resulaj</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Kiani</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Wolpert</surname>
<given-names>DM</given-names>
</name>
<name>
<surname>Shadlen</surname>
<given-names>MN</given-names>
</name>
</person-group>
<year>2009</year>
<article-title>Changes of mind in decision-making.</article-title>
<source>Nature</source>
<volume>461</volume>
<fpage>263</fpage>
<lpage>266</lpage>
<pub-id pub-id-type="pmid">19693010</pub-id>
</element-citation>
</ref>
<ref id="pone.0037547-Gepshtein1">
<label>22</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gepshtein</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Banks</surname>
<given-names>MS</given-names>
</name>
</person-group>
<year>2003</year>
<article-title>Viewing geometry determines how vision and haptics combine in size perception.</article-title>
<source>Curr Biol</source>
<volume>13</volume>
<fpage>483</fpage>
<lpage>488</lpage>
<pub-id pub-id-type="pmid">12646130</pub-id>
</element-citation>
</ref>
<ref id="pone.0037547-Gepshtein2">
<label>23</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gepshtein</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Burge</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Ernst</surname>
<given-names>MO</given-names>
</name>
<name>
<surname>Banks</surname>
<given-names>MS</given-names>
</name>
</person-group>
<year>2005</year>
<article-title>The combination of vision and touch depends on spatial proximity.</article-title>
<source>J Vis</source>
<volume>5</volume>
<fpage>1013</fpage>
<lpage>1023</lpage>
<pub-id pub-id-type="pmid">16441199</pub-id>
</element-citation>
</ref>
<ref id="pone.0037547-Nassar1">
<label>24</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nassar</surname>
<given-names>MR</given-names>
</name>
<name>
<surname>Wilson</surname>
<given-names>RC</given-names>
</name>
<name>
<surname>Heasly</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Gold</surname>
<given-names>JI</given-names>
</name>
</person-group>
<year>2010</year>
<article-title>An approximately bayesian delta-rule model explains the dynamics of belief updating in a changing environment.</article-title>
<source>J Neurosci</source>
<volume>30</volume>
<fpage>12366</fpage>
<lpage>12378</lpage>
<pub-id pub-id-type="pmid">20844132</pub-id>
</element-citation>
</ref>
<ref id="pone.0037547-Bjrkman1">
<label>25</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Björkman</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Juslin</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Winman</surname>
<given-names>A</given-names>
</name>
</person-group>
<year>1993</year>
<article-title>Realism of confidence in sensory discrimination: The underconfidence phenomenon.</article-title>
<source>Attention, Perception, & Psychophysics</source>
<volume>54</volume>
<fpage>75</fpage>
<lpage>81</lpage>
</element-citation>
</ref>
<ref id="pone.0037547-Brenner1">
<label>26</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Brenner</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Smeets</surname>
<given-names>JBJ</given-names>
</name>
</person-group>
<year>2011</year>
<article-title>Continuous visual control of interception.</article-title>
<source>Hum Mov Sci</source>
<volume>30</volume>
<fpage>475</fpage>
<lpage>494</lpage>
<pub-id pub-id-type="pmid">21353717</pub-id>
</element-citation>
</ref>
<ref id="pone.0037547-Trommershuser1">
<label>27</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Trommershäuser</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Maloney</surname>
<given-names>LT</given-names>
</name>
<name>
<surname>Landy</surname>
<given-names>MS</given-names>
</name>
</person-group>
<year>2003</year>
<article-title>Statistical decision theory and the selection of rapid, goal-directed movements.</article-title>
<source>J Opt Soc Am A Opt Image Sci Vis</source>
<volume>20</volume>
<fpage>1419</fpage>
<lpage>1433</lpage>
</element-citation>
</ref>
<ref id="pone.0037547-Schot1">
<label>28</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Schot</surname>
<given-names>WD</given-names>
</name>
<name>
<surname>Brenner</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Smeets</surname>
<given-names>JBJ</given-names>
</name>
</person-group>
<year>2011</year>
<article-title>Grasping and hitting moving objects.</article-title>
<source>Exp Brain Res</source>
<volume>212</volume>
<fpage>487</fpage>
<lpage>496</lpage>
<pub-id pub-id-type="pmid">21667040</pub-id>
</element-citation>
</ref>
<ref id="pone.0037547-vanBeers1">
<label>29</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>van Beers</surname>
<given-names>RJ</given-names>
</name>
<name>
<surname>Sittig</surname>
<given-names>AC</given-names>
</name>
<name>
<surname>Gon</surname>
<given-names>JJ</given-names>
</name>
</person-group>
<year>1999</year>
<article-title>Integration of proprioceptive and visual position information: An experimentally supported model.</article-title>
<source>J Neurophysiol</source>
<volume>81</volume>
<fpage>1355</fpage>
<lpage>1364</lpage>
<pub-id pub-id-type="pmid">10085361</pub-id>
</element-citation>
</ref>
<ref id="pone.0037547-Oru1">
<label>30</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Oruç</surname>
<given-names>I</given-names>
</name>
<name>
<surname>Maloney</surname>
<given-names>LT</given-names>
</name>
<name>
<surname>Landy</surname>
<given-names>MS</given-names>
</name>
</person-group>
<year>2003</year>
<article-title>Weighted linear cue combination with possibly correlated error.</article-title>
<source>Vision Res</source>
<volume>43</volume>
<fpage>2451</fpage>
<lpage>2468</lpage>
<pub-id pub-id-type="pmid">12972395</pub-id>
</element-citation>
</ref>
<ref id="pone.0037547-Kiani1">
<label>31</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kiani</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Shadlen</surname>
<given-names>MN</given-names>
</name>
</person-group>
<year>2009</year>
<article-title>Representation of confidence associated with a decision by neurons in the parietal cortex.</article-title>
<source>Science</source>
<volume>324</volume>
<fpage>759</fpage>
<lpage>764</lpage>
<pub-id pub-id-type="pmid">19423820</pub-id>
</element-citation>
</ref>
<ref id="pone.0037547-Pouget1">
<label>32</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pouget</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Dayan</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Zemel</surname>
<given-names>R</given-names>
</name>
</person-group>
<year>2000</year>
<article-title>Information processing with population codes.</article-title>
<source>Nat Rev Neurosci</source>
<volume>1</volume>
<fpage>125</fpage>
<lpage>132</lpage>
<pub-id pub-id-type="pmid">11252775</pub-id>
</element-citation>
</ref>
<ref id="pone.0037547-Saunders1">
<label>33</label>
<element-citation publication-type="other">
<person-group person-group-type="author">
<name>
<surname>Saunders</surname>
<given-names>I</given-names>
</name>
</person-group>
<year>2012</year>
<article-title>A Closed-Loop Prosthetic Hand: Understanding Sensorimotor and Multisensory Integration under Uncertainty. Ph.D. thesis, University of Edinburgh.</article-title>
<source>URL</source>
<comment>
<ext-link ext-link-type="uri" xlink:href="http://tiny.cc/wkspv">http://tiny.cc/wkspv</ext-link>
</comment>
</element-citation>
</ref>
<ref id="pone.0037547-Lawson1">
<label>34</label>
<element-citation publication-type="other">
<person-group person-group-type="author">
<name>
<surname>Lawson</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Hanson</surname>
<given-names>R</given-names>
</name>
</person-group>
<year>1974</year>
<article-title>Solving least squares problems.</article-title>
<source>Prentice-Hall. Chapter 23, p. 161</source>
</element-citation>
</ref>
<ref id="pone.0037547-Coleman1">
<label>35</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Coleman</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>Y</given-names>
</name>
</person-group>
<year>1994</year>
<article-title>On the convergence of reflective newton methods for large-scale nonlinear minimization subject to bounds.</article-title>
<source>Mathematical Programming</source>
<volume>67</volume>
<fpage>189</fpage>
<lpage>224</lpage>
</element-citation>
</ref>
<ref id="pone.0037547-Coleman2">
<label>36</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Coleman</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>Y</given-names>
</name>
</person-group>
<year>1996</year>
<article-title>An interior, trust region approach for nonlinear minimization subject to bounds.</article-title>
<source>SIAM Journal on Optimization</source>
<volume>6</volume>
<fpage>418</fpage>
<lpage>445</lpage>
</element-citation>
</ref>
</ref-list>
</back>
</pmc>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Pmc/Curation
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 002206 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd -nk 002206 | SxmlIndent | more

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Pmc
   |étape=   Curation
   |type=    RBID
   |clé=     PMC:3382620
   |texte=   Continuous Evolution of Statistical Estimators for Optimal Decision-Making
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Curation/RBID.i   -Sk "pubmed:22761657" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024