Exploration Server for Haptic Devices

Warning: this site is under development!
Warning: this site is generated by computational means from raw corpora.
The information has therefore not been validated.

Temporal context calibrates interval timing

Internal identifier: 001D22 (Pmc/Checkpoint); previous: 001D21; next: 001D23

Authors: Mehrdad Jazayeri [United States]; Michael N. Shadlen [United States]

Source:

RBID: PMC:2916084

Abstract

We use our sense of time to identify temporal relationships between events and to anticipate actions. How well we can exploit temporal contingencies depends on the variability of our measurements of time. We asked humans to reproduce time intervals drawn from different underlying distributions. As expected, production times were more variable for longer intervals. Surprisingly, however, production times exhibited a systematic regression towards the mean. Consequently, estimates for a sample interval differed depending on the distribution from which it was drawn. A performance-optimizing Bayesian model that takes the underlying distribution of samples into account provided an accurate description of subjects’ performance, variability and bias. This finding suggests that the central nervous system incorporates knowledge about temporal uncertainty to adapt internal timing mechanisms to the temporal statistics of the environment.


URL:
DOI: 10.1038/nn.2590
PubMed: 20581842
PubMed Central: 2916084



The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Temporal context calibrates interval timing</title>
<author>
<name sortKey="Jazayeri, Mehrdad" sort="Jazayeri, Mehrdad" uniqKey="Jazayeri M" first="Mehrdad" last="Jazayeri">Mehrdad Jazayeri</name>
<affiliation>
<nlm:aff id="A1"> Helen Hay Whitney Foundation</nlm:aff>
</affiliation>
<affiliation wicri:level="2">
<nlm:aff id="A2"> HHMI, NPRC, Department of Physiology and Biophysics, University of Washington, Seattle, Washington</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<placeName>
<region type="state">Washington (État)</region>
</placeName>
<wicri:cityArea> HHMI, NPRC, Department of Physiology and Biophysics, University of Washington, Seattle</wicri:cityArea>
</affiliation>
</author>
<author>
<name sortKey="Shadlen, Michael N" sort="Shadlen, Michael N" uniqKey="Shadlen M" first="Michael N." last="Shadlen">Michael N. Shadlen</name>
<affiliation wicri:level="2">
<nlm:aff id="A2"> HHMI, NPRC, Department of Physiology and Biophysics, University of Washington, Seattle, Washington</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<placeName>
<region type="state">Washington (État)</region>
</placeName>
<wicri:cityArea> HHMI, NPRC, Department of Physiology and Biophysics, University of Washington, Seattle</wicri:cityArea>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">20581842</idno>
<idno type="pmc">2916084</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2916084</idno>
<idno type="RBID">PMC:2916084</idno>
<idno type="doi">10.1038/nn.2590</idno>
<date when="2010">2010</date>
<idno type="wicri:Area/Pmc/Corpus">002554</idno>
<idno type="wicri:Area/Pmc/Curation">002554</idno>
<idno type="wicri:Area/Pmc/Checkpoint">001D22</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Temporal context calibrates interval timing</title>
<author>
<name sortKey="Jazayeri, Mehrdad" sort="Jazayeri, Mehrdad" uniqKey="Jazayeri M" first="Mehrdad" last="Jazayeri">Mehrdad Jazayeri</name>
<affiliation>
<nlm:aff id="A1"> Helen Hay Whitney Foundation</nlm:aff>
</affiliation>
<affiliation wicri:level="2">
<nlm:aff id="A2"> HHMI, NPRC, Department of Physiology and Biophysics, University of Washington, Seattle, Washington</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<placeName>
<region type="state">Washington (État)</region>
</placeName>
<wicri:cityArea> HHMI, NPRC, Department of Physiology and Biophysics, University of Washington, Seattle</wicri:cityArea>
</affiliation>
</author>
<author>
<name sortKey="Shadlen, Michael N" sort="Shadlen, Michael N" uniqKey="Shadlen M" first="Michael N." last="Shadlen">Michael N. Shadlen</name>
<affiliation wicri:level="2">
<nlm:aff id="A2"> HHMI, NPRC, Department of Physiology and Biophysics, University of Washington, Seattle, Washington</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<placeName>
<region type="state">Washington (État)</region>
</placeName>
<wicri:cityArea> HHMI, NPRC, Department of Physiology and Biophysics, University of Washington, Seattle</wicri:cityArea>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Nature neuroscience</title>
<idno type="ISSN">1097-6256</idno>
<idno type="eISSN">1546-1726</idno>
<imprint>
<date when="2010">2010</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p id="P1">We use our sense of time to identify temporal relationships between events and to anticipate actions. How well we can exploit temporal contingencies depends on the variability of our measurements of time. We asked humans to reproduce time intervals drawn from different underlying distributions. As expected, production times were more variable for longer intervals. Surprisingly however, production times exhibited a systematic regression towards the mean. Consequently, estimates for a sample interval differed depending on the distribution from which it was drawn. A performance-optimizing Bayesian model that takes the underlying distribution of samples into account provided an accurate description of subjects’ performance, variability and bias. This finding suggests that the central nervous system incorporates knowledge about temporal uncertainty to adapt internal timing mechanisms to the temporal statistics of the environment.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Mauk, Md" uniqKey="Mauk M">MD Mauk</name>
</author>
<author>
<name sortKey="Buonomano, Dv" uniqKey="Buonomano D">DV Buonomano</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gallistel, Cr" uniqKey="Gallistel C">CR Gallistel</name>
</author>
<author>
<name sortKey="Gibbon, J" uniqKey="Gibbon J">J Gibbon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rakitin, Bc" uniqKey="Rakitin B">BC Rakitin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brannon, Em" uniqKey="Brannon E">EM Brannon</name>
</author>
<author>
<name sortKey="Libertus, Me" uniqKey="Libertus M">ME Libertus</name>
</author>
<author>
<name sortKey="Meck, Wh" uniqKey="Meck W">WH Meck</name>
</author>
<author>
<name sortKey="Woldorff, Mg" uniqKey="Woldorff M">MG Woldorff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gibbon, J" uniqKey="Gibbon J">J Gibbon</name>
</author>
<author>
<name sortKey="Church, Rm" uniqKey="Church R">RM Church</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Reutimann, J" uniqKey="Reutimann J">J Reutimann</name>
</author>
<author>
<name sortKey="Yakovlev, V" uniqKey="Yakovlev V">V Yakovlev</name>
</author>
<author>
<name sortKey="Fusi, S" uniqKey="Fusi S">S Fusi</name>
</author>
<author>
<name sortKey="Senn, W" uniqKey="Senn W">W Senn</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Matell, Ms" uniqKey="Matell M">MS Matell</name>
</author>
<author>
<name sortKey="Meck, Wh" uniqKey="Meck W">WH Meck</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ahrens, M" uniqKey="Ahrens M">M Ahrens</name>
</author>
<author>
<name sortKey="Sahani, M" uniqKey="Sahani M">M Sahani</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Casella, G" uniqKey="Casella G">G Casella</name>
</author>
<author>
<name sortKey="Berger, Rl" uniqKey="Berger R">RL Berger</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lewis, Pa" uniqKey="Lewis P">PA Lewis</name>
</author>
<author>
<name sortKey="Miall, Rc" uniqKey="Miall R">RC Miall</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Treisman, M" uniqKey="Treisman M">M Treisman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hollingworth, Hl" uniqKey="Hollingworth H">HL Hollingworth</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Parducci, A" uniqKey="Parducci A">A Parducci</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Helson, H" uniqKey="Helson H">H Helson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kersten, D" uniqKey="Kersten D">D Kersten</name>
</author>
<author>
<name sortKey="Mamassian, P" uniqKey="Mamassian P">P Mamassian</name>
</author>
<author>
<name sortKey="Yuille, A" uniqKey="Yuille A">A Yuille</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kording, Kp" uniqKey="Kording K">KP Kording</name>
</author>
<author>
<name sortKey="Wolpert, Dm" uniqKey="Wolpert D">DM Wolpert</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Knill, Dc" uniqKey="Knill D">DC Knill</name>
</author>
<author>
<name sortKey="Richards, W" uniqKey="Richards W">W Richards</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mamassian, P" uniqKey="Mamassian P">P Mamassian</name>
</author>
<author>
<name sortKey="Landy, Ms" uniqKey="Landy M">MS Landy</name>
</author>
<author>
<name sortKey="Maloney, Lt" uniqKey="Maloney L">LT Maloney</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Miyazaki, M" uniqKey="Miyazaki M">M Miyazaki</name>
</author>
<author>
<name sortKey="Nozaki, D" uniqKey="Nozaki D">D Nozaki</name>
</author>
<author>
<name sortKey="Nakajima, Y" uniqKey="Nakajima Y">Y Nakajima</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hudson, Te" uniqKey="Hudson T">TE Hudson</name>
</author>
<author>
<name sortKey="Maloney, Lt" uniqKey="Maloney L">LT Maloney</name>
</author>
<author>
<name sortKey="Landy, Ms" uniqKey="Landy M">MS Landy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bernardo, Jm" uniqKey="Bernardo J">JM Bernardo</name>
</author>
<author>
<name sortKey="Smith, Afm" uniqKey="Smith A">AFM Smith</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stocker, Aa" uniqKey="Stocker A">AA Stocker</name>
</author>
<author>
<name sortKey="Simoncelli, Ep" uniqKey="Simoncelli E">EP Simoncelli</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Trommershauser, J" uniqKey="Trommershauser J">J Trommershauser</name>
</author>
<author>
<name sortKey="Maloney, Lt" uniqKey="Maloney L">LT Maloney</name>
</author>
<author>
<name sortKey="Landy, Ms" uniqKey="Landy M">MS Landy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mamassian, P" uniqKey="Mamassian P">P Mamassian</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ernst, Mo" uniqKey="Ernst M">MO Ernst</name>
</author>
<author>
<name sortKey="Banks, Ms" uniqKey="Banks M">MS Banks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jacobs, Ra" uniqKey="Jacobs R">RA Jacobs</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tassinari, H" uniqKey="Tassinari H">H Tassinari</name>
</author>
<author>
<name sortKey="Hudson, Te" uniqKey="Hudson T">TE Hudson</name>
</author>
<author>
<name sortKey="Landy, Ms" uniqKey="Landy M">MS Landy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Graf, Ew" uniqKey="Graf E">EW Graf</name>
</author>
<author>
<name sortKey="Warren, Pa" uniqKey="Warren P">PA Warren</name>
</author>
<author>
<name sortKey="Maloney, Lt" uniqKey="Maloney L">LT Maloney</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kording, Kp" uniqKey="Kording K">KP Kording</name>
</author>
<author>
<name sortKey="Wolpert, Dm" uniqKey="Wolpert D">DM Wolpert</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Raphan, M" uniqKey="Raphan M">M Raphan</name>
</author>
<author>
<name sortKey="Simoncelli, Ep" uniqKey="Simoncelli E">EP Simoncelli</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Toth, Lj" uniqKey="Toth L">LJ Toth</name>
</author>
<author>
<name sortKey="Assad, Ja" uniqKey="Assad J">JA Assad</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lauwereyns, J" uniqKey="Lauwereyns J">J Lauwereyns</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Shadlen, Mn" uniqKey="Shadlen M">MN Shadlen</name>
</author>
<author>
<name sortKey="Newsome, Wt" uniqKey="Newsome W">WT Newsome</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gold, Ji" uniqKey="Gold J">JI Gold</name>
</author>
<author>
<name sortKey="Law, Ct" uniqKey="Law C">CT Law</name>
</author>
<author>
<name sortKey="Connolly, P" uniqKey="Connolly P">P Connolly</name>
</author>
<author>
<name sortKey="Bennur, S" uniqKey="Bennur S">S Bennur</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Janssen, P" uniqKey="Janssen P">P Janssen</name>
</author>
<author>
<name sortKey="Shadlen, Mn" uniqKey="Shadlen M">MN Shadlen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Maimon, G" uniqKey="Maimon G">G Maimon</name>
</author>
<author>
<name sortKey="Assad, Ja" uniqKey="Assad J">JA Assad</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schultz, W" uniqKey="Schultz W">W Schultz</name>
</author>
<author>
<name sortKey="Romo, R" uniqKey="Romo R">R Romo</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Meck, Wh" uniqKey="Meck W">WH Meck</name>
</author>
<author>
<name sortKey="Penney, Tb" uniqKey="Penney T">TB Penney</name>
</author>
<author>
<name sortKey="Pouthas, V" uniqKey="Pouthas V">V Pouthas</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cui, X" uniqKey="Cui X">X Cui</name>
</author>
<author>
<name sortKey="Stetson, C" uniqKey="Stetson C">C Stetson</name>
</author>
<author>
<name sortKey="Montague, Pr" uniqKey="Montague P">PR Montague</name>
</author>
<author>
<name sortKey="Eagleman, Dm" uniqKey="Eagleman D">DM Eagleman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nobre, A" uniqKey="Nobre A">A Nobre</name>
</author>
<author>
<name sortKey="Correa, A" uniqKey="Correa A">A Correa</name>
</author>
<author>
<name sortKey="Coull, J" uniqKey="Coull J">J Coull</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rao, Sm" uniqKey="Rao S">SM Rao</name>
</author>
<author>
<name sortKey="Mayer, Ar" uniqKey="Mayer A">AR Mayer</name>
</author>
<author>
<name sortKey="Harrington, Dl" uniqKey="Harrington D">DL Harrington</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Allan, Lg" uniqKey="Allan L">LG Allan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Creelman, Cd" uniqKey="Creelman C">CD Creelman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lee, Ih" uniqKey="Lee I">IH Lee</name>
</author>
<author>
<name sortKey="Assad, Ja" uniqKey="Assad J">JA Assad</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mita, A" uniqKey="Mita A">A Mita</name>
</author>
<author>
<name sortKey="Mushiake, H" uniqKey="Mushiake H">H Mushiake</name>
</author>
<author>
<name sortKey="Shima, K" uniqKey="Shima K">K Shima</name>
</author>
<author>
<name sortKey="Matsuzaka, Y" uniqKey="Matsuzaka Y">Y Matsuzaka</name>
</author>
<author>
<name sortKey="Tanji, J" uniqKey="Tanji J">J Tanji</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Okano, K" uniqKey="Okano K">K Okano</name>
</author>
<author>
<name sortKey="Tanji, J" uniqKey="Tanji J">J Tanji</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tanaka, M" uniqKey="Tanaka M">M Tanaka</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tanaka, M" uniqKey="Tanaka M">M Tanaka</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Buonomano, Dv" uniqKey="Buonomano D">DV Buonomano</name>
</author>
<author>
<name sortKey="Maass, W" uniqKey="Maass W">W Maass</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<pmc-dir>properties manuscript</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-journal-id">9809671</journal-id>
<journal-id journal-id-type="pubmed-jr-id">21092</journal-id>
<journal-id journal-id-type="nlm-ta">Nat Neurosci</journal-id>
<journal-id journal-id-type="iso-abbrev">Nat. Neurosci.</journal-id>
<journal-title-group>
<journal-title>Nature neuroscience</journal-title>
</journal-title-group>
<issn pub-type="ppub">1097-6256</issn>
<issn pub-type="epub">1546-1726</issn>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">20581842</article-id>
<article-id pub-id-type="pmc">2916084</article-id>
<article-id pub-id-type="doi">10.1038/nn.2590</article-id>
<article-id pub-id-type="manuscript">NIHMS209209</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Temporal context calibrates interval timing</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Jazayeri</surname>
<given-names>Mehrdad</given-names>
</name>
<xref ref-type="aff" rid="A1">1</xref>
<xref ref-type="aff" rid="A2">2</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Shadlen</surname>
<given-names>Michael N.</given-names>
</name>
<xref ref-type="aff" rid="A2">2</xref>
</contrib>
</contrib-group>
<aff id="A1">
<label>1</label>
Helen Hay Whitney Foundation</aff>
<aff id="A2">
<label>2</label>
HHMI, NPRC, Department of Physiology and Biophysics, University of Washington, Seattle, Washington</aff>
<author-notes>
<corresp id="FN1">Correspondence: Mehrdad Jazayeri, Department of Physiology and Biophysics, University of Washington, Box 357290, Seattle, WA 98195, Telephone: 206.616.3308, Fax: 206.543.1196,
<email>mjaz@u.washington.edu</email>
</corresp>
</author-notes>
<pub-date pub-type="nihms-submitted">
<day>2</day>
<month>6</month>
<year>2010</year>
</pub-date>
<pub-date pub-type="epub">
<day>27</day>
<month>6</month>
<year>2010</year>
</pub-date>
<pub-date pub-type="ppub">
<month>8</month>
<year>2010</year>
</pub-date>
<pub-date pub-type="pmc-release">
<day>01</day>
<month>2</month>
<year>2011</year>
</pub-date>
<volume>13</volume>
<issue>8</issue>
<fpage>1020</fpage>
<lpage>1026</lpage>
<pmc-comment>elocation-id from pubmed: 10.1038/nn.2590</pmc-comment>
<permissions>
<license>
<license-p>Users may view, print, copy, download and text and data-mine the content in such documents, for the purposes of academic research, subject always to the full Conditions of use:
<ext-link ext-link-type="uri" xlink:href="http://www.nature.com/authors/editorial_policies/license.html#terms">http://www.nature.com/authors/editorial_policies/license.html#terms</ext-link>
</license-p>
</license>
</permissions>
<abstract>
<p id="P1">We use our sense of time to identify temporal relationships between events and to anticipate actions. How well we can exploit temporal contingencies depends on the variability of our measurements of time. We asked humans to reproduce time intervals drawn from different underlying distributions. As expected, production times were more variable for longer intervals. Surprisingly however, production times exhibited a systematic regression towards the mean. Consequently, estimates for a sample interval differed depending on the distribution from which it was drawn. A performance-optimizing Bayesian model that takes the underlying distribution of samples into account provided an accurate description of subjects’ performance, variability and bias. This finding suggests that the central nervous system incorporates knowledge about temporal uncertainty to adapt internal timing mechanisms to the temporal statistics of the environment.</p>
</abstract>
</article-meta>
</front>
<body>
<p id="P2">From simple habitual responses to complex sensorimotor skills, our behavioral repertoire exhibits a remarkable sensitivity to timing information. To internalize temporal contingencies, and to put them to use in the control of conditioned and deliberative behavior, our nervous systems must be equipped with central mechanisms to process time.</p>
<p id="P3">Among the elementary aspect of temporal processing, and one that has been the focus of many psychophysical studies of time perception, is the ability to measure the duration between events; i.e., interval timing
<xref rid="R1" ref-type="bibr">1</xref>
. A common feature associated with repeated estimation (or production) of a sample interval is that the standard deviation of the estimated (or produced) intervals increases linearly with their mean, a property that is termed scalar variability
<xref rid="R2" ref-type="bibr">2</xref>
<xref rid="R4" ref-type="bibr">4</xref>
. While previous work has demonstrated how suitable forms of internal noise might lead to scalar variability
<xref rid="R5" ref-type="bibr">5</xref>
<xref rid="R8" ref-type="bibr">8</xref>
, we do not know whether and how the nervous system can make use of this lawful relationship to improve timing behavior.</p>
<p id="P4">Scalar variability implies that measurements of relatively longer intervals are less reliable and thus more uncertain. The question we address is whether subjects have knowledge about this uncertainty, and how they might exploit it to improve estimation and production of time intervals. This question is particularly important when one has prior expectations of how long an event might last. For instance, if one measures an interval to be ~1.5 s but, based on experience, expects it to be closer to 1.2 s, then s/he may conclude that the true interval was probably somewhere between 1.2 and 1.5 s. More generally, knowledge about the distribution of time intervals one may encounter – which we refer to as temporal context – could help reduce uncertainty. The extent to which temporal context should inform temporal judgments depends on how unreliable measurements of time are. While a metronome need not rely on temporal context to stay on the beat, a piano player may well use the tempo of a musical piece to coordinate finger movements in time. Thus, to make use of the oft-present temporal context, the brain must have knowledge about the reliability of its own measurements of time.</p>
<p id="P5">The question of how knowledge about temporal context may improve measurements of elapsed time can be posed rigorously in the framework of statistical inference. In this framework, to estimate a sample interval, the observer may take advantage of two sources of information: (1) the likelihood function, which quantifies the statistics of sample intervals consistent with a measurement, and (2) the prior probability distribution function of the sample intervals the observer may encounter. One possibility is for the observer to ignore the prior distribution, and to choose the most likely value directly from the likelihood function, a strategy known as the maximum-likelihood estimation (ML)
<xref rid="R9" ref-type="bibr">9</xref>
. Alternatively, a Bayesian observer would combine the likelihood function and the prior, and use some statistic to map the resulting posterior probability distribution onto an estimate. Common mapping rules are the maximum a posteriori (MAP) and Bayes Least Squares (BLS), which correspond to the mode and the mean of the posterior respectively.</p>
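
To make these mapping rules concrete, here is a minimal numerical sketch (ours, not the paper's; the measurement value, noise width and prior range are illustrative). It builds a posterior on a grid from a generic Gaussian likelihood and a uniform prior, then reads off the ML, MAP and BLS estimates:

```python
import numpy as np

# Candidate values of the sample interval, on a grid (seconds).
ts = np.linspace(0.2, 2.0, 2001)

# A generic Gaussian likelihood around one measurement (values illustrative).
t_m, sd = 1.0, 0.1
likelihood = np.exp(-0.5 * ((ts - t_m) / sd) ** 2)

# A uniform prior over an assumed range of sample intervals.
prior = ((ts >= 0.5) & (ts <= 0.9)).astype(float)

# Posterior is proportional to likelihood times prior.
posterior = likelihood * prior
posterior /= posterior.sum()

t_ML = ts[np.argmax(likelihood)]   # prior ignored: peak of the likelihood
t_MAP = ts[np.argmax(posterior)]   # mode of the posterior
t_BLS = np.sum(ts * posterior)     # mean of the posterior
print(t_ML, t_MAP, t_BLS)
```

Here the measurement falls above the prior's range, so MAP clips the estimate to the longest interval the prior supports, while BLS lands below that, pulled toward the prior mean.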
<p id="P6">To understand how humans evaluate their measurements of elapsed time in the presence of a temporal context, we asked subjects to estimate and subsequently reproduce time intervals in the sub-second to seconds range that were drawn from three different prior distributions. Subjects’ production times showed a clear dependence on both the sample intervals and the prior distribution from which they were drawn. We fitted subjects’ responses to various observer models such as ML, MAP and BLS and found that a Bayesian observer associated with the BLS could account for the bias, variability and overall performance of every subject in all three prior conditions. This suggests that subjects have implicit knowledge of the reliability of their measurements of time, and can use this information to adjust their timing behavior to the temporal regularities of the environment. Furthermore, our observer model shows that this sophisticated Bayesian behavior can be accounted for by a nonlinear transformation that simply and directly maps noisy measurement of time to optimal estimates.</p>
<sec sec-type="results" id="S1">
<title>Results</title>
<sec id="S2">
<title>The Ready-Set-Go paradigm</title>
<p id="P7">Subjects had to measure, and immediately afterwards reproduce different sample intervals. A sample interval,
<italic>t
<sub>s</sub>
</italic>
, was demarcated by two brief flashes, a “Ready” cue followed by a “Set” cue. The corresponding production time,
<italic>t
<sub>p</sub>
</italic>
, was measured from the time of the “Set” cue to when subjects proactively responded via a manual key press (
<xref rid="F1" ref-type="fig">Fig. 1a</xref>
). Subjects received feedback for sufficiently accurate production times (
<xref rid="F1" ref-type="fig">Fig. 1c</xref>
).</p>
<p id="P8">In each session, sample intervals were drawn from a discrete uniform prior distribution. For each subject, three partially overlapping prior distributions (“Short”, “Intermediate” and “Long”) were tested (
<xref rid="F1" ref-type="fig">Fig. 1b</xref>
). The main data for each prior condition were collected after an initial learning stage (typically 500 trials) to ensure subjects had time to adapt their responses to the range of sample intervals presented.</p>
<p id="P9">Subjects’ timing behavior exhibited three characteristic features (
<xref rid="F2" ref-type="fig">Fig. 2</xref>
). First, production times monotonically increased with sample intervals. Second, for each prior condition, production times were systematically biased towards the mean of the prior as evident from their tendency to deviate from sample intervals (diagonal dashed line) and gravitate towards the mean interval (horizontal dashed line)
<xref rid="R10" ref-type="bibr">10</xref>
<xref rid="R12" ref-type="bibr">12</xref>
. Consequently, mean production times associated with a particular
<italic>t
<sub>s</sub>
</italic>
were differentially biased for the three prior conditions. Third, production time biases were more pronounced in the “Intermediate” and, more so, in the “Long” prior conditions, indicating that longer sample intervals were associated with progressively stronger prior-dependent biases. Similarly, within each prior condition, the magnitude of the bias was larger for the longest sample interval than for the shortest sample interval (
<xref rid="SD1" ref-type="supplementary-material">Supplementary Fig. S1</xref>
).</p>
<p id="P10">Scalar variability implies that the measurement of longer sample intervals engender more uncertainty. According to Bayesian theory, for these more uncertain measurements, subjects’ performance would improve if they rely more on their prior expectation (
<xref rid="SD1" ref-type="supplementary-material">Supplementary Fig. S2</xref>
). This is consistent with the observed increases in prior-dependent biases associated with longer sample intervals and suggests that subjects might have adopted a Bayesian strategy to reproduce time intervals. We thus developed probabilistic observer models to evaluate these observations quantitatively and to understand the computations from which they might arise.</p>
</sec>
<sec id="S3">
<title>The observer model</title>
<p id="P11">The observer model is presented with a sample interval,
<italic>t
<sub>s</sub>
</italic>
. Due to measurement noise, the measured interval,
<italic>t
<sub>m</sub>
</italic>
, may differ from
<italic>t
<sub>s</sub>
</italic>
. The observer must use
<italic>t
<sub>m</sub>
</italic>
to compute an estimate,
<italic>t
<sub>e</sub>
</italic>
, for the sample interval,
<italic>t
<sub>s</sub>
</italic>
. To do so, the observer may use an estimator that relies on probabilistic sources of information such as the likelihood function and the prior distribution. Importantly however, the estimator itself is fully characterized by a deterministic function,
<italic>f</italic>
, that maps a measurement,
<italic>t
<sub>m</sub>
</italic>
, to an estimate,
<italic>t
<sub>e</sub>
</italic>
; i.e.
<italic>t
<sub>e</sub>
</italic>
=
<italic>f</italic>
(
<italic>t
<sub>m</sub>
</italic>
). Finally, additional noise during the production phase may cause the production time,
<italic>t
<sub>p</sub>
</italic>
, to differ from
<italic>t
<sub>e</sub>
</italic>
(
<xref rid="F3" ref-type="fig">Fig. 3a</xref>
).</p>
<p id="P12">To formulate the model mathematically, we need to specify the relationship between the sample interval,
<italic>t
<sub>s</sub>
</italic>
, the measured interval,
<italic>t
<sub>m</sub>
</italic>
, the estimate,
<italic>t
<sub>e</sub>
</italic>
, and the production time,
<italic>t
<sub>p</sub>
</italic>
. The relationship between
<italic>t
<sub>m</sub>
</italic>
and
<italic>t
<sub>s</sub>
</italic>
can be quantified by the conditional probability distribution,
<italic>p</italic>
(
<italic>t
<sub>m</sub>
</italic>
|
<italic>t
<sub>s</sub>
</italic>
), the probability of different measurements for a specific sample interval. This distribution also specifies the likelihood function,
<italic>λ
<sub>t
<sub>m</sub>
</sub>
</italic>
(
<italic>t
<sub>s</sub>
</italic>
), a statistical description of the different sample intervals associated with a fixed measurement. We modelled
<italic>p</italic>
(
<italic>t
<sub>m</sub>
</italic>
|
<italic>t
<sub>s</sub>
</italic>
) as a Gaussian distribution centered at
<italic>t
<sub>s</sub>
</italic>
, and, motivated by the scalar variability of timing, assumed that the standard deviation of
<italic>p</italic>
(
<italic>t
<sub>m</sub>
</italic>
|
<italic>t
<sub>s</sub>
</italic>
) grows linearly with its mean (
<xref rid="F3" ref-type="fig">Fig. 3a</xref>
). The distribution of measurement noise was thus fully characterized by the ratio of the standard deviation to the mean of
<italic>p</italic>
(
<italic>t
<sub>m</sub>
</italic>
|
<italic>t
<sub>s</sub>
</italic>
), which we will refer to as the Weber fraction associated with the measurement,
<italic>w
<sub>m</sub>
</italic>
. With the same arguments in mind, we assumed that the distribution of
<italic>t
<sub>p</sub>
</italic>
conditioned on
<italic>t
<sub>e</sub>
</italic>
,
<italic>p</italic>
(
<italic>t
<sub>p</sub>
</italic>
|
<italic>t
<sub>e</sub>
</italic>
) is also Gaussian, is centered at
<italic>t
<sub>e</sub>
</italic>
, and is associated with a constant Weber fraction,
<italic>w
<sub>p</sub>
</italic>
.</p>
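
As a sketch of the generative structure just described (our illustration; the Weber fractions are placeholders, and an identity estimator stands in for the mappings defined below), one trial passes through measure, map, produce:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_trial(t_s, w_m=0.1, w_p=0.08, f=lambda t_m: t_m):
    """One pass through the three stages of the observer model.

    Measurement: t_m ~ N(t_s, (w_m * t_s)^2), i.e. scalar variability.
    Estimation:  t_e = f(t_m), a deterministic mapping.
    Production:  t_p ~ N(t_e, (w_p * t_e)^2), scalar noise again.
    """
    t_m = rng.normal(t_s, w_m * t_s)   # noisy measurement of the interval
    t_e = f(t_m)                       # deterministic estimate
    t_p = rng.normal(t_e, w_p * t_e)   # noisy production
    return t_m, t_e, t_p

print(simulate_trial(0.8))
```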
<p id="P13">Finally, the relationship between
<italic>t
<sub>m</sub>
</italic>
and
<italic>t
<sub>e</sub>
</italic>
was modelled by a deterministic mapping function,
<italic>f</italic>
, which we will refer to as the estimator. Different estimators are associated with different mapping rules. Among them, we focused on the ML, MAP and BLS because of their well-known properties, and because they were most germane to the development of our arguments with respect to the psychophysical data. We denote the corresponding estimators by
<italic>f
<sub>ML</sub>
</italic>
,
<italic>f
<sub>MAP</sub>
</italic>
, and
<italic>f
<sub>BLS</sub>
</italic>
respectively (
<xref rid="F3" ref-type="fig">Fig. 3b–d</xref>
).</p>
<p id="P14">The
<italic>f
<sub>ML</sub>
</italic>
estimator assigns
<italic>t
<sub>e</sub>
</italic>
to the peak of the likelihood function (
<xref rid="F4" ref-type="fig">Fig. 4a</xref>
). In our model, with a Gaussian-distributed measurement noise and a constant Weber fraction,
<italic>t
<sub>e</sub>
</italic>
would be proportional to
<italic>t
<sub>m</sub>
</italic>
(see Methods). The
<italic>f
<sub>MAP</sub>
</italic>
and
<italic>f
<sub>BLS</sub>
</italic>
estimators, on the other hand, rely on the posterior distribution, which is proportional to the product of the prior distribution and the likelihood function. Because the prior distribution we used was uniform, the posterior was a scaled replica of the likelihood function within the domain of the prior and zero elsewhere. The MAP rule extracts the mode of the posterior, which would correspond to the peak of the likelihood function except when the peak falls below/above the prior distribution’s shortest/longest sample interval. Thus,
<italic>f
<sub>MAP</sub>
</italic>
is the same as
<italic>f
<sub>ML</sub>
</italic>
with the difference that its range is limited to the domain of the prior (
<xref rid="F4" ref-type="fig">Fig. 4b</xref>
). For BLS, which is associated with the mean of the posterior, the estimator,
<italic>f
<sub>BLS</sub>
</italic>
, is a sigmoid function of
<italic>t
<sub>m</sub>
</italic>
(
<xref rid="F4" ref-type="fig">Fig. 4c</xref>
).</p>
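
The three mapping functions can be traced numerically under the same assumptions (Gaussian measurement noise with a constant Weber fraction and a uniform prior; the grid, Weber fraction and prior range below are illustrative):

```python
import numpy as np

ts = np.linspace(0.2, 2.0, 2001)   # hypothesized sample intervals (s)
w_m = 0.1                          # measurement Weber fraction
lo, hi = 0.494, 0.847              # domain of a uniform prior
support = (ts >= lo) & (ts <= hi)

def estimates(t_m):
    """Return (f_ML, f_MAP, f_BLS) for one measurement t_m."""
    sd = w_m * ts
    lik = np.exp(-0.5 * ((t_m - ts) / sd) ** 2) / sd
    post = lik * support                      # uniform prior truncates
    f_ml = ts[np.argmax(lik)]                 # peak of the likelihood
    f_map = ts[np.argmax(post)]               # f_ML clipped to the prior
    f_bls = np.sum(ts * post) / post.sum()    # mean of the posterior
    return f_ml, f_map, f_bls

for t_m in (0.45, 0.60, 0.75, 0.90):          # sweeping t_m traces the mappings
    print(t_m, estimates(t_m))
```

Sweeping t_m shows f_ML as near-proportional to the measurement, f_MAP as the same function clipped to the prior's domain, and f_BLS as a smooth sigmoid that stays strictly inside that domain.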
<p id="P15">Note that since the specification of these estimators does not invoke any additional free parameters, the observer model associated with each estimator was fully characterized by two free parameters only:
<italic>w
<sub>m</sub>
</italic>
and
<italic>w
<sub>p</sub>
</italic>
.</p>
</sec>
<sec id="S4" sec-type="methods">
<title>Comparing experimental data with the observer model</title>
<p id="P16">Our psychophysical data consisted of pairs of sample intervals and production times (
<italic>t
<sub>s</sub>
</italic>
and
<italic>t
<sub>p</sub>
</italic>
), but the observer model we created to relate
<italic>t
<sub>s</sub>
</italic>
to
<italic>t
<sub>p</sub>
</italic>
relies on two intervening and unobservable (hidden) variables,
<italic>t
<sub>m</sub>
</italic>
and
<italic>t
<sub>e</sub>
</italic>
. We thus expressed these two hidden variables in terms of their probabilistic relationship to the observable variables
<italic>t
<sub>s</sub>
</italic>
and
<italic>t
<sub>p</sub>
</italic>
(see Methods), and derived a direct relationship between production times and sample intervals. This formulation was then used to examine which of the three observer models described human subjects’ responses best.</p>
<p id="P17">To compare human subjects’ responses to those predicted by the observer models, we quantified production times with two statistics, their variance (VAR), and their bias (BIAS) (
<xref rid="F5" ref-type="fig">Fig. 5a</xref>
), which together partition the overall root mean squared error (RMSE) as follows:
<disp-formula id="FD1">
<mml:math id="M1" display="block" overflow="scroll">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mtext>RMSE</mml:mtext>
</mml:mrow>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mo>=</mml:mo>
<mml:mtext>VAR</mml:mtext>
<mml:mo>+</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mtext>BIAS</mml:mtext>
</mml:mrow>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:math>
</disp-formula>
</p>
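
One plausible reading of this partition in code (a sketch under the assumption that bias and variance are computed per sample interval and then averaged with equal trial counts, in which case the identity holds exactly):

```python
import numpy as np

def bias_var_rmse(t_s, t_p):
    """BIAS, VAR and RMSE of production times, partitioned per interval.

    For each distinct sample interval, bias is the mean production error
    t_p - t_s and variance is the spread of t_p around its own mean.
    BIAS^2 and VAR are their averages across intervals, so that
    RMSE^2 = VAR + BIAS^2.
    """
    t_s, t_p = np.asarray(t_s, float), np.asarray(t_p, float)
    sq_bias, variances = [], []
    for s in np.unique(t_s):
        err = t_p[t_s == s] - s
        sq_bias.append(err.mean() ** 2)
        variances.append(err.var())
    BIAS2, VAR = np.mean(sq_bias), np.mean(variances)
    return np.sqrt(BIAS2), VAR, np.sqrt(VAR + BIAS2)
```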
<p id="P18">This relationship, which highlights the familiar trade off between the VAR and BIAS, when written as the sum of squares, becomes the standard equation of a circle:
<disp-formula id="FD2">
<mml:math id="M2" display="block" overflow="scroll">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mtext>RMSE</mml:mtext>
</mml:mrow>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mo>=</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mtext>(VAR</mml:mtext>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mtext>1</mml:mtext>
<mml:mo>/</mml:mo>
<mml:mtext>2</mml:mtext>
</mml:mrow>
</mml:mrow>
</mml:msup>
<mml:mtext>)</mml:mtext>
</mml:mrow>
</mml:mrow>
<mml:mtext>2</mml:mtext>
</mml:msup>
<mml:mo>+</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mtext>BIAS</mml:mtext>
</mml:mrow>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:math>
</disp-formula>
</p>
<p id="P19">This geometric description indicates that in a plot of VAR
<sup>1/2</sup>
versus BIAS, a continuum of values along a quarter circle would lead to the same RMSE (
<xref rid="F5" ref-type="fig">Fig. 5b</xref>
). It also provides a convenient graphical description of how a larger RMSE, represented by a quarter circle with a larger radius, may arise from increases in VAR
<sup>1/2</sup>
, BIAS or both. We used this plot to summarize the statistics of production times, and also to evaluate the degree to which different observer models could capture those statistics.</p>
<p id="P20">We fitted the parameters of the ML, MAP and BLS models (
<italic>w
<sub>m</sub>
</italic>
and
<italic>w
<sub>p</sub>
</italic>
) for each subject, based on the production times in the three prior conditions (
<xref rid="F6" ref-type="fig">Fig. 6a–c</xref>
inset). We then simulated each subject’s behavior using the fitted observer models, and compared each model’s predictions to the actual responses using the BIAS, VAR
<sup>1/2</sup>
, and RMSE statistics (
<xref rid="F5" ref-type="fig">Fig. 5c,e,g</xref>
).</p>
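
A sketch of how such a fit can be set up (ours; the grids, bounds and optimizer choice are assumptions, and only the BLS variant is shown). The hidden measurement is marginalized numerically, so the likelihood of each trial is p(t_p | t_s) = ∫ p(t_p | f(t_m)) p(t_m | t_s) dt_m:

```python
import numpy as np
from scipy.optimize import minimize

ts_grid = np.linspace(0.2, 2.0, 400)   # hypothesized intervals (for BLS)
tm_grid = np.linspace(0.2, 2.0, 400)   # grid over the hidden measurement

def f_bls(w_m, lo, hi):
    """BLS mapping t_m -> t_e for a uniform prior on [lo, hi]."""
    sd = w_m * ts_grid
    lik = np.exp(-0.5 * ((tm_grid[:, None] - ts_grid) / sd) ** 2) / sd
    lik *= (ts_grid >= lo) & (ts_grid <= hi)
    return (lik * ts_grid).sum(1) / lik.sum(1)

def nll(params, t_s, t_p, lo, hi):
    """Negative log-likelihood of paired (t_s, t_p) data under the model.
    Constant 1/sqrt(2*pi) factors are dropped; they only shift the value."""
    w_m, w_p = params
    if min(w_m, w_p) <= 0:
        return np.inf
    t_e = f_bls(w_m, lo, hi)           # note: the mapping depends on w_m
    dt = tm_grid[1] - tm_grid[0]
    total = 0.0
    for s, p in zip(t_s, t_p):
        pm = np.exp(-0.5 * ((tm_grid - s) / (w_m * s)) ** 2) / (w_m * s)
        pp = np.exp(-0.5 * ((p - t_e) / (w_p * t_e)) ** 2) / (w_p * t_e)
        total -= np.log((pm * pp).sum() * dt + 1e-300)
    return total

# res = minimize(nll, x0=[0.1, 0.1], args=(t_s, t_p, 0.494, 0.847),
#                method="Nelder-Mead")   # fitted (w_m, w_p) in res.x
```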
<p id="P21">The ML model did not exhibit the prior-dependent biases present in production times (
<xref rid="F5" ref-type="fig">Fig. 5c,d</xref>
), because the ML estimator does not take the prior into account. This failure cannot be attributed to an unsuccessful fitting procedure or a misrepresentation of the likelihood function. The fact that subjects’ production times depended on the prior condition would render any estimator that neglects the prior (e.g. ML) inadequate, the parametric form of the likelihood function notwithstanding. The MAP model was slightly better than the ML model at capturing the trade-off between BIAS and VAR (
<xref rid="F5" ref-type="fig">Fig. 5e,f</xref>
), but it also underestimated the bias of the production times and overestimated their variance for all subjects (
<xref rid="F6" ref-type="fig">Fig. 6b</xref>
). The BLS model, on the other hand, mimicked the bias and variance of the production times quite well (
<xref rid="F5" ref-type="fig">Fig. 5g</xref>
). It captured the overall RMSE as well as the trade-off between the VAR and the BIAS (
<xref rid="F5" ref-type="fig">Fig. 5h</xref>
), and was statistically superior to both ML and MAP estimators across our subjects (
<xref rid="F6" ref-type="fig">Fig. 6c</xref>
).</p>
<p id="P22">We evaluated several variants of the BLS model by incorporating different assumptions concerning the measurement and production noise. In our main model (
<xref rid="F4" ref-type="fig">Fig. 4c</xref>
), we fit Weber fractions for both sources of noise (
<italic>w
<sub>m</sub>
</italic>
and
<italic>w
<sub>p</sub>
</italic>
), consistent with the observation that, for all subjects, the standard deviation of the production times was roughly proportional to the mean (
<xref rid="SD1" ref-type="supplementary-material">Supplementary Fig. S4</xref>
). We also considered the possibility that the standard deviation of either the measurement noise or the production noise scales with the base interval, whereas the other noise source has constant standard deviation (
<xref rid="SD1" ref-type="supplementary-material">Supplementary Table S2</xref>
). For all subjects, the original BLS model outperformed the model in which the measurement noise had a constant standard deviation, and for 5 out of 6 subjects, it outperformed the alternative in which the production noise had a constant standard deviation (Akaike Information Criterion,
<xref rid="SD1" ref-type="supplementary-material">Supplementary Table S1</xref>
). Moreover, a BLS model in which Weber fractions were assumed identical (
<italic>w
<sub>m</sub>
</italic>
=
<italic>w
<sub>p</sub>
</italic>
) was inferior to the original BLS model (log likelihood ratio test for nested models; p &lt; 0.03 for one subject and p &lt; 1e-7 for the others). The importance of the measurement and production Weber fractions in accounting for the bias and variability of production times was also evident in model simulations (
<xref rid="SD1" ref-type="supplementary-material">Supplementary Fig. S5</xref>
).</p>
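
For reference, the two comparisons invoked here reduce to a few lines (a sketch; k is the number of fitted parameters, and the likelihood-ratio test uses the usual chi-squared approximation with one degree of freedom for the w_m = w_p restriction):

```python
from scipy.stats import chi2

def aic(nll, k=2):
    """Akaike Information Criterion from a fitted negative log-likelihood."""
    return 2 * k + 2 * nll

def lr_test_p(nll_full, nll_restricted, df=1):
    """p-value of a log likelihood ratio test for nested models,
    e.g. the one-parameter w_m = w_p model inside the full model."""
    return chi2.sf(2 * (nll_restricted - nll_full), df)
```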
<p id="P23">Because our observer models were described by two parameters only (
<italic>w
<sub>m</sub>
</italic>
and
<italic>w
<sub>p</sub>
</italic>
), and all models used the same number of parameters, we were reasonably confident that the success of the BLS rule was not due to over-fitting. Nonetheless, we tested for this possibility by fitting the model to data from the “Short” and “Long” prior conditions. The fits captured the statistics of the “Intermediate” prior condition equally well. Finally, we note that the fits for the BLS and MAP rules did not differ systematically (
<xref rid="F6" ref-type="fig">Fig. 6a–c</xref>
, insets). Therefore, the success of the BLS model cannot be attributed to the constraints inherent to our fitting procedure, but rather to its superior description of the estimator subjects adopted in this task.</p>
</sec>
</sec>
<sec sec-type="discussion" id="S5">
<title>Discussion</title>
<p id="P24">Our central finding is that humans can exploit the uncertainty associated with measurements of elapsed time to optimize their timed responses to the statistics of the intervals they encounter. This conclusion is based on the success of a Bayesian observer model that accurately captured the statistics of subjects’ production times in a simple time reproduction task.</p>
<p id="P25">A characteristic feature of subjects’ production times was that they were systematically biased towards the mean of the distribution of sample intervals. This observation is consistent with the ubiquitous central tendency of psychophysical responses in categorical judgment and motor production
<xref rid="R10" ref-type="bibr">10</xref>
<xref rid="R14" ref-type="bibr">14</xref>
. Previous work, such as the adaptation-level theory
<xref rid="R14" ref-type="bibr">14</xref>
, and range-frequency theory
<xref rid="R13" ref-type="bibr">13</xref>
attributed these so-called range effects to subjects’ tendency to evaluate a stimulus based on its relation to the set of stimuli from which it is drawn. These theories, however, did not offer an explanation for what gives rise to such range effects in the first place, and whether or not they are of any value. In contrast, our work suggests that it is subjects’ (implicit) knowledge of their temporal uncertainty that determines the strength of the range effect. Moreover, the Bayesian account of range effects suggests that production time biases help, not harm, subjects’ overall performance (
<xref rid="SD1" ref-type="supplementary-material">Supplementary Fig. S2,6</xref>
). In what follows, we explain the novel aspects of our Bayesian model, and then discuss its implications for the neurobiology of interval timing.</p>
<sec id="S6">
<title>Bayesian interval timing</title>
<p id="P26">Bayesian models have had great success in describing a variety of phenomena in vision and sensorimotor control
<xref rid="R15" ref-type="bibr">15</xref>
<xref rid="R18" ref-type="bibr">18</xref>
, as well as interval timing
<xref rid="R19" ref-type="bibr">19</xref>
,
<xref rid="R20" ref-type="bibr">20</xref>
. Characteristic of these models are prior-dependent biases whose magnitude increases for progressively less reliable measurements
<xref rid="R21" ref-type="bibr">21</xref>
. Motivated by the observation of such biases in our subjects’ behavior, and the success of a previous Bayesian model of coincidence timing
<xref rid="R19" ref-type="bibr">19</xref>
, we set out to formulate a Bayesian model for time reproduction.</p>
<p id="P27">The model consisted of three stages. The first stage emulated a noisy measurement process that quantified the probabilistic relationship between the sample intervals and the corresponding noise-perturbed measurements
<xref rid="R22" ref-type="bibr">22</xref>
. In the second stage, a Bayesian estimator computed an estimate of the sample interval from the measurement. Finally, a noisy production stage converted estimates to production times
<xref rid="R23" ref-type="bibr">23</xref>
,
<xref rid="R24" ref-type="bibr">24</xref>
. In line with previous work on interval timing, the measurement and production noise exhibited scalar variability
<xref rid="R2" ref-type="bibr">2</xref>
,
<xref rid="R3" ref-type="bibr">3</xref>
,
<xref rid="R5" ref-type="bibr">5</xref>
,
<xref rid="R7" ref-type="bibr">7</xref>
.</p>
<p id="P28">The estimator in the second stage of the model defines a deterministic mapping of measurements to estimates, and its functional form is determined precisely from the likelihood function, the prior distribution, and the cost (loss) function. The success of a Bayesian estimator thus depends on how well the likelihood, the prior and the cost function are constrained.</p>
<p id="P29">In psychophysical settings, because sensory measurements are not directly accessible, the likelihood function must be inferred from behavior and suitable assumptions about the distribution of noise. For example, cue combination studies make the reasonable assumption that measurements are perturbed by additive zero-mean Gaussian noise, and infer the width of the likelihood function from psychophysical thresholds
<xref rid="R25" ref-type="bibr">25</xref>
,
<xref rid="R26" ref-type="bibr">26</xref>
. Alternatively, it is possible to model the likelihood function based on the uncertainty associated with external noise in the stimulus
<xref rid="R16" ref-type="bibr">16</xref>
,
<xref rid="R27" ref-type="bibr">27</xref>
,
<xref rid="R28" ref-type="bibr">28</xref>
. We modelled the likelihood based on the assumption that the distribution of measurements associated with a sample interval was Gaussian, was centered on the sample interval and had a standard deviation that scaled with the mean (see Methods).</p>
<p id="P30">To tease apart the roles of the likelihood function and the prior, it is important to be able to vary them independently. To manipulate likelihoods, one common strategy is to control factors that change psychophysical thresholds, such as varying the external noise in the stimulus
<xref rid="R16" ref-type="bibr">16</xref>
,
<xref rid="R27" ref-type="bibr">27</xref>
. In our work, we exploited the scalar variability of timing to manipulate likelihoods. This property, which arises from internal noise only and is known to hold across tasks and species
<xref rid="R2" ref-type="bibr">2</xref>
<xref rid="R4" ref-type="bibr">4</xref>
for the range of times we used
<xref rid="R10" ref-type="bibr">10</xref>
, allowed us to manipulate the likelihood function simply by changing the sample interval. To manipulate the prior independently, we collected data using three discrete Uniform prior distributions. The priors were partially overlapping so that certain sample intervals were tested for two or three different priors, which enabled us to evaluate the effect of the prior independently of the likelihood function.</p>
<p id="P31">To convert the posterior distribution to an estimate, we needed to specify the cost function associated with the estimator. We considered two possibilities: (i) a cost function that penalizes all erroneous estimates similarly, which corresponds the mode of the posterior (MAP), and (ii) a cost function that penalizes errors by the square of their magnitude, which corresponds to the mean of the posterior (BLS). We also considered a non-Bayesian ML estimator that ignores the prior altogether, and chooses the peak of the likelihood function for the estimate. To decide which of these estimators better described subjects’ behavior, it proved essential to consider both the bias and the variability of production times. This technique, which was originally introduced to estimate internal priors from psychophysical data
<xref rid="R22" ref-type="bibr">22</xref>
, provided a powerful constraint in the specification of the estimator’s mapping function.</p>
<p id="P32">We used our three-stage model to estimate the measurement and production Weber fractions, and to decide which of the three mapping rules (ML, MAP, or BLS) better captured production times
<xref rid="R29" ref-type="bibr">29</xref>
. The ML estimator clearly failed to capture the pattern of prior-dependent biases evident in every subject’s production times, as expected from any estimator that neglects the prior. By incorporating the prior, both the MAP and BLS estimators exhibited contextual biases, but the BLS consistently outperformed the MAP model in explaining the trade-off between the trial-to-trial variability and bias across our subjects (
<xref rid="F6" ref-type="fig">Fig. 6b,c</xref>
). It is important to emphasize that, had we ignored the trial-to-trial variability, both BLS and MAP as well as a variety of other Bayesian models could have accounted for the prior-dependent biases in our data.</p>
<p id="P33">We also considered variants of the BLS model in which either the measurement or production noise were modelled as Gaussian with a fixed standard deviation (not scalar). Overall, our original model outperformed these alternatives (
<xref rid="SD1" ref-type="supplementary-material">Supplementary Table S1</xref>
) because the measurement and production Weber fractions played relatively independent roles in controlling the increasing bias and variance of production times with sample interval (
<xref rid="SD1" ref-type="supplementary-material">Supplementary Fig. 5</xref>
). The degrading effect of formulating noise with a fixed standard deviation was more severe for the measurement stage than it was for the production stage (
<xref rid="SD1" ref-type="supplementary-material">Supplementary Table S1</xref>
).</p>
<p id="P34">Despite the success of our modelling exercise, further validation is required to substantiate the role of a BLS mapping in interval timing. Four considerations deserve scrutiny. First, formulation of the likelihood function might take into account factors other than scalar variability that could alter measurement noise. For example, task difficulty or reinforcement schedule (
<xref rid="SD1" ref-type="supplementary-material">Supplementary Fig. S3</xref>
) could motivate subjects to pay more attention to certain intervals, and to measure them more reliably, which could in turn strengthen the role of the likelihood function relative to the prior. Therefore, it is important to consider attention and other related cognitive factors as an integral part of how the nervous system could balance the relative effects of the likelihood function and the prior. Second, knowledge of the prior is itself subject to uncertainty, and the internalized prior distribution may differ from the one imposed experimentally. Third, the feedback subjects receive is likely to interact with the mapping rule they adopt. Our feedback schedule did not encourage the use of BLS rule, but we cannot rule out the possibility that it influenced subjects’ behavior. Fourth, although the operation of a Bayesian estimator is formulated deterministically, its neural implementation is likely subject to biological noise. These different sources of variability must be parsed out before the estimator can be characterized definitively. These considerations, which concern all Bayesian models of psychophysical data, highlight the gap between ‘normative’ descriptions and their biological implementation.</p>
<p id="P35">We referred to our model as a Bayesian observer, and not a Bayesian observer-actor because our formulation was only concerned with making optimal estimates. But since the full task of the observer was to reproduce those estimated intervals, we can formulate a Bayesian observer-actor whose objective is to directly optimize production times, and not the intervening estimates. This model has to take into account both the measurement and production uncertainty and integrate them with prior to compute the probability of every possible pair of sample and production interval. It would then use this joint posterior to minimize the cost of producing erroneous intervals. The derivations associated with the Bayesian observer-actor model are more involved and beyond the scope of the present work. Yet, we note that under suitable assumptions, the two models would behave similarly (see Methods).</p>
</sec>
<sec id="S7">
<title>Context-dependent central timing</title>
<p id="P36">Our finding suggests that the brain takes into account knowledge of temporal uncertainty and adapts its time keeping mechanisms to temporal statistics in the environment. What neural computations may lead to such sophisticated behavior? One possibility is that the brain implements a formal Bayesian algorithm. For example, populations of neurons might maintain an internal representation of the prior distribution and the likelihood function, multiply them to represent a posterior and produce an estimate by approximating its expectation. Related variants of this scheme are also conceivable. For instance, our results could be accommodated by an ML strategy if the prior would exert its effect indirectly by changing the statistics of noise associated with measurements. Another more attractive possibility that obviates the need for explicit representations of the likelihood function and the prior is for the brain to learn the sensorimotor transformation that would map measurements onto their corresponding Bayesian estimates directly
<xref rid="R30" ref-type="bibr">30</xref>
. This is what our observer model exemplifies: it establishes a deterministic nonlinear mapping function to directly transform measurements to estimates. Evidently, this form of learning must incorporate knowledge about (1) scalar variability, and (2) prior distribution.</p>
<p id="P37">Electrophysiological recordings from sensorimotor structures in monkeys have described computations akin to those our observer model utilizes. For instance, parietal association regions and subcortical neurons in Caudate have been shown to reflect flexible sensorimotor associations
<xref rid="R31" ref-type="bibr">31</xref>
,
<xref rid="R32" ref-type="bibr">32</xref>
. The time course of activity across sensorimotor neurons is believed to represent sensory evidence
<xref rid="R33" ref-type="bibr">33</xref>
, its integration with the prior information
<xref rid="R34" ref-type="bibr">34</xref>
, and the preparatory signals in anticipation of instructed and self-generated action
<xref rid="R35" ref-type="bibr">35</xref>
<xref rid="R37" ref-type="bibr">37</xref>
. The importance of sensorimotor structures in time reproduction is further reinforced by their consistent activation in human neuroimaging studies that involve time sensitive computations
<xref rid="R38" ref-type="bibr">38</xref>
<xref rid="R41" ref-type="bibr">41</xref>
.</p>
<p id="P38">A variety of models have been proposed to explain the perception and use of an interval of time. Information theoretic models attribute the sense of time to the accumulation of tics from a central clock
<xref rid="R11" ref-type="bibr">11</xref>
,
<xref rid="R42" ref-type="bibr">42</xref>
,
<xref rid="R43" ref-type="bibr">43</xref>
; physiological studies have noted a general role for rising neural activity in tracking elapsed time in the brain
<xref rid="R36" ref-type="bibr">36</xref>
,
<xref rid="R37" ref-type="bibr">37</xref>
,
<xref rid="R44" ref-type="bibr">44</xref>
<xref rid="R48" ref-type="bibr">48</xref>
, and biophysical models have been developed that suggest that time may be represented through the dynamics of neuronal networks
<xref rid="R49" ref-type="bibr">49</xref>
. Our work, which does not commit to a specific neural implementation, suggests that the internal sense of elapsed time in the sub-second to seconds range may arise from a plastic sensorimotor process that enables us to operate efficiently in different temporal contexts.</p>
</sec>
</sec>
<sec sec-type="methods" id="S8">
<title>Methods</title>
<sec id="S9">
<title>Psychophysical procedures</title>
<p id="P39">Six human subjects aged 19 to 40 yr participated in this study after giving informed consent. All had normal or corrected-to-normal vision, and all were naïve to the purpose of the experiment. Subjects viewed all stimuli binocularly from a distance of 52 cm on an 17-inch iiyama AS4311U LCD monitor at a resolution of 1024×768 driven by a Intel Macintosh G5 computer at a refresh rate of 85 Hz in a dark, quiet room.</p>
<p id="P40">In a “Ready-Set-Go” time reproduction task, subjects measured certain sample intervals demarcated by a pair of flashed stimuli, and reproduced those intervals by producing time-sensitive manual responses. Each trial began with the presentation of a central fixation point (FP) for 1 s, followed by the presentation of a warning stimulus at a variable distance to the left of the FP. After a variable delay ranging from 0.25 to 0.85 s drawn randomly from a truncated exponential distribution, two 100 ms flashes separated by the sample interval,
<italic>t
<sub>s</sub>
</italic>
, were presented. The first flash, which signified the “Ready” stimulus, was presented at the same distance as the warning stimulus but to the right of the FP. The “Set” stimulus was presented
<italic>t
<sub>s</sub>
</italic>
ms afterwards and 5 deg above the FP (
<xref rid="F1" ref-type="fig">Fig. 1a</xref>
). Subjects were instructed to measure and reproduce the sample interval by pressing the space bar on the keyboard
<italic>t
<sub>s</sub>
</italic>
ms after the presentation of the “Set”. Production times,
<italic>t
<sub>p</sub>
</italic>
, were measured from the center of the “Set” flash (i.e. 50 ms after its onset) to when the key was pressed. When
<italic>t
<sub>p</sub>
</italic>
was sufficiently close to
<italic>t
<sub>s</sub>
</italic>
, the warning stimulus changed from white to green to provide positive feedback and encourage stable performance.</p>
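<p>For concreteness, the timeline of events on a single trial can be sketched in a few lines of code. This is a minimal illustration in Python, not the stimulus-presentation code used in the experiment; the exponential time constant (0.3 s) and the onset-to-onset definition of the flash separation are our assumptions.</p>
<preformat>
import numpy as np

rng = np.random.default_rng()

def truncated_exponential_delay(lo=0.25, hi=0.85, tau=0.3):
    # Resample until the draw falls inside [lo, hi]; tau is an assumed constant.
    while True:
        d = lo + rng.exponential(tau)
        if d <= hi:
            return d

def trial_timeline(t_s):
    """Event times (s) from trial onset for one Ready-Set-Go trial."""
    t = {"fp_on": 0.0}
    t["warning_on"] = 1.0                       # FP shown alone for 1 s
    t["ready_on"] = t["warning_on"] + truncated_exponential_delay()
    t["set_on"] = t["ready_on"] + t_s           # each flash lasts 100 ms
    t["t_p_origin"] = t["set_on"] + 0.050       # t_p runs from the flash centre
    return t
</preformat>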
<p id="P41">All stimuli were circular in shape and were presented on a dark grey background. Except for FP that subtended 0.5 deg of visual angle, all other stimuli were 1.5 deg. To ensure that subjects could not use the layout of the stimuli to adopt a spatial strategy for the time reproduction task (e.g. track an imaginary moving target), we varied the distance of the “Ready” and the warning stimulus from the FP on each trial (range 7.5 to 12.5 deg).</p>
<p id="P42">For each subject, three experimental conditions were tested separately. These conditions were the same in all respects except that, for each condition, the sample intervals were drawn from a different prior probability distribution. All priors were discrete Uniform distributions with 11 values, ranging from 494 to 847 ms for the “Short”, 671 to 1023 ms for the “Intermediate”, and 847 to 1200 ms for “Long” prior condition. Note that to help tease apart the effects of prior condition from sample interval, the priors were chosen to be partially overlapping.</p>
<p id="P43">For each subject, the order in which the three prior conditions were tested was randomized. For each prior condition, subjects were tested after they completed an initial learning stage. Learning was considered complete when the variance and bias of the production times had stabilized (less than 10% change between sessions). The main data for each prior condition were collected in two sessions after leaning for that condition was complete. Learning for each subsequent prior condition started after testing for the preceding prior condition was completed. For 5 out of 6 subjects, the learning was completed by the end of the first session (less than 10% change between first and second sessions). For one subject, learning of the first prior condition was completed after 4 sessions. For this subject, the 5
<sup>th</sup>
and 6
<sup>th</sup>
sessions provided data for the first prior condition. For the other two prior conditions, similar to the other subjects, responses stabilized after one practice session. All subjects typically participated in 3 sessions per week, and each session lasted ~45 minutes (i.e. nearly 500 trials).</p>
<p id="P44">Subjects received positive feedback for responses that fell within a specified window around
<italic>t
<sub>s</sub>
</italic>
(i.e., “correct” trials). To compensate for the increased difficulty associated with longer sample intervals – a natural consequence of scalar timing variability
<xref rid="R2" ref-type="bibr">2</xref>
<xref rid="R4" ref-type="bibr">4</xref>
– the width of this window was scaled with the sample interval with a constant of proportionality,
<italic>k</italic>
. To ensure that the performance was comparable across different prior conditions, the value of
<italic>k</italic>
was controlled by an adaptive one-up one-down procedure which added/subtracted 0.015 to/from
<italic>k</italic>
for each “miss”/“correct” trial. As such, every subject’s performance for every session yielded approximately 50% positively reinforced trials (mean = 51.7%; std=1.33%). For each prior condition, the maximum (minimum) number of “correct” trials corresponded to the intermediate (extreme) sample intervals (
<xref rid="SD1" ref-type="supplementary-material">Supplementary Fig. S3</xref>
).</p>
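<p>The adaptive rule on <italic>k</italic> amounts to a two-line update. Below is a minimal sketch, assuming the feedback window is symmetric about <italic>t<sub>s</sub></italic> with half-width proportional to <italic>t<sub>s</sub></italic>; the text specifies only that the window width scales with the sample interval.</p>
<preformat>
def is_correct(t_p, t_s, k):
    # Positive feedback when the production error is small relative to t_s.
    return abs(t_p - t_s) < k * t_s

def update_k(k, correct, step=0.015):
    # One-up one-down: widen the window after a miss, narrow it after a hit.
    # This holds the positive-feedback rate near 50%, as reported (51.7%).
    return k - step if correct else k + step
</preformat>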
</sec>
<sec id="S10">
<title>The Bayesian estimator</title>
<p id="P45">The noise distribution associated with the measurement stage of the model, determines the distribution of
<italic>t
<sub>m</sub>
</italic>
for a given
<italic>t
<sub>s</sub>
</italic>
,
<italic>p</italic>
(
<italic>t
<sub>m</sub>
</italic>
|
<italic>t
<sub>s</sub>
</italic>
). From the perspective of the observer who makes a measurement
<italic>t
<sub>m</sub>
</italic>
but does not know
<italic>t
<sub>s</sub>
</italic>
, this relationship becomes a function of
<italic>t
<sub>s</sub>
</italic>
known as the likelihood function,
<italic>λ
<sub>t
<sub>m</sub>
</sub>
</italic>
(
<italic>t
<sub>s</sub>
</italic>
) ≡
<italic>p</italic>
(
<italic>t
<sub>m</sub>
</italic>
|
<italic>t
<sub>s</sub>
</italic>
) in which
<italic>t
<sub>m</sub>
</italic>
is fixed. We modelled
<italic>p</italic>
(
<italic>t
<sub>m</sub>
</italic>
|
<italic>t
<sub>s</sub>
</italic>
) as a Gaussian distribution with mean
<italic>t
<sub>s</sub>
</italic>
, and standard deviation
<italic>w
<sub>m</sub>
t
<sub>s</sub>
</italic>
that scaled linearly with
<italic>t
<sub>s</sub>
</italic>
(scalar variability) with a constant coefficient of variation,
<italic>w
<sub>m</sub>
</italic>
.</p>
<disp-formula id="FD3">
<label>(1)</label>
<mml:math id="M3" display="block" overflow="scroll">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>λ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
</mml:mrow>
</mml:msub>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo></mml:mo>
<mml:mi>p</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo></mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:msqrt>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>π</mml:mi>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
</mml:mfrac>
<mml:msup>
<mml:mrow>
<mml:mi>e</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mo></mml:mo>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo></mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</disp-formula>
<p id="P46">Similarly, the production noise distribution,
<italic>p</italic>
(
<italic>t
<sub>p</sub>
</italic>
|
<italic>t
<sub>e</sub>
</italic>
), was assumed to be Gaussian with zero mean and a constant coefficient of variation,
<italic>w
<sub>p</sub>
</italic>
:
<disp-formula id="FD4">
<label>(2)</label>
<mml:math id="M4" display="block" overflow="scroll">
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>p</mml:mi>
</mml:msub>
<mml:mo></mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>e</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:msqrt>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>π</mml:mi>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mi>p</mml:mi>
</mml:msub>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>e</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
</mml:mfrac>
<mml:msup>
<mml:mrow>
<mml:mi>e</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mo></mml:mo>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>p</mml:mi>
</mml:msub>
<mml:mo></mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>e</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mi>p</mml:mi>
</mml:msub>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>e</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</disp-formula>
</p>
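<p>Equations (1) and (2) translate directly into simulation code. A minimal sketch (function and variable names are ours):</p>
<preformat>
import numpy as np

rng = np.random.default_rng(1)

def measure(t_s, w_m):
    # Eq. (1): noisy measurement of t_s with sd w_m * t_s (scalar variability).
    return rng.normal(t_s, w_m * t_s)

def produce(t_e, w_p):
    # Eq. (2): noisy production around the estimate t_e with sd w_p * t_e.
    return rng.normal(t_e, w_p * t_e)
</preformat>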
<p id="P47">To simplify derivations, we modelled the discrete uniform prior distributions used in the experiment as continuous. For each prior condition, we specified the domain of sample intervals between
<inline-formula>
<mml:math id="M5" overflow="scroll">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
<mml:mrow>
<mml:mo>min</mml:mo>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>
and
<inline-formula>
<mml:math id="M6" overflow="scroll">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
<mml:mrow>
<mml:mo>max</mml:mo>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>
based to the minimum and maximum values used in the experiment.</p>
<disp-formula id="FD5">
<label>(3)</label>
<mml:math id="M7" display="block" overflow="scroll">
<mml:mrow>
<mml:mi>π</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo>=</mml:mo>
<mml:mrow>
<mml:mo>{</mml:mo>
<mml:mrow>
<mml:mtable columnalign="left">
<mml:mtr columnalign="left">
<mml:mtd columnalign="left">
<mml:mrow>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
<mml:mrow>
<mml:mo>max</mml:mo>
</mml:mrow>
</mml:msubsup>
<mml:mo></mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
<mml:mrow>
<mml:mo>min</mml:mo>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mrow>
<mml:mtext>for</mml:mtext>
<mml:mspace width="0.38889em"></mml:mspace>
<mml:mspace width="0.38889em"></mml:mspace>
<mml:mspace width="0.38889em"></mml:mspace>
<mml:msubsup>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
<mml:mrow>
<mml:mo>min</mml:mo>
</mml:mrow>
</mml:msubsup>
<mml:mo></mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo></mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
<mml:mrow>
<mml:mo>max</mml:mo>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr columnalign="left">
<mml:mtd columnalign="left">
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mrow>
<mml:mtext>otherwise</mml:mtext>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<p id="P48">The resulting posterior,
<italic>π</italic>
(
<italic>t
<sub>s</sub>
</italic>
|
<italic>t
<sub>m</sub>
</italic>
), is the product of the prior multiplied by the likelihood function and appropriately normalized:
<disp-formula id="FD6">
<label>(4)</label>
<mml:math id="M8" display="block" overflow="scroll">
<mml:mrow>
<mml:mi>π</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo></mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>π</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
<mml:mi>p</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo></mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mrow>
<mml:mi>π</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
<mml:mi>p</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo></mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>d</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:mfrac>
<mml:mo>=</mml:mo>
<mml:mrow>
<mml:mo>{</mml:mo>
<mml:mrow>
<mml:mtable columnalign="left">
<mml:mtr columnalign="left">
<mml:mtd columnalign="left">
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo></mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mo></mml:mo>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
<mml:mrow>
<mml:mo>min</mml:mo>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
<mml:mrow>
<mml:mo>max</mml:mo>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:msubsup>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo></mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
<mml:mi>d</mml:mi>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mrow>
<mml:mtext>for</mml:mtext>
<mml:mspace width="0.38889em"></mml:mspace>
<mml:mspace width="0.38889em"></mml:mspace>
<mml:mspace width="0.38889em"></mml:mspace>
<mml:msubsup>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
<mml:mrow>
<mml:mo>min</mml:mo>
</mml:mrow>
</mml:msubsup>
<mml:mo></mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo></mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
<mml:mrow>
<mml:mo>max</mml:mo>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr columnalign="left">
<mml:mtd columnalign="left">
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mrow>
<mml:mtext>otherwise</mml:mtext>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
</p>
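<p>Because the prior is flat on its support, the posterior of Equation (4) is simply the likelihood renormalized over the prior support. A grid-based sketch (our illustration, not the authors’ implementation):</p>
<preformat>
import numpy as np

def posterior_grid(t_m, w_m, ts_min, ts_max, n=1000):
    # Eq. (4): posterior over t_s for one measurement, flat prior on the support.
    ts = np.linspace(ts_min, ts_max, n)
    sd = w_m * ts                                       # scalar variability
    like = np.exp(-0.5 * ((t_m - ts) / sd) ** 2) / sd   # Eq. (1), up to a constant
    return ts, like / np.trapz(like, ts)                # renormalize on the grid
</preformat>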
<p id="P49">The Bayesian estimator computes a single estimate,
<italic>t
<sub>e</sub>
</italic>
, from the posterior by considering an objective cost function,
<italic>l</italic>
(
<italic>t
<sub>e</sub>
</italic>
,
<italic>t
<sub>s</sub>
</italic>
), that quantifies the cost of erroneously estimating
<italic>t
<sub>s</sub>
</italic>
as
<italic>t
<sub>e</sub>
</italic>
. The Bayesian estimate minimizes the posterior expected loss, which is the integral of the cost function for each
<italic>t
<sub>s</sub>
</italic>
, weighted by its posterior probability,
<italic>π</italic>
(
<italic>t
<sub>s</sub>
</italic>
|
<italic>t
<sub>m</sub>
</italic>
):
<disp-formula id="FD7">
<label>(5)</label>
<mml:math id="M9" display="block" overflow="scroll">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>e</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>f</mml:mi>
</mml:mrow>
<mml:mi>l</mml:mi>
</mml:msub>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo>=</mml:mo>
<mml:munder>
<mml:mrow>
<mml:mo>arg</mml:mo>
<mml:mo>min</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>e</mml:mi>
</mml:msub>
</mml:mrow>
</mml:munder>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mrow>
<mml:mi>l</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>e</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
<mml:mi>π</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo></mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
<mml:mi>d</mml:mi>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mrow>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
</p>
<p id="P50">Notice that the optimal estimate,
<italic>t
<sub>e</sub>
</italic>
, is a deterministic function of the measured sample,
<italic>f
<sub>l</sub>
</italic>
(
<italic>t
<sub>m</sub>
</italic>
), in which the subscript
<italic>l</italic>
reflects the particular cost/loss function.</p>
<p id="P51">For the ML model, the estimator
<italic>f
<sub>ML</sub>
</italic>
(
<italic>t
<sub>m</sub>
</italic>
) is associated with the sample interval that maximizes the likelihood function, which can be derived from
<xref rid="FD3" ref-type="disp-formula">Equation (1)</xref>
:
<disp-formula id="FD8">
<label>(6)</label>
<mml:math id="M10" display="block" overflow="scroll">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>f</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>M</mml:mi>
<mml:mi>L</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo>=</mml:mo>
<mml:munder>
<mml:mrow>
<mml:mo>arg</mml:mo>
<mml:mo>max</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
</mml:msub>
</mml:mrow>
</mml:munder>
<mml:msub>
<mml:mrow>
<mml:mi>λ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
</mml:mrow>
</mml:msub>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>+</mml:mo>
<mml:msqrt>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>+</mml:mo>
<mml:mn>4</mml:mn>
<mml:msubsup>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:msubsup>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
</p>
<p id="P52">The ML estimate is proportional to measurement. For a plausible range of values for
<italic>w
<sub>m</sub>
</italic>
, the constant of proportionality would be less than one, and thus the ML estimator would systematically underestimate the sample. For instance, for 0.1 <
<italic>w
<sub>m</sub>
</italic>
< 0.3, the constant of proportionality would vary between 0.99 and 0.92.</p>
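<p>The constant of proportionality in Equation (6) is easy to verify numerically; a two-line check:</p>
<preformat>
import numpy as np

def ml_gain(w_m):
    # Eq. (6): f_ML(t_m) = gain * t_m, with gain < 1 (slight underestimation).
    return (-1 + np.sqrt(1 + 4 * w_m ** 2)) / (2 * w_m ** 2)

print(ml_gain(0.1), ml_gain(0.3))   # ~0.990 and ~0.923
</preformat>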
<p id="P53">For the MAP rule, the cost function is −
<italic>δ</italic>
(
<italic>t
<sub>e</sub>
</italic>
<italic>t
<sub>s</sub>
</italic>
), where
<italic>δ</italic>
(.) denotes the Dirac delta function. The corresponding estimator function,
<italic>f
<sub>MAP</sub>
</italic>
(
<italic>t
<sub>m</sub>
</italic>
), is specified by the mode of the posterior as follows:
<disp-formula id="FD9">
<label>(7)</label>
<mml:math id="M11" display="block" overflow="scroll">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>f</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">MAP</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo>=</mml:mo>
<mml:munder>
<mml:mrow>
<mml:mo>arg</mml:mo>
<mml:mo>max</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
</mml:msub>
</mml:mrow>
</mml:munder>
<mml:mi>π</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo></mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo>=</mml:mo>
<mml:mrow>
<mml:mo>{</mml:mo>
<mml:mrow>
<mml:mtable columnalign="left">
<mml:mtr columnalign="left">
<mml:mtd columnalign="left">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
<mml:mrow>
<mml:mo>min</mml:mo>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mrow>
<mml:mtext>for</mml:mtext>
</mml:mrow>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo></mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
<mml:mrow>
<mml:mo>min</mml:mo>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr columnalign="left">
<mml:mtd columnalign="left">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>f</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>M</mml:mi>
<mml:mi>L</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mrow>
<mml:mtext>for</mml:mtext>
</mml:mrow>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
<mml:mrow>
<mml:mo>min</mml:mo>
</mml:mrow>
</mml:msubsup>
<mml:mo></mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo></mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
<mml:mrow>
<mml:mo>max</mml:mo>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr columnalign="left">
<mml:mtd columnalign="left">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
<mml:mrow>
<mml:mo>max</mml:mo>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mrow>
<mml:mtext>for</mml:mtext>
</mml:mrow>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo></mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
<mml:mrow>
<mml:mo>max</mml:mo>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
</p>
<p id="P54">For the BLS rule, the cost function is the squared error, (
<italic>t
<sub>e</sub>
</italic>
<italic>t
<sub>s</sub>
</italic>
)
<xref rid="R2" ref-type="bibr">2</xref>
, and the estimator function,
<italic>f
<sub>BLS</sub>
</italic>
(
<italic>t
<sub>m</sub>
</italic>
) corresponds to the mean of the posterior:
<disp-formula id="FD10">
<label>(8)</label>
<mml:math id="M12" display="block" overflow="scroll">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>f</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">BLS</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mo></mml:mo>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
<mml:mrow>
<mml:mo>min</mml:mo>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
<mml:mrow>
<mml:mo>max</mml:mo>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:msubsup>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mi>p</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo></mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
<mml:mi>d</mml:mi>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mo></mml:mo>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
<mml:mrow>
<mml:mo>min</mml:mo>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
<mml:mrow>
<mml:mo>max</mml:mo>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:msubsup>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo></mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
<mml:mi>d</mml:mi>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:math>
</disp-formula>
</p>
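<p>Both mapping rules reduce to short numerical routines. A sketch under the flat-prior assumptions above, with the MAP rule written as the ML estimate clipped to the prior support (Equation (7)) and the BLS rule computed by trapezoidal quadrature (Equation (8)):</p>
<preformat>
import numpy as np

def f_map(t_m, w_m, ts_min, ts_max):
    # Eq. (7): the ML estimate clipped to the support of the prior.
    gain = (-1 + np.sqrt(1 + 4 * w_m ** 2)) / (2 * w_m ** 2)
    return np.clip(gain * t_m, ts_min, ts_max)

def f_bls(t_m, w_m, ts_min, ts_max, n=1000):
    # Eq. (8): posterior mean under the flat prior.
    ts = np.linspace(ts_min, ts_max, n)
    like = np.exp(-0.5 * ((t_m - ts) / (w_m * ts)) ** 2) / (w_m * ts)
    return np.trapz(ts * like, ts) / np.trapz(like, ts)
</preformat>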
</sec>
<sec id="S11">
<title>The Bayesian observer model</title>
<p id="P55">The Bayesian estimator specifies a deterministic mapping from a measurement,
<italic>t
<sub>m</sub>
</italic>
to an estimate,
<italic>t
<sub>e</sub>
</italic>
. But our psychophysical data consists of pairs of sample interval,
<italic>t
<sub>s</sub>
</italic>
and production time,
<italic>t
<sub>p</sub>
</italic>
. Accordingly, we augmented the estimator with a measurement stage and a production stage, which together with the estimator, provide a complete characterization of the relationship between
<italic>t
<sub>s</sub>
</italic>
and
<italic>t
<sub>p</sub>
</italic>
. The model, however, relies on two intermediate variables,
<italic>t
<sub>m</sub>
</italic>
and
<italic>t
<sub>e</sub>
</italic>
, that are psychophysically unobservable (i.e. hidden variables). To remove these variables from the description of the model, we took advantage of a trick common to Bayesian inference: integrating out the hidden variables (i.e. marginalization). Specifically, using the chain rule, we decomposed the joint conditional distribution of the variables
<italic>t
<sub>m</sub>
</italic>
,
<italic>t
<sub>e</sub>
</italic>
, and
<italic>t
<sub>p</sub>
</italic>
into three intervening conditional probabilities:
<disp-formula id="FD11">
<label>(9)</label>
<mml:math id="M13" display="block" overflow="scroll">
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>p</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>e</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo></mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mi>p</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo>=</mml:mo>
<mml:mi>p</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>p</mml:mi>
</mml:msub>
<mml:mo></mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>e</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mi>p</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
<mml:mi>p</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>e</mml:mi>
</mml:msub>
<mml:mo></mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mi>p</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
<mml:mi>p</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo></mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mi>p</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:math>
</disp-formula>
</p>
<p id="P56">We used the serial architecture of our model (
<xref rid="F3" ref-type="fig">Fig. 3a</xref>
) to simplify the dependencies on the right-hand side of
<xref rid="FD11" ref-type="disp-formula">Equation (9)</xref>
. In the first term, because the conditional probability of
<italic>t
<sub>p</sub>
</italic>
is fully specified by
<italic>t
<sub>e</sub>
</italic>
and
<italic>w
<sub>p</sub>
</italic>
(from
<xref rid="FD4" ref-type="disp-formula">Equation (2)</xref>
), we can safely omit the other conditional variables (
<italic>t
<sub>m</sub>
</italic>
,
<italic>t
<sub>s</sub>
</italic>
and
<italic>w
<sub>m</sub>
</italic>
). In the second term, the only relevant conditional variable is
<italic>t
<sub>m</sub>
</italic>
since it specifies
<italic>t
<sub>e</sub>
</italic>
deterministically. And for the third term,
<italic>w
<sub>p</sub>
</italic>
has no bearing on
<italic>t
<sub>m</sub>
</italic>
. Incorporating these simplifications, the joint conditional distribution can be rewritten as follows:
<disp-formula id="FD12">
<label>(10)</label>
<mml:math id="M14" display="block" overflow="scroll">
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>p</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>e</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo></mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mi>p</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo>=</mml:mo>
<mml:mi>p</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>p</mml:mi>
</mml:msub>
<mml:mo></mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>e</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mi>p</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
<mml:mi>p</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>e</mml:mi>
</mml:msub>
<mml:mo></mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
<mml:mi>p</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo></mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:math>
</disp-formula>
</p>
<p id="P57">Moreover, because
<italic>t
<sub>e</sub>
</italic>
is a deterministic function of
<italic>t
<sub>m</sub>
</italic>
, i.e.,
<italic>t
<sub>e</sub>
</italic>
=
<italic>f</italic>
(
<italic>t
<sub>m</sub>
</italic>
), the conditional probability
<italic>p</italic>
(
<italic>t
<sub>e</sub>
</italic>
|
<italic>t
<sub>m</sub>
</italic>
) can be written as a Dirac delta function as follows:
<disp-formula id="FD13">
<label>(11)</label>
<mml:math id="M15" display="block" overflow="scroll">
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>p</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>e</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo></mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mi>p</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo>=</mml:mo>
<mml:mi>p</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>p</mml:mi>
</mml:msub>
<mml:mo></mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>e</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mi>p</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
<mml:mi>δ</mml:mi>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>e</mml:mi>
</mml:msub>
<mml:mo></mml:mo>
<mml:mi>f</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
<mml:mi>p</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo></mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:math>
</disp-formula>
</p>
<p id="P58">We can eliminate the dependence on the two hidden variables
<italic>t
<sub>m</sub>
</italic>
and
<italic>t
<sub>e</sub>
</italic>
by marginalization:
<disp-formula id="FD14">
<label>(12)</label>
<mml:math id="M16" display="block" overflow="scroll">
<mml:mtable columnalign="left">
<mml:mtr>
<mml:mtd>
<mml:mi>p</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>p</mml:mi>
</mml:msub>
<mml:mo></mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mi>p</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo>=</mml:mo>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>p</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>e</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo></mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mi>p</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
<mml:mi>d</mml:mi>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mi>d</mml:mi>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>e</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mo>=</mml:mo>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>p</mml:mi>
</mml:msub>
<mml:mo></mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>e</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mi>p</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
<mml:mi>δ</mml:mi>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>e</mml:mi>
</mml:msub>
<mml:mo></mml:mo>
<mml:mi>f</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
<mml:mi>p</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo></mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
<mml:mi>d</mml:mi>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mi>d</mml:mi>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>e</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mo>=</mml:mo>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>p</mml:mi>
</mml:msub>
<mml:mo></mml:mo>
<mml:mi>f</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mi>p</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
<mml:mi>p</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo></mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
<mml:mi>d</mml:mi>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
</p>
<p id="P59">The integrand is the product of the conditional probability distributions associated with the measurement and production stages. By substituting these distributions from
<xref rid="FD3" ref-type="disp-formula">Equations (1)</xref>
and
<xref rid="FD4" ref-type="disp-formula">(2)</xref>
, and
<italic>f</italic>
(
<italic>t
<sub>m</sub>
</italic>
) from
<xref rid="FD8" ref-type="disp-formula">Equation (6)</xref>
,
<xref rid="FD9" ref-type="disp-formula">(7)</xref>
, or
<xref rid="FD10" ref-type="disp-formula">(8)</xref>
(depending on the estimator of interest),
<xref rid="FD14" ref-type="disp-formula">Equation (12)</xref>
provides the conditional probability of
<italic>t
<sub>p</sub>
</italic>
for a given
<italic>t
<sub>s</sub>
</italic>
as a function of the model parameters,
<italic>w
<sub>m</sub>
</italic>
and
<italic>w
<sub>p</sub>
</italic>
.</p>
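<p>Equation (12) can be evaluated numerically on a grid of hypothetical measurements. A sketch; the ±5 SD integration range, the grid size, and the function names are our choices:</p>
<preformat>
import numpy as np
from scipy.stats import norm

def p_tp_given_ts(t_p, t_s, w_m, w_p, f):
    # Eq. (12): marginalize the hidden measurement t_m out of the serial model.
    # f maps one measurement to one estimate, e.g.
    #   f = lambda x: f_bls(x, w_m, ts_min, ts_max)
    tm = np.linspace(t_s * (1 - 5 * w_m), t_s * (1 + 5 * w_m), 2000)
    p_m = norm.pdf(tm, loc=t_s, scale=w_m * t_s)      # Eq. (1)
    te = np.array([f(x) for x in tm])                 # deterministic t_e = f(t_m)
    p_p = norm.pdf(t_p, loc=te, scale=w_p * te)       # Eq. (2)
    return np.trapz(p_p * p_m, tm)
</preformat>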
</sec>
<sec id="S12">
<title>The Bayesian observer-actor model</title>
<p id="P60">The observer model described in the previous section obtains an estimate that minimizes a cost built around the estimate and the actual time interval. It was formulated to minimize the expected loss associated with erroneous estimates, not production times. A more elaborate Bayesian “observer-actor” model would seek to minimize expected loss with respect to the ensuing production times (and not the intervening estimates). This elaboration demands two considerations. First, the uncertainty associated with both the measurement and the production phases must be taken in to account. As such, the relevant probability distribution would be the joint posterior of the sample interval and production time conditioned on the measurement,
<italic>π</italic>
(
<italic>t
<sub>p</sub>
</italic>
,
<italic>t
<sub>s</sub>
</italic>
|
<italic>t
<sub>m</sub>
</italic>
). Second, the definition of the cost function should concern the sample interval and production time; i.e.,
<italic>l</italic>
(
<italic>t
<sub>p</sub>
</italic>
,
<italic>t
<sub>s</sub>
</italic>
). The appropriate posterior expected loss could then be minimized as follows:
<disp-formula id="FD15">
<label>(13)</label>
<mml:math id="M17" display="block" overflow="scroll">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>e</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>f</mml:mi>
</mml:mrow>
<mml:mi>l</mml:mi>
</mml:msub>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo>=</mml:mo>
<mml:munder>
<mml:mrow>
<mml:mo>arg</mml:mo>
<mml:mo>min</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>e</mml:mi>
</mml:msub>
</mml:mrow>
</mml:munder>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mrow>
<mml:mi>l</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>p</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
<mml:mi>π</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>p</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo></mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
<mml:mi>d</mml:mi>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mi>d</mml:mi>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>p</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mrow>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
</p>
<p id="P61">The Delta and least squares cost functions in this optimization problem do not correspond to the mode and mean of the joint posterior, and derivation of the optimal solution is more involved and beyond the scope of this manuscript. Nonetheless, we note that the corresponding estimators for the Bayesian observer-actor are qualitatively similar to those we derived for the MAP and BLS mapping rules in our simplified Bayesian observer model.</p>
</sec>
<sec id="S13">
<title>Fitting the model to the data</title>
<p id="P62">We assumed that
<italic>t
<sub>p</sub>
</italic>
values associated with any
<italic>t
<sub>s</sub>
</italic>
were independent across trials, and thus expressed the joint conditional probability of individual
<italic>t
<sub>p</sub>
</italic>
values across all
<italic>N</italic>
trials, and across the three prior conditions, by the product of their individual conditional probabilities:
<disp-formula id="FD16">
<label>(14)</label>
<mml:math id="M18" display="block" overflow="scroll">
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>p</mml:mi>
<mml:mn>1</mml:mn>
</mml:msubsup>
<mml:mo>,</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>p</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>,</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>p</mml:mi>
<mml:mn>3</mml:mn>
</mml:msubsup>
<mml:mo>,</mml:mo>
<mml:mtext>K</mml:mtext>
<mml:mo>,</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>p</mml:mi>
<mml:mi>N</mml:mi>
</mml:msubsup>
<mml:mo></mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mi>p</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo>=</mml:mo>
<mml:munderover>
<mml:mo></mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>N</mml:mi>
</mml:munderover>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>p</mml:mi>
<mml:mi>i</mml:mi>
</mml:msubsup>
<mml:mo></mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mi>p</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
</p>
<p id="P63">The products change to sums by taking the logarithm of both sides:
<disp-formula id="FD17">
<label>(15)</label>
<mml:math id="M19" display="block" overflow="scroll">
<mml:mrow>
<mml:mo>log</mml:mo>
<mml:mi>p</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>p</mml:mi>
<mml:mn>1</mml:mn>
</mml:msubsup>
<mml:mo>,</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>p</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>,</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>p</mml:mi>
<mml:mn>3</mml:mn>
</mml:msubsup>
<mml:mo>,</mml:mo>
<mml:mtext>K</mml:mtext>
<mml:mo>,</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>p</mml:mi>
<mml:mi>N</mml:mi>
</mml:msubsup>
<mml:mo></mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mi>p</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo>=</mml:mo>
<mml:munderover>
<mml:mo></mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>N</mml:mi>
</mml:munderover>
<mml:mrow>
<mml:mo>log</mml:mo>
<mml:mi>p</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>p</mml:mi>
<mml:mi>i</mml:mi>
</mml:msubsup>
<mml:mo></mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mi>m</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mi>p</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
</p>
<p id="P64">Each term in the sum was derived from
<xref rid="FD14" ref-type="disp-formula">Equation (12)</xref>
, after substituting
<italic>f</italic>
(
<italic>t
<sub>m</sub>
</italic>
) with the appropriate estimator function (
<xref rid="FD8" ref-type="disp-formula">Equation (6)</xref>
,
<xref rid="FD9" ref-type="disp-formula">(7)</xref>
or
<xref rid="FD10" ref-type="disp-formula">(8)</xref>
).</p>
<p id="P65">We used this equation to maximize the likelihood of model parameters,
<italic>w
<sub>m</sub>
</italic>
and
<italic>w
<sub>p</sub>
</italic>
, across all
<italic>t
<sub>s</sub>
</italic>
and
<italic>t
<sub>p</sub>
</italic>
values measured psychophysically. The maximization was performed with the ‘fminsearch’ function in MATLAB, which implements the Nelder–Mead downhill simplex method. Integrals of
<xref rid="FD10" ref-type="disp-formula">Equations (8)</xref>
and
<xref rid="FD14" ref-type="disp-formula">(12)</xref>
are not analytically solvable and were thus approximated numerically using the trapezoidal rule. We evaluated the success of the fitting exercise by repeating the search with different initial values; the likelihood function near the fitted parameters was highly concave, and the fitting procedure was stable with respect to initial values.</p>
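<p>The fit can be reproduced with standard tools. The sketch below uses SciPy’s Nelder–Mead in place of MATLAB’s fminsearch, together with the marginalization sketch above; the data arrays and the estimator factory make_f are placeholders, not the original code.</p>
<preformat>
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, ts_trials, tp_trials, make_f):
    # Eq. (15): negative summed log probability of the observed t_p values.
    w_m, w_p = params
    if w_m <= 0 or w_p <= 0:
        return np.inf                     # keep the search in the valid region
    f = make_f(w_m)                       # estimator mapping for this w_m
    ll = sum(np.log(p_tp_given_ts(t_p, t_s, w_m, w_p, f))
             for t_s, t_p in zip(ts_trials, tp_trials))
    return -ll

# fit = minimize(neg_log_likelihood, x0=[0.1, 0.1],
#                args=(ts_trials, tp_trials, make_f), method="Nelder-Mead")
</preformat>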
</sec>
</sec>
<sec sec-type="supplementary-material" id="S14">
<title>Supplementary Material</title>
<supplementary-material content-type="local-data" id="SD1">
<label>1</label>
<media xlink:href="NIHMS209209-supplement-1.pdf" mimetype="application" mime-subtype="pdf" xlink:type="simple" id="d37e4009" position="anchor"></media>
</supplementary-material>
</sec>
</body>
<back>
<ack id="S16">
<p>This work was supported by a fellowship from Helen Hay Whitney Foundation, HHMI, and research grants EY11378 and RR000166 from the NIH. We are grateful to G. Horwitz (G.H.) for sharing resources and to G.H. and V. de Lafuente for their feedback on the manuscript.</p>
</ack>
<fn-group>
<fn id="FN2" fn-type="con">
<p>
<bold>Contributions</bold>
</p>
<p>M.J. designed the experiment, collected and analyzed the data and performed the computational modelling. M.N.S. helped in data analysis and provided intellectual support throughout the study. M.J. and M.N.S. wrote the manuscript.</p>
</fn>
</fn-group>
<ref-list>
<ref id="R1">
<label>1</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mauk</surname>
<given-names>MD</given-names>
</name>
<name>
<surname>Buonomano</surname>
<given-names>DV</given-names>
</name>
</person-group>
<article-title>The neural basis of temporal processing</article-title>
<source>Annu Rev Neurosci</source>
<volume>27</volume>
<fpage>307</fpage>
<lpage>340</lpage>
<year>2004</year>
<pub-id pub-id-type="pmid">15217335</pub-id>
</element-citation>
</ref>
<ref id="R2">
<label>2</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gallistel</surname>
<given-names>CR</given-names>
</name>
<name>
<surname>Gibbon</surname>
<given-names>J</given-names>
</name>
</person-group>
<article-title>Time, rate, and conditioning</article-title>
<source>Psychol Rev</source>
<volume>107</volume>
<fpage>289</fpage>
<lpage>344</lpage>
<year>2000</year>
<pub-id pub-id-type="pmid">10789198</pub-id>
</element-citation>
</ref>
<ref id="R3">
<label>3</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rakitin</surname>
<given-names>BC</given-names>
</name>
<etal></etal>
</person-group>
<article-title>Scalar expectancy theory and peak-interval timing in humans</article-title>
<source>J Exp Psychol Anim Behav Process</source>
<volume>24</volume>
<fpage>15</fpage>
<lpage>33</lpage>
<year>1998</year>
<pub-id pub-id-type="pmid">9438963</pub-id>
</element-citation>
</ref>
<ref id="R4">
<label>4</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Brannon</surname>
<given-names>EM</given-names>
</name>
<name>
<surname>Libertus</surname>
<given-names>ME</given-names>
</name>
<name>
<surname>Meck</surname>
<given-names>WH</given-names>
</name>
<name>
<surname>Woldorff</surname>
<given-names>MG</given-names>
</name>
</person-group>
<article-title>Electrophysiological measures of time processing in infant and adult brains: Weber’s Law holds</article-title>
<source>J Cogn Neurosci</source>
<volume>20</volume>
<fpage>193</fpage>
<lpage>203</lpage>
<year>2008</year>
<pub-id pub-id-type="pmid">18275328</pub-id>
</element-citation>
</ref>
<ref id="R5">
<label>5</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gibbon</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Church</surname>
<given-names>RM</given-names>
</name>
</person-group>
<article-title>Comparison of variance and covariance patterns in parallel and serial theories of timing</article-title>
<source>J Exp Anal Behav</source>
<volume>57</volume>
<fpage>393</fpage>
<lpage>406</lpage>
<year>1992</year>
<pub-id pub-id-type="pmid">1602270</pub-id>
</element-citation>
</ref>
<ref id="R6">
<label>6</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Reutimann</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Yakovlev</surname>
<given-names>V</given-names>
</name>
<name>
<surname>Fusi</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Senn</surname>
<given-names>W</given-names>
</name>
</person-group>
<article-title>Climbing neuronal activity as an event-based cortical representation of time</article-title>
<source>J Neurosci</source>
<volume>24</volume>
<fpage>3295</fpage>
<lpage>3303</lpage>
<year>2004</year>
<pub-id pub-id-type="pmid">15056709</pub-id>
</element-citation>
</ref>
<ref id="R7">
<label>7</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Matell</surname>
<given-names>MS</given-names>
</name>
<name>
<surname>Meck</surname>
<given-names>WH</given-names>
</name>
</person-group>
<article-title>Cortico-striatal circuits and interval timing: coincidence detection of oscillatory processes</article-title>
<source>Brain Res Cogn Brain Res</source>
<volume>21</volume>
<fpage>139</fpage>
<lpage>170</lpage>
<year>2004</year>
<pub-id pub-id-type="pmid">15464348</pub-id>
</element-citation>
</ref>
<ref id="R8">
<label>8</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Ahrens</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Sahani</surname>
<given-names>M</given-names>
</name>
</person-group>
<person-group person-group-type="editor">
<name>
<surname>Platt</surname>
<given-names>JC</given-names>
</name>
<name>
<surname>Koller</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Singer</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Roweis</surname>
<given-names>S</given-names>
</name>
</person-group>
<article-title>Inferring Elapsed Time from Stochastic Neural Processes</article-title>
<source>Advances in Neural Information Processing Systems</source>
<publisher-name>MIT Press</publisher-name>
<year>2008</year>
</element-citation>
</ref>
<ref id="R9">
<label>9</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Casella</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Berger</surname>
<given-names>RL</given-names>
</name>
</person-group>
<source>Statistical Inference</source>
<publisher-name>Duxbury Resource Center</publisher-name>
<publisher-loc>Pacific Grove, CA</publisher-loc>
<year>2002</year>
</element-citation>
</ref>
<ref id="R10">
<label>10</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lewis</surname>
<given-names>PA</given-names>
</name>
<name>
<surname>Miall</surname>
<given-names>RC</given-names>
</name>
</person-group>
<article-title>The precision of temporal judgement: milliseconds, many minutes, and beyond</article-title>
<source>Philos Trans R Soc Lond B Biol Sci</source>
<volume>364</volume>
<fpage>1897</fpage>
<lpage>1905</lpage>
<year>2009</year>
<pub-id pub-id-type="pmid">19487192</pub-id>
</element-citation>
</ref>
<ref id="R11">
<label>11</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Treisman</surname>
<given-names>M</given-names>
</name>
</person-group>
<article-title>Temporal discrimination and the indifference interval. Implications for a model of the “internal clock”</article-title>
<source>Psychol Monogr</source>
<volume>77</volume>
<fpage>1</fpage>
<lpage>31</lpage>
<year>1963</year>
<pub-id pub-id-type="pmid">5877542</pub-id>
</element-citation>
</ref>
<ref id="R12">
<label>12</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hollingworth</surname>
<given-names>HL</given-names>
</name>
</person-group>
<article-title>The central tendency of judgement</article-title>
<source>Arch Psychol</source>
<volume>4</volume>
<fpage>44</fpage>
<lpage>52</lpage>
<year>1913</year>
</element-citation>
</ref>
<ref id="R13">
<label>13</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Parducci</surname>
<given-names>A</given-names>
</name>
</person-group>
<article-title>Category judgment: a range-frequency model</article-title>
<source>Psychol Rev</source>
<volume>72</volume>
<fpage>407</fpage>
<lpage>418</lpage>
<year>1965</year>
<pub-id pub-id-type="pmid">5852241</pub-id>
</element-citation>
</ref>
<ref id="R14">
<label>14</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Helson</surname>
<given-names>H</given-names>
</name>
</person-group>
<article-title>Adaptation-level as a basis for a quantitative theory of frames of reference</article-title>
<source>Psychol Rev</source>
<volume>55</volume>
<fpage>297</fpage>
<lpage>313</lpage>
<year>1948</year>
<pub-id pub-id-type="pmid">18891200</pub-id>
</element-citation>
</ref>
<ref id="R15">
<label>15</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kersten</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Mamassian</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Yuille</surname>
<given-names>A</given-names>
</name>
</person-group>
<article-title>Object perception as Bayesian inference</article-title>
<source>Annu Rev Psychol</source>
<volume>55</volume>
<fpage>271</fpage>
<lpage>304</lpage>
<year>2004</year>
<pub-id pub-id-type="pmid">14744217</pub-id>
</element-citation>
</ref>
<ref id="R16">
<label>16</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kording</surname>
<given-names>KP</given-names>
</name>
<name>
<surname>Wolpert</surname>
<given-names>DM</given-names>
</name>
</person-group>
<article-title>Bayesian integration in sensorimotor learning</article-title>
<source>Nature</source>
<volume>427</volume>
<fpage>244</fpage>
<lpage>247</lpage>
<year>2004</year>
<pub-id pub-id-type="pmid">14724638</pub-id>
</element-citation>
</ref>
<ref id="R17">
<label>17</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Knill</surname>
<given-names>DC</given-names>
</name>
<name>
<surname>Richards</surname>
<given-names>W</given-names>
</name>
</person-group>
<source>Perception as Bayesian Inference</source>
<publisher-name>Cambridge University Press</publisher-name>
<publisher-loc>Cambridge</publisher-loc>
<year>1996</year>
</element-citation>
</ref>
<ref id="R18">
<label>18</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Mamassian</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Landy</surname>
<given-names>MS</given-names>
</name>
<name>
<surname>Maloney</surname>
<given-names>LT</given-names>
</name>
</person-group>
<person-group person-group-type="editor">
<name>
<surname>Rao</surname>
<given-names>RPN</given-names>
</name>
<name>
<surname>Olshausen</surname>
<given-names>BA</given-names>
</name>
<name>
<surname>Lewicki</surname>
<given-names>MS</given-names>
</name>
</person-group>
<article-title>Bayesian modelling of visual perception</article-title>
<source>Probabilistic Models of the Brain: Perception and Neural Function</source>
<fpage>239</fpage>
<lpage>286</lpage>
<publisher-name>MIT Press</publisher-name>
<publisher-loc>Cambridge, MA</publisher-loc>
<year>2002</year>
</element-citation>
</ref>
<ref id="R19">
<label>19</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Miyazaki</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Nozaki</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Nakajima</surname>
<given-names>Y</given-names>
</name>
</person-group>
<article-title>Testing Bayesian models of human coincidence timing</article-title>
<source>J Neurophysiol</source>
<volume>94</volume>
<fpage>395</fpage>
<lpage>399</lpage>
<year>2005</year>
<pub-id pub-id-type="pmid">15716368</pub-id>
</element-citation>
</ref>
<ref id="R20">
<label>20</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hudson</surname>
<given-names>TE</given-names>
</name>
<name>
<surname>Maloney</surname>
<given-names>LT</given-names>
</name>
<name>
<surname>Landy</surname>
<given-names>MS</given-names>
</name>
</person-group>
<article-title>Optimal compensation for temporal uncertainty in movement planning</article-title>
<source>PLoS Comput Biol</source>
<volume>4</volume>
<fpage>e1000130</fpage>
<year>2008</year>
<pub-id pub-id-type="pmid">18654619</pub-id>
</element-citation>
</ref>
<ref id="R21">
<label>21</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Bernardo</surname>
<given-names>JM</given-names>
</name>
<name>
<surname>Smith</surname>
<given-names>AFM</given-names>
</name>
</person-group>
<source>Bayesian Theory</source>
<publisher-name>Wiley</publisher-name>
<publisher-loc>New York</publisher-loc>
<year>1994</year>
</element-citation>
</ref>
<ref id="R22">
<label>22</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Stocker</surname>
<given-names>AA</given-names>
</name>
<name>
<surname>Simoncelli</surname>
<given-names>EP</given-names>
</name>
</person-group>
<article-title>Noise characteristics and prior expectations in human visual speed perception</article-title>
<source>Nat Neurosci</source>
<volume>9</volume>
<fpage>578</fpage>
<lpage>585</lpage>
<year>2006</year>
<pub-id pub-id-type="pmid">16547513</pub-id>
</element-citation>
</ref>
<ref id="R23">
<label>23</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Trommershauser</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Maloney</surname>
<given-names>LT</given-names>
</name>
<name>
<surname>Landy</surname>
<given-names>MS</given-names>
</name>
</person-group>
<article-title>Statistical decision theory and the selection of rapid, goal-directed movements</article-title>
<source>J Opt Soc Am A Opt Image Sci Vis</source>
<volume>20</volume>
<fpage>1419</fpage>
<lpage>1433</lpage>
<year>2003</year>
<pub-id pub-id-type="pmid">12868646</pub-id>
</element-citation>
</ref>
<ref id="R24">
<label>24</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mamassian</surname>
<given-names>P</given-names>
</name>
</person-group>
<article-title>Overconfidence in an objective anticipatory motor task</article-title>
<source>Psychol Sci</source>
<volume>19</volume>
<fpage>601</fpage>
<lpage>606</lpage>
<year>2008</year>
<pub-id pub-id-type="pmid">18578851</pub-id>
</element-citation>
</ref>
<ref id="R25">
<label>25</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ernst</surname>
<given-names>MO</given-names>
</name>
<name>
<surname>Banks</surname>
<given-names>MS</given-names>
</name>
</person-group>
<article-title>Humans integrate visual and haptic information in a statistically optimal fashion</article-title>
<source>Nature</source>
<volume>415</volume>
<fpage>429</fpage>
<lpage>433</lpage>
<year>2002</year>
<pub-id pub-id-type="pmid">11807554</pub-id>
</element-citation>
</ref>
<ref id="R26">
<label>26</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jacobs</surname>
<given-names>RA</given-names>
</name>
</person-group>
<article-title>Optimal integration of texture and motion cues to depth</article-title>
<source>Vision Res</source>
<volume>39</volume>
<fpage>3621</fpage>
<lpage>3629</lpage>
<year>1999</year>
<pub-id pub-id-type="pmid">10746132</pub-id>
</element-citation>
</ref>
<ref id="R27">
<label>27</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tassinari</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Hudson</surname>
<given-names>TE</given-names>
</name>
<name>
<surname>Landy</surname>
<given-names>MS</given-names>
</name>
</person-group>
<article-title>Combining priors and noisy visual cues in a rapid pointing task</article-title>
<source>J Neurosci</source>
<volume>26</volume>
<fpage>10154</fpage>
<lpage>10163</lpage>
<year>2006</year>
<pub-id pub-id-type="pmid">17021171</pub-id>
</element-citation>
</ref>
<ref id="R28">
<label>28</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Graf</surname>
<given-names>EW</given-names>
</name>
<name>
<surname>Warren</surname>
<given-names>PA</given-names>
</name>
<name>
<surname>Maloney</surname>
<given-names>LT</given-names>
</name>
</person-group>
<article-title>Explicit estimation of visual uncertainty in human motion processing</article-title>
<source>Vision Res</source>
<volume>45</volume>
<fpage>3050</fpage>
<lpage>3059</lpage>
<year>2005</year>
<pub-id pub-id-type="pmid">16182335</pub-id>
</element-citation>
</ref>
<ref id="R29">
<label>29</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kording</surname>
<given-names>KP</given-names>
</name>
<name>
<surname>Wolpert</surname>
<given-names>DM</given-names>
</name>
</person-group>
<article-title>The loss function of sensorimotor learning</article-title>
<source>Proc Natl Acad Sci U S A</source>
<volume>101</volume>
<fpage>9839</fpage>
<lpage>9842</lpage>
<year>2004</year>
<pub-id pub-id-type="pmid">15210973</pub-id>
</element-citation>
</ref>
<ref id="R30">
<label>30</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Raphan</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Simoncelli</surname>
<given-names>EP</given-names>
</name>
</person-group>
<article-title>Learning to be Bayesian without Supervision</article-title>
<source>Advances in Neural Information Processing Systems</source>
<fpage>1145</fpage>
<lpage>1152</lpage>
<publisher-name>MIT Press</publisher-name>
<year>2006</year>
</element-citation>
</ref>
<ref id="R31">
<label>31</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Toth</surname>
<given-names>LJ</given-names>
</name>
<name>
<surname>Assad</surname>
<given-names>JA</given-names>
</name>
</person-group>
<article-title>Dynamic coding of behaviourally relevant stimuli in parietal cortex</article-title>
<source>Nature</source>
<volume>415</volume>
<fpage>165</fpage>
<lpage>168</lpage>
<year>2002</year>
<pub-id pub-id-type="pmid">11805833</pub-id>
</element-citation>
</ref>
<ref id="R32">
<label>32</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lauwereyns</surname>
<given-names>J</given-names>
</name>
<etal></etal>
</person-group>
<article-title>Feature-based anticipation of cues that predict reward in monkey caudate nucleus</article-title>
<source>Neuron</source>
<volume>33</volume>
<fpage>463</fpage>
<lpage>473</lpage>
<year>2002</year>
<pub-id pub-id-type="pmid">11832232</pub-id>
</element-citation>
</ref>
<ref id="R33">
<label>33</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Shadlen</surname>
<given-names>MN</given-names>
</name>
<name>
<surname>Newsome</surname>
<given-names>WT</given-names>
</name>
</person-group>
<article-title>Neural basis of a perceptual decision in the parietal cortex (area LIP) of the rhesus monkey</article-title>
<source>J Neurophysiol</source>
<volume>86</volume>
<fpage>1916</fpage>
<lpage>1936</lpage>
<year>2001</year>
<pub-id pub-id-type="pmid">11600651</pub-id>
</element-citation>
</ref>
<ref id="R34">
<label>34</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gold</surname>
<given-names>JI</given-names>
</name>
<name>
<surname>Law</surname>
<given-names>CT</given-names>
</name>
<name>
<surname>Connolly</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Bennur</surname>
<given-names>S</given-names>
</name>
</person-group>
<article-title>The relative influences of priors and sensory evidence on an oculomotor decision variable during perceptual learning</article-title>
<source>J Neurophysiol</source>
<volume>100</volume>
<fpage>2653</fpage>
<lpage>2668</lpage>
<year>2008</year>
<pub-id pub-id-type="pmid">18753326</pub-id>
</element-citation>
</ref>
<ref id="R35">
<label>35</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Janssen</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Shadlen</surname>
<given-names>MN</given-names>
</name>
</person-group>
<article-title>A representation of the hazard rate of elapsed time in macaque area LIP</article-title>
<source>Nat Neurosci</source>
<volume>8</volume>
<fpage>234</fpage>
<lpage>241</lpage>
<year>2005</year>
<pub-id pub-id-type="pmid">15657597</pub-id>
</element-citation>
</ref>
<ref id="R36">
<label>36</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Maimon</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Assad</surname>
<given-names>JA</given-names>
</name>
</person-group>
<article-title>A cognitive signal for the proactive timing of action in macaque LIP</article-title>
<source>Nat Neurosci</source>
<volume>9</volume>
<fpage>948</fpage>
<lpage>955</lpage>
<year>2006</year>
<pub-id pub-id-type="pmid">16751764</pub-id>
</element-citation>
</ref>
<ref id="R37">
<label>37</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Schultz</surname>
<given-names>W</given-names>
</name>
<name>
<surname>Romo</surname>
<given-names>R</given-names>
</name>
</person-group>
<article-title>Role of primate basal ganglia and frontal cortex in the internal generation of movements. I. Preparatory activity in the anterior striatum</article-title>
<source>Exp Brain Res</source>
<volume>91</volume>
<fpage>363</fpage>
<lpage>384</lpage>
<year>1992</year>
<pub-id pub-id-type="pmid">1483512</pub-id>
</element-citation>
</ref>
<ref id="R38">
<label>38</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Meck</surname>
<given-names>WH</given-names>
</name>
<name>
<surname>Penney</surname>
<given-names>TB</given-names>
</name>
<name>
<surname>Pouthas</surname>
<given-names>V</given-names>
</name>
</person-group>
<article-title>Cortico-striatal representation of time in animals and humans</article-title>
<source>Curr Opin Neurobiol</source>
<volume>18</volume>
<fpage>145</fpage>
<lpage>152</lpage>
<year>2008</year>
<pub-id pub-id-type="pmid">18708142</pub-id>
</element-citation>
</ref>
<ref id="R39">
<label>39</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cui</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Stetson</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Montague</surname>
<given-names>PR</given-names>
</name>
<name>
<surname>Eagleman</surname>
<given-names>DM</given-names>
</name>
</person-group>
<article-title>Ready...go: amplitude of the fMRI signal encodes expectation of cue arrival time</article-title>
<source>PLoS Biol</source>
<volume>7</volume>
<fpage>e1000167</fpage>
<year>2009</year>
<pub-id pub-id-type="pmid">19652698</pub-id>
</element-citation>
</ref>
<ref id="R40">
<label>40</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nobre</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Correa</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Coull</surname>
<given-names>J</given-names>
</name>
</person-group>
<article-title>The hazards of time</article-title>
<source>Curr Opin Neurobiol</source>
<volume>17</volume>
<fpage>465</fpage>
<lpage>470</lpage>
<year>2007</year>
<pub-id pub-id-type="pmid">17709239</pub-id>
</element-citation>
</ref>
<ref id="R41">
<label>41</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rao</surname>
<given-names>SM</given-names>
</name>
<name>
<surname>Mayer</surname>
<given-names>AR</given-names>
</name>
<name>
<surname>Harrington</surname>
<given-names>DL</given-names>
</name>
</person-group>
<article-title>The evolution of brain activation during temporal processing</article-title>
<source>Nat Neurosci</source>
<volume>4</volume>
<fpage>317</fpage>
<lpage>323</lpage>
<year>2001</year>
<pub-id pub-id-type="pmid">11224550</pub-id>
</element-citation>
</ref>
<ref id="R42">
<label>42</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Allan</surname>
<given-names>LG</given-names>
</name>
</person-group>
<article-title>Perception of time</article-title>
<source>Percept Psychophys</source>
<volume>26</volume>
<fpage>340</fpage>
<lpage>354</lpage>
<year>1979</year>
</element-citation>
</ref>
<ref id="R43">
<label>43</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Creelman</surname>
<given-names>CD</given-names>
</name>
</person-group>
<article-title>Human discrimination of auditory duration</article-title>
<source>J Acoust Soc Am</source>
<volume>34</volume>
<fpage>582</fpage>
<year>1962</year>
</element-citation>
</ref>
<ref id="R44">
<label>44</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lee</surname>
<given-names>IH</given-names>
</name>
<name>
<surname>Assad</surname>
<given-names>JA</given-names>
</name>
</person-group>
<article-title>Putaminal activity for simple reactions or self-timed movements</article-title>
<source>J Neurophysiol</source>
<volume>89</volume>
<fpage>2528</fpage>
<lpage>2537</lpage>
<year>2003</year>
<pub-id pub-id-type="pmid">12611988</pub-id>
</element-citation>
</ref>
<ref id="R45">
<label>45</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mita</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Mushiake</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Shima</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Matsuzaka</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Tanji</surname>
<given-names>J</given-names>
</name>
</person-group>
<article-title>Interval time coding by neurons in the presupplementary and supplementary motor areas</article-title>
<source>Nat Neurosci</source>
<volume>12</volume>
<fpage>502</fpage>
<lpage>507</lpage>
<year>2009</year>
<pub-id pub-id-type="pmid">19252498</pub-id>
</element-citation>
</ref>
<ref id="R46">
<label>46</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Okano</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Tanji</surname>
<given-names>J</given-names>
</name>
</person-group>
<article-title>Neuronal activities in the primate motor fields of the agranular frontal cortex preceding visually triggered and self-paced movement</article-title>
<source>Exp Brain Res</source>
<volume>66</volume>
<fpage>155</fpage>
<lpage>166</lpage>
<year>1987</year>
<pub-id pub-id-type="pmid">3582529</pub-id>
</element-citation>
</ref>
<ref id="R47">
<label>47</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tanaka</surname>
<given-names>M</given-names>
</name>
</person-group>
<article-title>Cognitive signals in the primate motor thalamus predict saccade timing</article-title>
<source>J Neurosci</source>
<volume>27</volume>
<fpage>12109</fpage>
<lpage>12118</lpage>
<year>2007</year>
<pub-id pub-id-type="pmid">17978052</pub-id>
</element-citation>
</ref>
<ref id="R48">
<label>48</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tanaka</surname>
<given-names>M</given-names>
</name>
</person-group>
<article-title>Inactivation of the central thalamus delays self-timed saccades</article-title>
<source>Nat Neurosci</source>
<volume>9</volume>
<fpage>20</fpage>
<lpage>22</lpage>
<year>2006</year>
<pub-id pub-id-type="pmid">16341209</pub-id>
</element-citation>
</ref>
<ref id="R49">
<label>49</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Buonomano</surname>
<given-names>DV</given-names>
</name>
<name>
<surname>Maass</surname>
<given-names>W</given-names>
</name>
</person-group>
<article-title>State-dependent computations: spatiotemporal processing in cortical networks</article-title>
<source>Nat Rev Neurosci</source>
<volume>10</volume>
<fpage>113</fpage>
<lpage>125</lpage>
<year>2009</year>
<pub-id pub-id-type="pmid">19145235</pub-id>
</element-citation>
</ref>
</ref-list>
</back>
<floats-group>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption>
<p>The Ready-Set-Go time reproduction task. (
<bold>a</bold>
) Sequence of events during a trial. Appearance of a central spot indicated the start of the trial. Subjects were instructed to fixate the central spot and maintain fixation throughout the trial. A white “feedback” spot was visible to the left of the fixation point. After a random delay (0.25 to 0.85 s), two briefly flashed cues – “Ready” and “Set” – were presented in sequence. Subjects were instructed to estimate the sample interval demarcated by the time between the “Ready” and “Set” cues and to reproduce it immediately afterwards. The production times were measured from the time of “Set” to the time subjects responded via a key-press. When production times were within an experimentally adjusted window around the target interval (see Methods), the feedback spot turned green to provide positive feedback. (
<bold>b</bold>
) Distribution of sample intervals. In each session, sample intervals were drawn randomly from one of three partially overlapping discrete Uniform prior distributions (i.e., “Short”, “Intermediate”, and “Long”) shown by the black, dark red and light red bar charts. Subsequent figures use the same color convention to show results associated with these prior conditions. (
<bold>c</bold>
) Feedback schedule. The width of the window for which production times were positively reinforced (green area) scaled with the sample interval (see Methods). No feedback was provided for early and late responses. The plot shows an example schedule for the “Intermediate” prior condition.</p>
</caption>
<graphic xlink:href="nihms209209f1"></graphic>
</fig>
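<!--
A minimal sketch (hypothetical Python; the Weber-like constant k and the
function name are illustrative, not taken from the record) of the scalar
feedback rule described in Figure 1c: a production time earns positive
feedback when it falls within a window whose width grows in proportion
to the sample interval.

def positive_feedback(t_p, t_s, k=0.1):
    """True when the production time t_p lies inside the reinforcement
    window around the sample interval t_s; the half-width k * t_s
    scales with t_s, so longer samples get wider windows."""
    return abs(t_p - t_s) < k * t_s
-->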
<fig id="F2" position="float">
<label>Figure 2</label>
<caption>
<p>Time reproduction in different temporal contexts. Individual production times for every trial (small dots), and their averages for each sample interval (large circles connected with thick lines) are shown for three prior conditions for a typical subject. Average production times deviated from the line of equality (diagonal dashed line) towards the mean of the priors (horizontal dashed lines). Prior-dependent biases were strongest for the “Long” prior condition. Color conventions are the same as in
<xref rid="F1" ref-type="fig">Fig. 1b</xref>
.</p>
</caption>
<graphic xlink:href="nihms209209f2"></graphic>
</fig>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption>
<p>The observer model for time reproduction. (
<bold>a</bold>
) The three-stage architecture of the model. In the first stage, the sample interval,
<italic>t
<sub>s</sub>
</italic>
, is measured. The relationship between the measured interval,
<italic>t
<sub>m</sub>
</italic>
, and
<italic>t
<sub>s</sub>
</italic>
is characterized by measurement noise,
<italic>p</italic>
(
<italic>t
<sub>m</sub>
</italic>
|
<italic>t
<sub>s</sub>
</italic>
), which was modelled as a zero-mean Gaussian function whose standard deviation grows linearly with
<italic>t
<sub>s</sub>
</italic>
(i.e. scalar variability). The second stage is the estimator; i.e., the deterministic function,
<italic>f</italic>
(
<italic>t
<sub>m</sub>
</italic>
), that maps measurement
<italic>t
<sub>m</sub>
</italic>
to estimate
<italic>t
<sub>e</sub>
</italic>
. The third stage uses
<italic>t
<sub>e</sub>
</italic>
to produce interval
<italic>t
<sub>p</sub>
</italic>
. The conditional dependence of
<italic>t
<sub>p</sub>
</italic>
on
<italic>t
<sub>e</sub>
</italic>
,
<italic>p</italic>
(
<italic>t
<sub>p</sub>
</italic>
|
<italic>t
<sub>e</sub>
</italic>
), was characterized by production noise, which was modelled by a zero-mean Gaussian distribution whose standard deviation scales linearly with
<italic>t
<sub>e</sub>
</italic>
. (
<bold>b–d</bold>
) The deterministic mapping functions associated with the ML, MAP and BLS models, respectively.</p>
</caption>
<graphic xlink:href="nihms209209f3"></graphic>
</fig>
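<!--
A minimal simulation sketch (hypothetical Python with numpy; the Weber
fractions w_m and w_p and the identity estimator are placeholder
assumptions) of one pass through the three-stage observer model of
Figure 3a: scalar measurement noise, a deterministic mapping f, then
scalar production noise.

import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(t_s, f, w_m=0.1, w_p=0.1):
    """One trial of the three-stage model.
    Stage 1: t_m ~ N(t_s, (w_m * t_s)^2), i.e. scalar variability.
    Stage 2: deterministic estimate t_e = f(t_m).
    Stage 3: t_p ~ N(t_e, (w_p * t_e)^2)."""
    t_m = rng.normal(t_s, w_m * t_s)   # noisy measurement of the sample
    t_e = f(t_m)                       # mapping from measurement to estimate
    t_p = rng.normal(t_e, w_p * t_e)   # noisy production of the estimate
    return t_p

# Example: the identity estimator (the ML mapping of Fig. 4a) on a 0.8 s sample.
t_p = simulate_trial(0.8, f=lambda t_m: t_m)
-->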
<fig id="F4" position="float">
<label>Figure 4</label>
<caption>
<p>ML, MAP and BLS estimators. The three panels schematically represent how ML, MAP and BLS estimates are computed, respectively. Upward arrows in black and gray show two example sample intervals. Vertical dashed lines represent the noise-perturbed measurements associated with those sample intervals. Measured intervals differ from the corresponding samples, as shown by the misalignment between the upward arrows and their corresponding vertical dashed lines. The likelihood functions associated with the two measurements are shown on the far right of each panel (rotated 90 degrees). These likelihood functions are plotted with respect to the measurements, as shown by the reflection of the measured interval on the diagonal (horizontal dashed lines). (
<bold>a</bold>
) ML estimator. The peak of the likelihood function determines the estimate (filled circles, thick left arrow). The corresponding mapping function,
<italic>f
<sub>ML</sub>
</italic>
, for all possible measurements is shown by the solid black line with the two example cases superimposed. Because the likelihood function is Gaussian and centered on the measurements, this function is the identity function. (
<bold>b</bold>
) MAP estimator. On the right side of the panel, the posterior distributions (truncated Gaussian functions) for the two measurements are computed by multiplying their associated likelihood functions by the prior (gray bar chart). MAP estimates are computed from the mode of the posterior (filled circles). The corresponding mapping function,
<italic>f
<sub>MAP</sub>
</italic>
, is the identity function limited by the domain of the prior. (
<bold>c</bold>
) BLS estimator. Same as panel
<bold>b</bold>
except that for BLS, the mean of the posterior determines the estimate. The resulting mapping function,
<italic>f
<sub>BLS</sub>
</italic>
, is sigmoidal in shape.</p>
</caption>
<graphic xlink:href="nihms209209f4"></graphic>
</fig>
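<!--
A minimal numerical sketch (hypothetical Python with numpy; the grid of
prior values and the Weber fraction are illustrative) of the three
mapping functions in Figure 4, evaluated for a single measurement: ML
takes the peak of the Gaussian likelihood (the identity), MAP takes the
mode of the posterior (the identity restricted to the prior's support),
and BLS takes the mean of the posterior (a sigmoidal function of t_m).

import numpy as np

def mappings(t_m, prior_support, w_m=0.1):
    """Return (f_ML, f_MAP, f_BLS) for one measurement t_m, given a
    discrete uniform prior over prior_support and Gaussian measurement
    noise whose standard deviation is w_m * t_s (scalar variability)."""
    t_s = np.asarray(prior_support, dtype=float)
    # Likelihood p(t_m | t_s), viewed as a function of t_s.
    like = np.exp(-0.5 * ((t_m - t_s) / (w_m * t_s)) ** 2) / (w_m * t_s)
    post = like / like.sum()          # uniform prior: posterior = normalized likelihood
    f_ml = t_m                        # peak of the likelihood (identity mapping)
    f_map = t_s[np.argmax(post)]      # mode of the posterior
    f_bls = float(np.dot(post, t_s))  # mean of the posterior
    return f_ml, f_map, f_bls

# Example with an illustrative "Short"-like support of ten intervals (in s).
print(mappings(0.55, prior_support=np.linspace(0.49, 0.85, 10)))
-->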
<fig id="F5" position="float">
<label>Figure 5</label>
<caption>
<p>Time reproduction behavior in humans and model observers. (
<bold>a</bold>
) For each sample interval (referred to by subscript
<italic>i</italic>
) in each prior condition, we computed two statistics: BIAS
<sub>i</sub>
and VAR
<sub>i</sub>
. BIAS
<sub>i</sub>
is the average difference between the production times and the sample interval, and VAR
<sub>i</sub>
is the corresponding variance. As an example, the plot shows how BIAS
<sub>i</sub>
and VAR
<sub>i</sub>
were computed for the largest sample interval associated with the “Long” prior condition for one subject (same format as in
<xref rid="F2" ref-type="fig">Fig. 2</xref>
). For this distribution of production times (histogram), BIAS
<sub>i</sub>
is the difference between the solid horizontal red line and the horizontal dashed line, and VAR
<sub>i</sub>
is the corresponding variance. For each prior condition, we computed two summary statistics: BIAS is the root mean square of BIAS
<sub>i</sub>
, and VAR is the average of VAR
<sub>i</sub>
across sample intervals. (
<bold>b</bold>
) VAR
<sup>1/2</sup>
versus BIAS for three prior conditions for the same subject as in panel
<bold>a</bold>
. On a plot of VAR
<sup>1/2</sup>
against BIAS, the locus of a constant root-mean-square-error (RMSE) value is a quarter circle. Dashed quarter circles show the loci of RMSE values associated with the VAR
<sup>1/2</sup>
and BIAS derived from the subject’s production times. (
<bold>c</bold>
Simulated production times from the ML model best fitted to the data in
<bold>a</bold>
. (
<bold>d</bold>
) The scatter of VAR
<sup>1/2</sup>
and BIAS of the best-fitted ML model for three prior conditions (small dots) computed from 100 simulations like the one shown in
<bold>c</bold>
. The VAR
<sup>1/2</sup>
and BIAS of the subject are plotted for comparison (same as in
<bold>b</bold>
). (
<bold>e</bold>
,
<bold>f</bold>
), and (
<bold>g</bold>
,
<bold>h</bold>
) are in the same format as in (
<bold>c,d</bold>
) and show results for the best-fitted MAP and BLS models respectively. Color conventions are the same as in
<xref rid="F1" ref-type="fig">Fig. 1b</xref>
.</p>
</caption>
<graphic xlink:href="nihms209209f5"></graphic>
</fig>
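<!--
A minimal sketch (hypothetical Python with numpy) of the summary
statistics defined in Figure 5a,b: per-interval BIAS_i and VAR_i, the
per-condition BIAS (root mean square of the BIAS_i) and VAR (average of
the VAR_i), and the RMSE whose constant-value loci are the quarter
circles in the (BIAS, VAR^(1/2)) plane.

import numpy as np

def summary_stats(production_times, sample_intervals):
    """production_times: one array of t_p values per sample interval;
    sample_intervals: the matching t_s values.
    Returns (BIAS, VAR, RMSE) for one prior condition."""
    bias_i = np.array([np.mean(tp) - ts
                       for tp, ts in zip(production_times, sample_intervals)])
    var_i = np.array([np.var(tp) for tp in production_times])
    bias = np.sqrt(np.mean(bias_i ** 2))  # root mean square of the BIAS_i
    var = np.mean(var_i)                  # average of the VAR_i
    rmse = np.sqrt(bias ** 2 + var)       # radius of the quarter circle
    return bias, var, rmse
-->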
<fig id="F6" position="float">
<label>Figure 6</label>
<caption>
<p>Time reproduction behavior in humans and model observers – model comparison. (
<bold>a</bold>
) Average BIAS (squares) and VAR
<sup>1/2</sup>
(circles) computed from 100 simulations of the best-fitted ML model as a function of BIAS and VAR
<sup>1/2</sup>
computed directly from psychophysical data for all 6 subjects and all 3 prior conditions. The inset shows the Weber fractions of the measurement and production noise (
<italic>w
<sub>p</sub>
</italic>
versus
<italic>w
<sub>m</sub>
</italic>
) of the best-fitted ML model for the six subjects. (
<bold>b,c</bold>
) Same as in
<bold>a</bold>
for the MAP and BLS models, respectively. In each panel, each subject contributed 6 data points; i.e., 3 prior conditions (black, dark red, and light red) × 2 metrics (BIAS and VAR
<sup>1/2</sup>
).</p>
</caption>
<graphic xlink:href="nihms209209f6"></graphic>
</fig>
</floats-group>
</pmc>
<affiliations>
<list>
<country>
<li>États-Unis</li>
</country>
<region>
<li>Washington (État)</li>
</region>
</list>
<tree>
<country name="États-Unis">
<region name="Washington (État)">
<name sortKey="Jazayeri, Mehrdad" sort="Jazayeri, Mehrdad" uniqKey="Jazayeri M" first="Mehrdad" last="Jazayeri">Mehrdad Jazayeri</name>
</region>
<name sortKey="Shadlen, Michael N" sort="Shadlen, Michael N" uniqKey="Shadlen M" first="Michael N." last="Shadlen">Michael N. Shadlen</name>
</country>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Pmc/Checkpoint
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 001D22 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd -nk 001D22 | SxmlIndent | more

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Pmc
   |étape=   Checkpoint
   |type=    RBID
   |clé=     PMC:2916084
   |texte=   Temporal context calibrates interval timing
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/RBID.i   -Sk "pubmed:20581842" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024