Exploration server on haptic devices

Warning: this site is under development!
Warning: this site was generated automatically from raw corpora.
The information has therefore not been validated.

Combining Symbolic Cues with Sensory Input and Prior Experience in an Iterative Bayesian Framework

Internal identifier: 001815 (Pmc/Checkpoint); previous: 001814; next: 001816

Combining Symbolic Cues with Sensory Input and Prior Experience in an Iterative Bayesian Framework

Authors: Frederike H. Petzschner [Germany]; Paul Maier [Germany]; Stefan Glasauer [Germany]

Source:

RBID : PMC:3417299

Abstract

Perception and action are the result of an integration of various sources of information, such as current sensory input, prior experience, or the context in which a stimulus occurs. Often, the interpretation is not trivial and hence needs to be learned from the co-occurrence of stimuli. Yet, how do we combine such diverse information to guide our action? Here we use a distance production-reproduction task to investigate the influence of auxiliary symbolic cues, sensory input, and prior experience on human performance under three different conditions that vary in the information provided. Our results indicate that subjects can (1) learn the mapping of a verbal, symbolic cue onto the stimulus dimension and (2) integrate symbolic information and prior experience into their estimate of displacements. The behavioral results are explained by two distinct generative models that represent different structural approaches of how a Bayesian observer would combine prior experience, sensory input, and symbolic cue information into a single estimate of displacement. The first model interprets the symbolic cue in the context of categorization, assuming that it reflects information about a distinct underlying stimulus range (categorical model). The second model applies a multi-modal integration approach and treats the symbolic cue as additional sensory input to the system, which is combined with the current sensory measurement and the subjects’ prior experience (cue-combination model). Notably, both models account equally well for the observed behavior despite their different structural assumptions. The present work thus provides evidence that humans can interpret abstract symbolic information and combine it with other types of information such as sensory input and prior experience. The similar explanatory power of the two models further suggests that issues such as categorization and cue-combination could be explained by alternative probabilistic approaches.
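The cue-combination model described in the abstract treats the symbolic cue as one more Gaussian source of evidence alongside the sensory measurement and the prior. A minimal sketch of such precision-weighted fusion is shown below; the numeric values are purely illustrative, not the authors' fitted parameters:

```python
def fuse_gaussian_cues(cues):
    """Precision-weighted fusion of independent Gaussian cues.

    Each cue is a (mean, variance) pair. The fused mean is the
    inverse-variance-weighted average of the cue means, and the fused
    variance is the inverse of the summed precisions.
    """
    precisions = [1.0 / var for _, var in cues]
    total = sum(precisions)
    mean = sum(p * m for p, (m, _) in zip(precisions, cues)) / total
    return mean, 1.0 / total

# Prior experience, current sensory measurement, and symbolic cue,
# each expressed as a Gaussian over displacement (illustrative values):
prior = (10.0, 4.0)      # mean displacement learned over past trials
sensory = (14.0, 2.0)    # current noisy measurement
symbolic = (12.0, 8.0)   # verbal cue mapped onto the stimulus dimension
estimate, variance = fuse_gaussian_cues([prior, sensory, symbolic])
# -> estimate ≈ 12.57, variance ≈ 1.14
```

The lower a cue's variance, the more it dominates the fused estimate, which is the mechanism by which such models produce biases toward prior experience when sensory input is unreliable.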


Url:
DOI: 10.3389/fnint.2012.00058
PubMed: 22905024
PubMed Central: 3417299


Affiliations:


Links toward previous steps (curation, corpus...)


Links to Exploration step

PMC:3417299

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Combining Symbolic Cues with Sensory Input and Prior Experience in an Iterative Bayesian Framework</title>
<author>
<name sortKey="Petzschner, Frederike H" sort="Petzschner, Frederike H" uniqKey="Petzschner F" first="Frederike H." last="Petzschner">Frederike H. Petzschner</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Institute for Clinical Neurosciences, Ludwig-Maximilians-University Munich</institution>
<country>Munich, Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>Bernstein Center for Computational Neuroscience Munich</institution>
<country>Munich, Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff3">
<institution>Graduate School of Systemic Neurosciences, Ludwig-Maximilians-University-Munich</institution>
<country>Munich, Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff4">
<institution>Integrated Center for Research and Treatment of Vertigo, Ludwig-Maximilians-University Munich</institution>
<country>Munich, Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Maier, Paul" sort="Maier, Paul" uniqKey="Maier P" first="Paul" last="Maier">Paul Maier</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Institute for Clinical Neurosciences, Ludwig-Maximilians-University Munich</institution>
<country>Munich, Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>Bernstein Center for Computational Neuroscience Munich</institution>
<country>Munich, Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Glasauer, Stefan" sort="Glasauer, Stefan" uniqKey="Glasauer S" first="Stefan" last="Glasauer">Stefan Glasauer</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Institute for Clinical Neurosciences, Ludwig-Maximilians-University Munich</institution>
<country>Munich, Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>Bernstein Center for Computational Neuroscience Munich</institution>
<country>Munich, Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff3">
<institution>Graduate School of Systemic Neurosciences, Ludwig-Maximilians-University-Munich</institution>
<country>Munich, Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff4">
<institution>Integrated Center for Research and Treatment of Vertigo, Ludwig-Maximilians-University Munich</institution>
<country>Munich, Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">22905024</idno>
<idno type="pmc">3417299</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3417299</idno>
<idno type="RBID">PMC:3417299</idno>
<idno type="doi">10.3389/fnint.2012.00058</idno>
<date when="2012">2012</date>
<idno type="wicri:Area/Pmc/Corpus">001E14</idno>
<idno type="wicri:Area/Pmc/Curation">001E14</idno>
<idno type="wicri:Area/Pmc/Checkpoint">001815</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Combining Symbolic Cues with Sensory Input and Prior Experience in an Iterative Bayesian Framework</title>
<author>
<name sortKey="Petzschner, Frederike H" sort="Petzschner, Frederike H" uniqKey="Petzschner F" first="Frederike H." last="Petzschner">Frederike H. Petzschner</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Institute for Clinical Neurosciences, Ludwig-Maximilians-University Munich</institution>
<country>Munich, Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>Bernstein Center for Computational Neuroscience Munich</institution>
<country>Munich, Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff3">
<institution>Graduate School of Systemic Neurosciences, Ludwig-Maximilians-University-Munich</institution>
<country>Munich, Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff4">
<institution>Integrated Center for Research and Treatment of Vertigo, Ludwig-Maximilians-University Munich</institution>
<country>Munich, Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Maier, Paul" sort="Maier, Paul" uniqKey="Maier P" first="Paul" last="Maier">Paul Maier</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Institute for Clinical Neurosciences, Ludwig-Maximilians-University Munich</institution>
<country>Munich, Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>Bernstein Center for Computational Neuroscience Munich</institution>
<country>Munich, Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Glasauer, Stefan" sort="Glasauer, Stefan" uniqKey="Glasauer S" first="Stefan" last="Glasauer">Stefan Glasauer</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Institute for Clinical Neurosciences, Ludwig-Maximilians-University Munich</institution>
<country>Munich, Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>Bernstein Center for Computational Neuroscience Munich</institution>
<country>Munich, Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff3">
<institution>Graduate School of Systemic Neurosciences, Ludwig-Maximilians-University-Munich</institution>
<country>Munich, Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff4">
<institution>Integrated Center for Research and Treatment of Vertigo, Ludwig-Maximilians-University Munich</institution>
<country>Munich, Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Frontiers in Integrative Neuroscience</title>
<idno type="eISSN">1662-5145</idno>
<imprint>
<date when="2012">2012</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>Perception and action are the result of an integration of various sources of information, such as current sensory input, prior experience, or the context in which a stimulus occurs. Often, the interpretation is not trivial and hence needs to be learned from the co-occurrence of stimuli. Yet, how do we combine such diverse information to guide our action? Here we use a distance production-reproduction task to investigate the influence of auxiliary symbolic cues, sensory input, and prior experience on human performance under three different conditions that vary in the information provided. Our results indicate that subjects can (1) learn the mapping of a verbal, symbolic cue onto the stimulus dimension and (2) integrate symbolic information and prior experience into their estimate of displacements. The behavioral results are explained by two distinct generative models that represent different structural approaches of how a Bayesian observer would combine prior experience, sensory input, and symbolic cue information into a single estimate of displacement. The first model interprets the symbolic cue in the context of categorization, assuming that it reflects information about a distinct underlying stimulus range (categorical model). The second model applies a multi-modal integration approach and treats the symbolic cue as additional sensory input to the system, which is combined with the current sensory measurement and the subjects’ prior experience (cue-combination model). Notably, both models account equally well for the observed behavior despite their different structural assumptions. The present work thus provides evidence that humans can interpret abstract symbolic information and combine it with other types of information such as sensory input and prior experience. The similar explanatory power of the two models further suggests that issues such as categorization and cue-combination could be explained by alternative probabilistic approaches.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Adams, W J" uniqKey="Adams W">W. J. Adams</name>
</author>
<author>
<name sortKey="Graf, E W" uniqKey="Graf E">E. W. Graf</name>
</author>
<author>
<name sortKey="Ernst, M O" uniqKey="Ernst M">M. O. Ernst</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Angelaki, D E" uniqKey="Angelaki D">D. E. Angelaki</name>
</author>
<author>
<name sortKey="Gu, Y" uniqKey="Gu Y">Y. Gu</name>
</author>
<author>
<name sortKey="Deangelis, G C" uniqKey="Deangelis G">G. C. DeAngelis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Battaglia, P W" uniqKey="Battaglia P">P. W. Battaglia</name>
</author>
<author>
<name sortKey="Jacobs, R A" uniqKey="Jacobs R">R. A. Jacobs</name>
</author>
<author>
<name sortKey="Aslin, R N" uniqKey="Aslin R">R. N. Aslin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Berniker, M" uniqKey="Berniker M">M. Berniker</name>
</author>
<author>
<name sortKey="Voss, M" uniqKey="Voss M">M. Voss</name>
</author>
<author>
<name sortKey="Kording, K" uniqKey="Kording K">K. Körding</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Burge, J" uniqKey="Burge J">J. Burge</name>
</author>
<author>
<name sortKey="Girshick, A R" uniqKey="Girshick A">A. R. Girshick</name>
</author>
<author>
<name sortKey="Banks, M S" uniqKey="Banks M">M. S. Banks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cheng, K" uniqKey="Cheng K">K. Cheng</name>
</author>
<author>
<name sortKey="Spetch, M L" uniqKey="Spetch M">M. L. Spetch</name>
</author>
<author>
<name sortKey="Hoan, A" uniqKey="Hoan A">A. Hoan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Davidoff, J" uniqKey="Davidoff J">J. Davidoff</name>
</author>
<author>
<name sortKey="Davies, I" uniqKey="Davies I">I. Davies</name>
</author>
<author>
<name sortKey="Roberson, D" uniqKey="Roberson D">D. Roberson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dehaene, S" uniqKey="Dehaene S">S. Dehaene</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Durgin, F H" uniqKey="Durgin F">F. H. Durgin</name>
</author>
<author>
<name sortKey="Akagi, M" uniqKey="Akagi M">M. Akagi</name>
</author>
<author>
<name sortKey="Gallistel, C R" uniqKey="Gallistel C">C. R. Gallistel</name>
</author>
<author>
<name sortKey="Haiken, W" uniqKey="Haiken W">W. Haiken</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ernst, M O" uniqKey="Ernst M">M. O. Ernst</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ernst, M O" uniqKey="Ernst M">M. O. Ernst</name>
</author>
<author>
<name sortKey="Banks, M S" uniqKey="Banks M">M. S. Banks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ernst, M O" uniqKey="Ernst M">M. O. Ernst</name>
</author>
<author>
<name sortKey="Bulthoff, H H" uniqKey="Bulthoff H">H. H. Bülthoff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Etcoff, N L" uniqKey="Etcoff N">N. L. Etcoff</name>
</author>
<author>
<name sortKey="Magee, J J" uniqKey="Magee J">J. J. Magee</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fechner, G T" uniqKey="Fechner G">G.T. Fechner</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Feldman, N H" uniqKey="Feldman N">N. H. Feldman</name>
</author>
<author>
<name sortKey="Griffiths, T L" uniqKey="Griffiths T">T. L. Griffiths</name>
</author>
<author>
<name sortKey="Morgan, J L" uniqKey="Morgan J">J. L. Morgan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hollingworth, H L" uniqKey="Hollingworth H">H. L. Hollingworth</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Huttenlocher, J" uniqKey="Huttenlocher J">J. Huttenlocher</name>
</author>
<author>
<name sortKey="Hedges, L V" uniqKey="Hedges L">L. V. Hedges</name>
</author>
<author>
<name sortKey="Duncan, S" uniqKey="Duncan S">S. Duncan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jacobs, R A" uniqKey="Jacobs R">R. A. Jacobs</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Johnson, J" uniqKey="Johnson J">J. Johnson</name>
</author>
<author>
<name sortKey="Vickers, Z" uniqKey="Vickers Z">Z. Vickers</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jurgens, R" uniqKey="Jurgens R">R. Jürgens</name>
</author>
<author>
<name sortKey="Becker, W" uniqKey="Becker W">W. Becker</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kording, K P" uniqKey="Kording K">K. P. Körding</name>
</author>
<author>
<name sortKey="Beierholm, U" uniqKey="Beierholm U">U. Beierholm</name>
</author>
<author>
<name sortKey="Ma, W J" uniqKey="Ma W">W. J. Ma</name>
</author>
<author>
<name sortKey="Quartz, S" uniqKey="Quartz S">S. Quartz</name>
</author>
<author>
<name sortKey="Tenenbaum, J B" uniqKey="Tenenbaum J">J. B. Tenenbaum</name>
</author>
<author>
<name sortKey="Shams, L" uniqKey="Shams L">L. Shams</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kording, K P" uniqKey="Kording K">K. P. Körding</name>
</author>
<author>
<name sortKey="Wolpert, D M" uniqKey="Wolpert D">D. M. Wolpert</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Langer, M S" uniqKey="Langer M">M. S. Langer</name>
</author>
<author>
<name sortKey="Bulthoff, H H" uniqKey="Bulthoff H">H. H. Bülthoff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Liberman, A M" uniqKey="Liberman A">A. M. Liberman</name>
</author>
<author>
<name sortKey="Harris, K S" uniqKey="Harris K">K. S. Harris</name>
</author>
<author>
<name sortKey="Hoffman, H S" uniqKey="Hoffman H">H. S. Hoffman</name>
</author>
<author>
<name sortKey="Griffith, B C" uniqKey="Griffith B">B. C. Griffith</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lucas, C G" uniqKey="Lucas C">C. G. Lucas</name>
</author>
<author>
<name sortKey="Griffiths, T L" uniqKey="Griffiths T">T. L. Griffiths</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Muller, H J" uniqKey="Muller H">H. J. Müller</name>
</author>
<author>
<name sortKey="Reimann, B" uniqKey="Reimann B">B. Reimann</name>
</author>
<author>
<name sortKey="Krummenacher, J" uniqKey="Krummenacher J">J. Krummenacher</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Petzschner, F H" uniqKey="Petzschner F">F. H. Petzschner</name>
</author>
<author>
<name sortKey="Glasauer, S" uniqKey="Glasauer S">S. Glasauer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stevens, S S" uniqKey="Stevens S">S. S. Stevens</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stocker, A A" uniqKey="Stocker A">A. A. Stocker</name>
</author>
<author>
<name sortKey="Simoncelli, E P" uniqKey="Simoncelli E">E. P. Simoncelli</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stone, J V" uniqKey="Stone J">J. V. Stone</name>
</author>
<author>
<name sortKey="Kerrigan, I S" uniqKey="Kerrigan I">I. S. Kerrigan</name>
</author>
<author>
<name sortKey="Porrill, J" uniqKey="Porrill J">J. Porrill</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Toscano, J C" uniqKey="Toscano J">J. C. Toscano</name>
</author>
<author>
<name sortKey="Mcmurray, B" uniqKey="Mcmurray B">B. McMurray</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Verstynen, T" uniqKey="Verstynen T">T. Verstynen</name>
</author>
<author>
<name sortKey="Sabes, P N" uniqKey="Sabes P">P. N. Sabes</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vincent, B" uniqKey="Vincent B">B. Vincent</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Von Hopffgarten, A" uniqKey="Von Hopffgarten A">A. von Hopffgarten</name>
</author>
<author>
<name sortKey="Bremmer, F" uniqKey="Bremmer F">F. Bremmer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zaidel, A" uniqKey="Zaidel A">A. Zaidel</name>
</author>
<author>
<name sortKey="Turner, A H" uniqKey="Turner A">A. H. Turner</name>
</author>
<author>
<name sortKey="Angelaki, D E" uniqKey="Angelaki D">D. E. Angelaki</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Front Integr Neurosci</journal-id>
<journal-id journal-id-type="iso-abbrev">Front Integr Neurosci</journal-id>
<journal-id journal-id-type="publisher-id">Front. Integr. Neurosci.</journal-id>
<journal-title-group>
<journal-title>Frontiers in Integrative Neuroscience</journal-title>
</journal-title-group>
<issn pub-type="epub">1662-5145</issn>
<publisher>
<publisher-name>Frontiers Research Foundation</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">22905024</article-id>
<article-id pub-id-type="pmc">3417299</article-id>
<article-id pub-id-type="doi">10.3389/fnint.2012.00058</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Combining Symbolic Cues with Sensory Input and Prior Experience in an Iterative Bayesian Framework</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Petzschner</surname>
<given-names>Frederike H.</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<xref ref-type="aff" rid="aff3">
<sup>3</sup>
</xref>
<xref ref-type="aff" rid="aff4">
<sup>4</sup>
</xref>
<xref ref-type="author-notes" rid="fn001">*</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Maier</surname>
<given-names>Paul</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Glasauer</surname>
<given-names>Stefan</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<xref ref-type="aff" rid="aff3">
<sup>3</sup>
</xref>
<xref ref-type="aff" rid="aff4">
<sup>4</sup>
</xref>
</contrib>
</contrib-group>
<aff id="aff1">
<sup>1</sup>
<institution>Institute for Clinical Neurosciences, Ludwig-Maximilians-University Munich</institution>
<country>Munich, Germany</country>
</aff>
<aff id="aff2">
<sup>2</sup>
<institution>Bernstein Center for Computational Neuroscience Munich</institution>
<country>Munich, Germany</country>
</aff>
<aff id="aff3">
<sup>3</sup>
<institution>Graduate School of Systemic Neurosciences, Ludwig-Maximilians-University-Munich</institution>
<country>Munich, Germany</country>
</aff>
<aff id="aff4">
<sup>4</sup>
<institution>Integrated Center for Research and Treatment of Vertigo, Ludwig-Maximilians-University Munich</institution>
<country>Munich, Germany</country>
</aff>
<author-notes>
<fn fn-type="edited-by">
<p>Edited by: Zhuanghua Shi, Ludwig-Maximilians-Universität München, Germany</p>
</fn>
<fn fn-type="edited-by">
<p>Reviewed by: Toemme Noesselt, Otto-von-Guericke-University, Germany; David R. Wozny, Carnegie Mellon University, USA</p>
</fn>
<corresp id="fn001">*Correspondence: Frederike H. Petzschner, Institute for Clinical Neurosciences, Ludwig-Maximilians-Universität, Marchioninistrasse 23, 81377 München, Germany. e-mail:
<email xlink:type="simple">fpetzschner@lrz.uni-muenchen.de</email>
</corresp>
</author-notes>
<pub-date pub-type="epub">
<day>13</day>
<month>8</month>
<year>2012</year>
</pub-date>
<pub-date pub-type="collection">
<year>2012</year>
</pub-date>
<volume>6</volume>
<elocation-id>58</elocation-id>
<history>
<date date-type="received">
<day>11</day>
<month>5</month>
<year>2012</year>
</date>
<date date-type="accepted">
<day>24</day>
<month>7</month>
<year>2012</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright © 2012 Petzschner, Maier and Glasauer.</copyright-statement>
<copyright-year>2012</copyright-year>
<license license-type="open-access" xlink:href="http://www.frontiersin.org/licenseagreement">
<license-p>This is an open-access article distributed under the terms of the
<uri xlink:type="simple" xlink:href="http://creativecommons.org/licenses/by/3.0/">Creative Commons Attribution License</uri>
, which permits use, distribution and reproduction in other forums, provided the original authors and source are credited and subject to any copyright notices concerning any third-party graphics etc.</license-p>
</license>
</permissions>
<abstract>
<p>Perception and action are the result of an integration of various sources of information, such as current sensory input, prior experience, or the context in which a stimulus occurs. Often, the interpretation is not trivial and hence needs to be learned from the co-occurrence of stimuli. Yet, how do we combine such diverse information to guide our action? Here we use a distance production-reproduction task to investigate the influence of auxiliary symbolic cues, sensory input, and prior experience on human performance under three different conditions that vary in the information provided. Our results indicate that subjects can (1) learn the mapping of a verbal, symbolic cue onto the stimulus dimension and (2) integrate symbolic information and prior experience into their estimate of displacements. The behavioral results are explained by two distinct generative models that represent different structural approaches of how a Bayesian observer would combine prior experience, sensory input, and symbolic cue information into a single estimate of displacement. The first model interprets the symbolic cue in the context of categorization, assuming that it reflects information about a distinct underlying stimulus range (categorical model). The second model applies a multi-modal integration approach and treats the symbolic cue as additional sensory input to the system, which is combined with the current sensory measurement and the subjects’ prior experience (cue-combination model). Notably, both models account equally well for the observed behavior despite their different structural assumptions. The present work thus provides evidence that humans can interpret abstract symbolic information and combine it with other types of information such as sensory input and prior experience. The similar explanatory power of the two models further suggests that issues such as categorization and cue-combination could be explained by alternative probabilistic approaches.</p>
</abstract>
<kwd-group>
<kwd>pre-cueing</kwd>
<kwd>path integration</kwd>
<kwd>cue-combination</kwd>
<kwd>multi-modal</kwd>
<kwd>categorization</kwd>
<kwd>experience-dependent prior</kwd>
<kwd>magnitude reproduction</kwd>
<kwd>iterative Bayes</kwd>
</kwd-group>
<counts>
<fig-count count="7"></fig-count>
<table-count count="0"></table-count>
<equation-count count="50"></equation-count>
<ref-count count="35"></ref-count>
<page-count count="18"></page-count>
<word-count count="15497"></word-count>
</counts>
</article-meta>
</front>
<body>
<sec>
<title>Introduction</title>
<p>Because the demands of natural tasks are highly complex and sensory information is corrupted by noise, humans are well versed in exploiting contextual information. To improve efficiency, reduce computational costs, and allow fast adaptation to the outside world, we infer existing dependencies and combine relevant information to guide our perception and action. The sources of information range from simultaneous input from different senses (Ernst and Bülthoff,
<xref ref-type="bibr" rid="B12">2004</xref>
; Angelaki et al.,
<xref ref-type="bibr" rid="B2">2009</xref>
) or distinct input from one sensory modality (Jacobs,
<xref ref-type="bibr" rid="B18">1999</xref>
; Stone et al.,
<xref ref-type="bibr" rid="B30">2009</xref>
), through short- and long-term experience (Adams et al.,
<xref ref-type="bibr" rid="B1">2004</xref>
; Stocker and Simoncelli,
<xref ref-type="bibr" rid="B29">2006</xref>
; Verstynen and Sabes,
<xref ref-type="bibr" rid="B32">2011</xref>
), to abstract expectations and contextual cues in the environment (Langer and Bülthoff,
<xref ref-type="bibr" rid="B23">2001</xref>
).</p>
<p>A possible framework for combining these diverse sources of uncertain information is offered by Bayesian probability theory, which has proven applicable to several of the mentioned issues. It provides a normative, mathematical description of how various sources of information can be merged to obtain a statistically optimal estimate of their cause in the presence of uncertainty. One of the most common applications of the Bayesian approach is multi-modal cue integration, where the provided information about a stimulus results from different sensory modalities, such as vision, audition, or proprioception (Ernst and Banks,
<xref ref-type="bibr" rid="B11">2002</xref>
; Battaglia et al.,
<xref ref-type="bibr" rid="B3">2003</xref>
; Körding et al.,
<xref ref-type="bibr" rid="B21">2007</xref>
).</p>
<p>Senses, however, are not the only source of information that determines our perception. Contextual and symbolic cues can also contribute as a new source of information. In visual search paradigms, contextual cues are known to influence reaction times (e.g., Müller et al.,
<xref ref-type="bibr" rid="B26">2003</xref>
; Vincent,
<xref ref-type="bibr" rid="B33">2011</xref>
). The context can also lead to an internal organization of stimuli into distinct categories that influence perception by leading to an increased ability to discriminate between categories at the expense of discriminability within categories. Examples for category effects range from the perception of speech sounds (Liberman et al.,
<xref ref-type="bibr" rid="B24">1957</xref>
) or colors (Davidoff et al.,
<xref ref-type="bibr" rid="B7">1999</xref>
) to facial expressions (Etcoff and Magee,
<xref ref-type="bibr" rid="B13">1992</xref>
). A Bayesian explanation for category effects in speech perception was offered by Feldman et al. (
<xref ref-type="bibr" rid="B15">2009</xref>
). However, their solution treats only implicitly predefined categories, not auxiliary contextual cues that provide information about these categories (e.g., pre-cueing).</p>
<p>Another type of contextual information comes from the preceding occurrence of a stimulus in the form of prior experience. Bayesian probability theory has been successfully applied to a broad spectrum of studies exploring the effect of short or long-term experience on our current percept (Adams et al.,
<xref ref-type="bibr" rid="B1">2004</xref>
; Stocker and Simoncelli,
<xref ref-type="bibr" rid="B29">2006</xref>
; Verstynen and Sabes,
<xref ref-type="bibr" rid="B32">2011</xref>
). For human estimation of distances and turning angles in a production-reproduction task, we have recently shown that the effect of prior experience results in a varying bias depending on the underlying sample range (Petzschner and Glasauer,
<xref ref-type="bibr" rid="B27">2011</xref>
). The participants’ behavior was best explained by an iterative Bayesian estimate that merged the current noisy measurement with information from short-term prior experience, updated on a trial-by-trial basis.</p>
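<p>A minimal numerical sketch of this kind of estimate, assuming, purely for illustration, that both the short-term prior and the measurement likelihood are Gaussian (the parameter values below are hypothetical, not the fitted values from the cited study): the Bayesian estimate is then a precision-weighted average, so responses regress toward the center of the experienced range.</p>

```python
def fuse(prior_mu, prior_var, meas, meas_var):
    """Precision-weighted fusion of a Gaussian prior with a Gaussian likelihood."""
    k = prior_var / (prior_var + meas_var)  # weight given to the measurement
    post_mu = prior_mu + k * (meas - prior_mu)
    post_var = (1.0 - k) * prior_var
    return post_mu, post_var

# With a prior centered on the middle of the range, a long distance is
# underestimated and a short one overestimated (the range effect).
est_long, _ = fuse(12.0, 9.0, 19.0, 4.0)   # pulled below 19
est_short, _ = fuse(12.0, 9.0, 5.0, 4.0)   # pulled above 5
```

<p>The bias toward the prior mean grows as measurement noise increases relative to prior variance, which is why the same fusion rule predicts range-dependent reproduction errors.</p>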
<p>Sensory input is often embedded not just in the temporal context of prior experience, but occurs together with other indirect cues that provide a contextual environment helping to interpret the sensory input. These indirect or symbolic cues combine with sensory input and experience to yield a unified percept. While there is a considerable body of research on multi-modal sensory fusion, the mechanisms by which symbolic cues are integrated into sensory perception are less well understood.</p>
<p>The present work aims to clarify the role of auxiliary contextual cues on behavior that is known to be influenced by prior experience. We extended our distance production-reproduction task (Petzschner and Glasauer,
<xref ref-type="bibr" rid="B27">2011</xref>
) to include a symbolic cue that supplied additional, but initially uncertain, information about the stimulus value. The symbolic cue values were provided as a written instruction prior to each trial and indicated whether the distance to be reproduced would be “short” or “long.” The cue values corresponded to two ranges of distances. We investigated (1) whether subjects could use such a symbolic cue, which provided reliable but imprecise information about the sample distances, and (2) how this abstract information influenced their estimation process. To evaluate the behavioral results in the cue condition we used two control conditions that mimicked the extreme cases of cue usage. In the first control condition, we presented participants with exactly the same distances in the same order, but without the symbolic cue. In the second control condition the “short” and “long” ranges of displacements were presented in two separate blocks. Thus, if subjects ignored the symbolic cue, we expected the performance in the cue condition to resemble that of the first control condition. If, however, subjects separated their estimates based on the symbolic cue, the behavior should be similar to that of the second control condition.</p>
<p>We then compare the behavioral data to predictions of two distinct Bayesian observer models, the
<italic>categorical</italic>
and the
<italic>cue-combination model</italic>
, which are founded on qualitatively different assumptions about the causal relationship between the sensory stimulus and the symbolic cue and consequently, about how the mapping of the symbolic cue to the stimulus dimension is learned during the experiment. Both models are based on our previously published
<italic>basic iterative model</italic>
(Petzschner and Glasauer,
<xref ref-type="bibr" rid="B27">2011</xref>
, see Figure
<xref ref-type="fig" rid="F1">1</xref>
A) and generate a combined estimate of the distance to be reproduced given the observed stimulus, the symbolic cue, and prior experience. In addition, in both models Kalman filters are used to dynamically update the prior experience and to learn the relation between sensory stimulus and symbolic cue.</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption>
<p>
<bold>Bayesian networks of the generative probabilistic models corresponding to the estimation part (i.e., dependence on previous trials not shown)</bold>
. The assumed probabilistic dependencies are shown as arrows.
<bold>(A)</bold>
Basic iterative model as described in Petzschner and Glasauer (
<xref ref-type="bibr" rid="B27">2011</xref>
). The stimulus
<italic>S</italic>
is a noisy measurement of the target distance
<italic>T</italic>
that is drawn from a single underlying category
<italic>A</italic>
.
<bold>(B)</bold>
Categorical model: the target distance
<italic>T</italic>
and the discrete symbolic cue
<italic>C</italic>
depend on the choice of the underlying category
<italic>A</italic>
. Again, the stimulus
<italic>S</italic>
is a noisy measurement of the target distance
<italic>T</italic>
.
<bold>(C)</bold>
Cue-combination model: The stimulus
<italic>S</italic>
and cue signal
<italic>C</italic>
<sub>mp</sub>
represent both independent noisy measurements of the target distance
<italic>T</italic>
that is drawn from a single underlying category
<italic>A</italic>
. The cue signal
<italic>C</italic>
<sub>mp</sub>
is mapped to the symbolic cue
<italic>C</italic>
. The striped background in
<bold>(A,C)</bold>
indicates that
<italic>T</italic>
is assumed to be drawn from a single category
<italic>A</italic>
in contrast to
<bold>(B)</bold>
where target distance and cue depend on the choice of the underlying category.</p>
</caption>
<graphic xlink:href="fnint-06-00058-g001"></graphic>
</fig>
<p>The two models differ in how the symbolic cue is merged with prior experience and sensory input into a distance estimate. This difference corresponds to different assumptions about the causal outside world structures between the stimulus, the measurement, and the symbolic cue (see Figure
<xref ref-type="fig" rid="F1">1</xref>
). In the
<italic>categorical model</italic>
, the idea is that the symbolic cue helps to identify an underlying stimulus category (Feldman et al.,
<xref ref-type="bibr" rid="B15">2009</xref>
). The model is based on the assumption that in the outside world, in each trial one of two categories is chosen, which determines the range of test distances. The test distance, which is drawn randomly from the respective category, leads to a noisy distance measurement. In addition, the symbolic cue signifies the chosen category with a certain reliability (Figure
<xref ref-type="fig" rid="F1">1</xref>
B). In the
<italic>cue-combination model</italic>
, it is assumed that the symbolic cue provides additional information similar to a sensory signal from a different modality (e.g., Ernst and Banks,
<xref ref-type="bibr" rid="B11">2002</xref>
). The cue-combination model takes a different view of the outside world. Like our previous basic iterative model (Petzschner and Glasauer,
<xref ref-type="bibr" rid="B27">2011</xref>
), it assumes the test distances are drawn from a single range, instead of distinct categories. The chosen test distance leads to a noisy distance measurement and to a noisy cue signal, which determines the symbolic cue (Figure
<xref ref-type="fig" rid="F1">1</xref>
C).</p>
</sec>
<sec sec-type="materials|methods" id="s1">
<title>Materials and Methods</title>
<sec>
<title>Participants</title>
<p>Twenty volunteers (nine female) aged 20–29, all of whom had normal or corrected-to-normal vision and were naive to the purpose of the experiments, took part in the study. Participation was monetarily compensated. The experiments were approved by the local ethics committee and conducted in accordance with the Declaration of Helsinki.</p>
</sec>
<sec>
<title>Experimental setup</title>
<p>Stimuli were viewed binocularly on a PnP monitor driven by an NVIDIA GeForce 8800 GTX graphics card at a frame rate of 60 Hz and with a monitor resolution of 1920 × 1200. All experiments were carried out in complete darkness except for the illumination by the monitor. The real-time virtual reality (VR) was created using Vizard 3.0 (Worldviz,
<uri xlink:type="simple" xlink:href="http://www.worldviz.com/">http://www.worldviz.com/</uri>
) and depicted the same artificial stone desert as described in Petzschner and Glasauer (
<xref ref-type="bibr" rid="B27">2011</xref>
), consisting of a textured ground plane, 200 scattered stones that served as pictorial depth cues, and a textured sky (Figure
<xref ref-type="fig" rid="F2">2</xref>
). The orientation of the ground plane texture, the position of the stones, and the starting position of the participant within the VR were randomized in each trial to prevent participants from using landmark cues to calibrate their estimate of displacement. The sky was simulated as a 3D dome centered on the participant’s current position and thus the distance to the horizon was kept constant. In the VR each participant’s eye height was adjusted individually to his/her true eye height. A multi-directional movable joystick (SPEEDLINK) was used to change the position with a constant speed.</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption>
<p>
<bold>Schematic time course of a single trial</bold>
. Subjects had to subsequently produce and reproduce a sample distance in a virtual reality using the joystick to change their position with a constant speed. The final position was indicated via button press. In the IR-C condition each production-reproduction block was preceded by a symbolic cue that declared the upcoming sample displacement to be either “short” or “long.” No symbolic cue was displayed in the IR-NC and BR-NC conditions.</p>
</caption>
<graphic xlink:href="fnint-06-00058-g002"></graphic>
</fig>
</sec>
<sec>
<title>Experimental procedure</title>
<p>Subjects had to estimate traveled distances in a production-reproduction task in three different experimental conditions, “blocked-ranges, no cue” (BR-NC), “interleaved-ranges, no cue” (IR-NC), and “interleaved-ranges, cue” (IR-C). The task remained the same for all three conditions.</p>
<sec>
<title>Task</title>
<p>In each trial subjects were asked to “produce” a certain sample distance, by using a joystick to move forward through the virtual environment on a linear path toward the direction of a visual object at the horizon of the virtual world until they were automatically stopped for 2.25 s. During that time they received an instruction to subsequently “reproduce” the same amount of displacement that they had experienced during the production phase. Throughout the reproduction phase subjects continued moving in the same direction as in the production phase and indicated via button press when they thought they had covered the same distance as in the production phase. In the condition with cues the symbolic cue was presented before the production phase. Figure
<xref ref-type="fig" rid="F2">2</xref>
displays a schematic overview of the time course of events in a single trial. In all trials velocity was kept constant during one movement, but changed randomly by up to ±60% between the production and reproduction phases (scaling factors between joystick output and constant VR velocity were drawn from a normal distribution), thus preventing subjects from solving the task by estimating time.</p>
</sec>
<sec>
<title>Experimental conditions</title>
<p>Each experimental condition consisted of 110 trials. The first 10 trials per condition were training trials and served to familiarize participants with the task and VR. During these 10 trials, feedback on the performance was given after the reproduction phase by asking subjects to navigate toward an object that was displayed at the correct distance in the VR. The following 100 trials were test trials without any feedback. Only test trials were used for data analysis. After 50 trials subjects had a short break of 100 s to relax their hands. During that time the subjects did not leave their position and the room remained dark. Different experimental conditions were separated by a break of at least 15 min outside the experiment room. In all three conditions the overall number of repetitions for each sample distance remained the same; thus the overall distribution of samples was the same for all three conditions. The same trial order within one condition as well as the same order of cues in the cued condition was maintained for all participants. The three experimental conditions were performed in a randomized order.</p>
<sec id="s2">
<title>“Blocked-ranges, no cue” condition</title>
<p>In the BR-NC condition the 100 test distances were drawn in two blocks from two different underlying uniform sample distributions referred to as “short” range ([5, 7, 9, 11, 13] m) and “long” range ([11, 13, 15, 17, 19] m). In the first block of 50 trials the sample distances were randomly drawn from the “short” range distribution; sample distances for the second block of 50 trials were randomly drawn from the “long” range distribution. The two blocks were separated by a short break of 100 s. Within each range each sample distance was repeated 10 times in a randomized order. Note that the 11 and 13-m distances appeared in both the “short” and “long” range distribution, and were thus repeated 20 times in the overall condition. We therefore refer to these displacements as overlapping samples. Subjects received no additional information about the underlying sample distribution (Figure
<xref ref-type="fig" rid="F3">3</xref>
A).</p>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption>
<p>
<bold>Overview of the three experimental conditions</bold>
. Left: time course of one trial in the distance production-reproduction task. Middle: distribution and trial sequence for the blocked and interleaved-ranges. Right: Potential behavioral response.
<bold>(A)</bold>
BR-NC condition: the two sample ranges were tested in a blocked order. In the first half of the trials a range of “short” displacements was tested; in the second half of the condition a range of “long” distances was tested. Both ranges overlapped for two distances (11 and 13 m).
<bold>(B)</bold>
IR-NC condition: The same displacements as in
<bold>(A)</bold>
were tested in the production-reproduction task, but in an interleaved order, resulting in one non-uniform range of randomized sample displacements.
<bold>(C)</bold>
IR-C condition: displacements were tested in the exact same order as in
<bold>(B)</bold>
, but each trial started with a symbolic cue that indicated either a “short” or “long” displacement. No further information was provided. Depending on the influence of the symbolic cue the resulting behavior could range between the extreme cases mimicked in
<bold>(A,B)</bold>
.</p>
</caption>
<graphic xlink:href="fnint-06-00058-g003"></graphic>
</fig>
</sec>
<sec>
<title>“Interleaved-ranges, no cue” condition</title>
<p>In the IR-NC condition the same sample distances of the two distributions were tested as in the BR-NC condition, however in an interleaved order, resulting in one randomized, non-uniform sample distribution [5, 7, 9, 11, 13, 15, 17, 19] m. All samples were repeated 10 times during the overall condition, except the 11 and 13-m distances, which were again repeated 20 times. As above, subjects received no additional information about the underlying sample distribution (Figure
<xref ref-type="fig" rid="F3">3</xref>
B).</p>
</sec>
<sec>
<title>“Interleaved-ranges, cue” condition</title>
<p>In the IR-C condition sample distances were tested in the exact same order as in the IR-NC condition based on one non-uniform sample distribution (Figure
<xref ref-type="fig" rid="F3">3</xref>
C). However, this time subjects were told that there were two different types of samples, referred to as “short” and “long” distances, and that, in order to improve their performance, they would receive a written, symbolic cue that indicated which type the upcoming distance would belong to. No further information on the meaning of “short” and “long” was provided. At the beginning of each trial the sample distance was assigned on the screen to belong to one of the two types (“The next test distance will be short” or “The next test distance will be long”). All distances ranging from 5 to 9 m and one half of the 11 and 13-m distance samples were announced as being “short,” all distances ranging from 15 to 19 m and the other half of the 11 and 13-m distances were announced as being “long.” Thus the symbolic cue was always valid, except for distances 11 and 13 m, where the same distance could either be referred to as “short” or “long.” Consequently, the separation provided by the symbolic cue was comparable to the two temporally separate ranges in the BR-NC condition.</p>
</sec>
</sec>
</sec>
<sec>
<title>Data analysis</title>
<p>Participants’ position and orientation within the VR were sampled at 20 Hz. The reproduced displacement was calculated as the difference between the position at the time of the button press and the produced displacement.</p>
<p>To test for differences in the behavior that are due to the use of the underlying sample range or the written symbolic cue, trials in all three conditions were split into two groups, the ranges “short” and “long.” For the BR-NC condition, where the two distributions were tested consecutively, this was achieved by splitting the trials into two halves (“short”: trials 1–50; “long”: trials 51–100). In both the IR-NC and IR-C condition trials were split according to the symbolic cue (“short” and “long”) given in the IR-C condition. Note that we also split the IR-NC condition in order to provide a direct comparison of the same trials with and without symbolic cue.</p>
<p>Differences in the behavioral data for the two ranges can be easily examined by comparing across those displacements that were tested in both ranges (11 and 13 m). Thus we refer to the comparison of 11 and 13 m between the “short” and “long” range as “overlapping samples comparison.”</p>
<p>Data analysis was conducted in MATLAB R2010b (MathWorks). Statistical differences were assessed using repeated-measures analysis of variance (rm-ANOVA). A probability level of
<italic>p</italic>
 < 0.05 was considered significant for all statistical analysis. To assess differences between conditions and ranges we used rm-ANOVA for the “overlapping samples comparison” with the within-subjects factors
<italic>condition</italic>
(BR-NC, IR-NC, IR-C),
<italic>range</italic>
(“short” vs. “long”) and
<italic>distance</italic>
(two distances, 11 and 13 m). Since the use of the symbolic cue should have an effect not just on the “overlapping samples,” but also on the whole set of presented distances, we tested the difference between conditions by a second rm-ANOVA for the mean reproduction error with the within-subject factors
<italic>condition</italic>
(BR-NC, IR-NC, IR-C) and
<italic>distance</italic>
(10 distances, see
<xref ref-type="sec" rid="s2">“Blocked-Ranges, No Cue” Condition</xref>
).</p>
</sec>
<sec>
<title>Modeling</title>
<p>In our previous study we proposed a model of iterative Bayesian estimation that explained subjects’ performance in a distance production-reproduction task by the incorporation of prior experience into the estimation process (Petzschner and Glasauer,
<xref ref-type="bibr" rid="B27">2011</xref>
). This basic iterative model is applied to explain the data for the two conditions without symbolic cue (BR-NC and IR-NC) in the present work (Figure
<xref ref-type="fig" rid="F1">1</xref>
A). For the symbolic cue condition (IR-C) the model must be extended to incorporate information that is not only driven by prior experience but the symbolic cue itself. Important for such an extension is the interpretation of the symbolic cue. Neither the symbolic cue itself nor the experimental instruction specified (1) the value or range of values in the stimulus dimension it corresponds to, and (2) the proportion of trials in which the symbolic cue is actually valid.</p>
<p>As mentioned in the Introduction, we propose two qualitatively different ideas of how the symbolic cue could be interpreted, how the mapping of the symbolic cue to the stimulus dimension is learned, and how it is finally integrated into the estimation process. The first interpretation, referred to as categorical model, assumes that the symbolic cue
<italic>C</italic>
is an indicator for a category
<italic>A</italic>
that determines the distribution from which the target distance
<italic>T</italic>
, that is the distance to be reproduced, is being drawn (Figure
<xref ref-type="fig" rid="F1">1</xref>
B). This interpretation corresponds largely to the categorical model proposed by Feldman et al. (
<xref ref-type="bibr" rid="B15">2009</xref>
), except that in their model there is no symbolic cue provided to the observer. The second interpretation, referred to as cue-combination model, assumes that the target
<italic>T</italic>
is drawn from one single distribution and the symbolic cue
<italic>C</italic>
provides additional evidence about
<italic>T</italic>
just like a sensory cue from another modality (Figure
<xref ref-type="fig" rid="F1">1</xref>
C). Thus, this second interpretation leads to a multi-modal fusion model in which one sensory input
<italic>S</italic>
, the stimulus measurement, is continuous and the other sensory input
<italic>C</italic>
, the symbolic cue, is discrete.</p>
<p>In the following, the two models are described in detail. Each model has three free parameters, which are explained in the respective section. We first describe the estimation part that fuses sensory measurement, symbolic cue, and prior experience. We then separately describe the update part that implements a discrete Kalman filter as an iterative Bayesian algorithm to update cue-related priors (categorical model) or calibrate likelihoods (cue-combination model).</p>
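<p>For readers unfamiliar with the recursion, a scalar discrete Kalman-filter update can be sketched as follows. This is a generic illustration, not the fitted update equations of either model; the process-noise and observation-noise values are placeholders.</p>

```python
def kalman_step(mu, var, obs, obs_var, process_var):
    """One discrete Kalman-filter update of a scalar state estimate."""
    var_pred = var + process_var            # predict: allow the state to drift
    gain = var_pred / (var_pred + obs_var)  # Kalman gain
    mu_new = mu + gain * (obs - mu)         # correct toward the observation
    var_new = (1.0 - gain) * var_pred
    return mu_new, var_new

# Iterating the recursion drives the estimate toward a stationary observed
# value, with the gain settling at a steady-state level set by the two noises.
mu, var = 0.0, 100.0
for _ in range(50):
    mu, var = kalman_step(mu, var, 10.0, obs_var=1.0, process_var=0.01)
```

<p>Applied trial by trial, the same recursion can track the mean of the prior (categorical model) or recalibrate the mapping of a cue signal (cue-combination model).</p>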
<p>The estimation part of the two models is also illustrated in Figure
<xref ref-type="fig" rid="F4">4</xref>
by displaying how the prior information, the symbolic cue, and the sensory likelihood function are transformed into a posterior distribution, which determines the reproduced distance.</p>
<fig id="F4" position="float">
<label>Figure 4</label>
<caption>
<p>
<bold>Schematic illustration of the Bayesian fusion in the symbolic cue models</bold>
. Only the estimation step is shown, which does not include updating based on information from previous trials.
<bold>(A)</bold>
Categorical model: the category priors are merged after weighting each prior with the conditional probability of the respective category given the sensory input and the symbolic cue. Then the resulting Gaussian mixture distribution (combined prior) is fused with the stimulus measurement to derive the posterior.
<bold>(B)</bold>
Cue-combination model: first the stimulus likelihood and the likelihood corresponding to the current symbolic cue signal are fused. Then this fused signal (cue + stimulus) is combined with the prior, yielding the posterior.</p>
</caption>
<graphic xlink:href="fnint-06-00058-g004"></graphic>
</fig>
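<p>The fusion order sketched in (B) can be written out numerically under the simplifying assumption that the stimulus likelihood, the likelihood of the cue signal, and the prior are all single Gaussians; the means and variances below are illustrative only, not fitted model parameters.</p>

```python
def precision_weighted(means, variances):
    """Fuse independent Gaussian sources: precision-weighted mean and variance."""
    precisions = [1.0 / v for v in variances]
    var = 1.0 / sum(precisions)
    mu = var * sum(p * m for p, m in zip(precisions, means))
    return mu, var

# Step 1: fuse the stimulus likelihood with the current cue-signal likelihood.
mu_sc, var_sc = precision_weighted([14.0, 16.0], [4.0, 4.0])
# Step 2: combine the fused signal (cue + stimulus) with the prior.
mu_post, var_post = precision_weighted([mu_sc, 12.0], [var_sc, 8.0])
```

<p>Because fusion of single Gaussians is associative, the order of these two steps would not matter here; the two models in Figure 4 differ because the categorical model's combined prior is a Gaussian mixture rather than a single Gaussian.</p>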
<p>We use a mathematical notation where we refer to random variables with upper case letters (e.g.,
<italic>A</italic>
,
<italic>S</italic>
,
<italic>T</italic>
,
<italic>C</italic>
), to values for discrete variables such as cue and category with indexed lower case letters (e.g.,
<italic>c
<sub>i</sub>
</italic>
), and to values for continuous variables such as the sensory input with lower case letters (e.g.,
<italic>s</italic>
). Furthermore, we abbreviate notations such as
<italic>P</italic>
(
<italic>T</italic>
,
<italic>A</italic>
 = 
<italic>a
<sub>i</sub>
</italic>
 | 
<italic>S</italic>
,
<italic>C</italic>
) to
<italic>P</italic>
(
<italic>T</italic>
,
<italic>a
<sub>i</sub>
</italic>
 | 
<italic>S</italic>
,
<italic>C</italic>
).</p>
<sec>
<title>Categorical model</title>
<p>The categorical model follows Feldman et al. (
<xref ref-type="bibr" rid="B15">2009</xref>
) for the definition of the distributions. We assume that the target distance
<italic>T</italic>
is drawn from a normally distributed category</p>
<disp-formula id="E1">
<label>(1)</label>
<mml:math id="M28">
<mml:mi>T</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>A</mml:mi>
<mml:mo class="MathClass-rel"></mml:mo>
<mml:mi>N</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfenced>
</mml:math>
</disp-formula>
<p>and that categories
<italic>A</italic>
 = 
<italic>a
<sub>i</sub>
</italic>
have individual means
<inline-formula>
<mml:math id="M1">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-punc">,</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>
but share the same variance
<inline-formula>
<mml:math id="M2">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-punc">.</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>
Our generative model assumes that categories
<italic>a
<sub>i</sub>
</italic>
themselves are drawn uniformly from one of
<italic>n</italic>
possible categories (
<italic>n</italic>
 = 2 in the present experiment, see Figure
<xref ref-type="fig" rid="F4">4</xref>
top left). Due to measurement noise,
<italic>T</italic>
cannot be sensed directly, but only the noisy measurement
<italic>S</italic>
with the conditional Gaussian distribution</p>
<disp-formula id="E2">
<label>(2)</label>
<mml:math id="M29">
<mml:mi>S</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>T</mml:mi>
<mml:mo class="MathClass-rel"></mml:mo>
<mml:mi>N</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-punc">.</mml:mo>
</mml:math>
</disp-formula>
<p>In addition to the direct stimulus measurement
<italic>S</italic>
, participants are presented with a symbolic cue value
<italic>c
<sub>j</sub>
</italic>
, which provides information about the underlying category. Nevertheless, there is some uncertainty associated with the symbolic cue. Accordingly, the cue reliability, that is, the probability of the correct symbolic cue value being presented given a certain category
<italic>a
<sub>j</sub>
</italic>
, is specified as
<italic>p</italic>
<sub>C</sub>
 = 
<italic>P</italic>
(
<italic>c
<sub>j</sub>
</italic>
 | 
<italic>a
<sub>j</sub>
</italic>
) and assumed to be constant over trials. Consequently, the probability of being presented with a wrong symbolic cue out of
<italic>n−</italic>
1 remaining cues, is</p>
<disp-formula id="E3">
<label>(3)</label>
<mml:math id="M30">
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>p</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>C</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfrac>
<mml:mo class="MathClass-punc">.</mml:mo>
</mml:math>
</disp-formula>
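<p>Equations 1–3 define a generative process that can be sampled directly. The sketch below does so for n = 2 categories; the category means, spreads, and cue reliability are illustrative values chosen for this example, not the parameters fitted in the paper.</p>

```python
import random

def generate_trial(mu_a, sigma_a, sigma_s, p_c, rng):
    """Sample one trial of the categorical generative model (n = 2 categories)."""
    cat = rng.randrange(2)                         # category drawn uniformly
    t = rng.gauss(mu_a[cat], sigma_a)              # target distance, Eq. 1
    s = rng.gauss(t, sigma_s)                      # noisy measurement, Eq. 2
    cue = cat if rng.random() < p_c else 1 - cat   # cue valid with reliability p_C
    return cat, t, s, cue

rng = random.Random(0)
trials = [generate_trial([9.0, 15.0], 2.0, 1.5, 0.9, rng) for _ in range(2000)]
# Fraction of trials on which the symbolic cue names the true category.
valid_rate = sum(cat == cue for cat, _, _, cue in trials) / len(trials)
```

<p>With cue reliability p_C = 0.9, roughly 90% of sampled cues signify the chosen category, matching the role of p_C in Eq. 3.</p>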
<p>To reproduce the target distance
<italic>T</italic>
, we are interested in the posterior distribution
<italic>P</italic>
(
<italic>T</italic>
 | 
<italic>S</italic>
,
<italic>C</italic>
). To infer this posterior distribution, we first calculate the probability
<italic>P</italic>
(
<italic>T</italic>
,
<italic>A</italic>
 | 
<italic>S</italic>
,
<italic>C</italic>
), which can be derived by applying Bayes’ law to the complete joint distribution
<italic>P</italic>
(
<italic>T</italic>
,
<italic>A</italic>
,
<italic>S</italic>
,
<italic>C</italic>
), and then marginalize over the category
<italic>A</italic>
:</p>
<disp-formula id="E4">
<label>(4)</label>
<mml:math id="M31">
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>S</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>C</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:munderover accentunder="false" accent="false">
<mml:mrow>
<mml:mo mathsize="big"></mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:munderover>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>S</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>C</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-punc">.</mml:mo>
</mml:math>
</disp-formula>
<p>We show in the Appendix that, with the conditional dependency assumptions for this model (see Figure
<xref ref-type="fig" rid="F1">1</xref>
B), we can rewrite the posterior as</p>
<disp-formula id="E5">
<label>(5)</label>
<mml:math id="M32">
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>S</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>C</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:munderover accentunder="false" accent="false">
<mml:mrow>
<mml:mo mathsize="big"></mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:munderover>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>S</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-bin"></mml:mo>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>S</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>C</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-punc">.</mml:mo>
</mml:math>
</disp-formula>
<p>The category-dependent posteriors
<italic>P</italic>
(
<italic>T</italic>
 | 
<italic>S</italic>
,
<italic>a
<sub>i</sub>
</italic>
), which now are independent of the symbolic cue
<italic>C</italic>
, are weighted by the posterior probabilities
<italic>P</italic>
(
<italic>a
<sub>i</sub>
</italic>
 | 
<italic>S</italic>
,
<italic>C</italic>
) of the categories given stimulus
<italic>S</italic>
and symbolic cue
<italic>C</italic>
.</p>
<p>To infer the target distance, we compute the mean of the posterior
<italic>P</italic>
(
<italic>T</italic>
 | 
<italic>S</italic>
,
<italic>C</italic>
). Analogous to the equation above, the mean of the posterior can be computed as a weighted sum of conditional expectations of the category-dependent posteriors, where the weights are again the posteriors of the categories.</p>
<disp-formula id="E6">
<label>(6)</label>
<mml:math id="M33">
<mml:mi>E</mml:mi>
<mml:mfenced separators="" open="[" close="]">
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>s</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:munderover accentunder="false" accent="false">
<mml:mrow>
<mml:mo mathsize="big"></mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:munderover>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>s</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mi>E</mml:mi>
<mml:mfenced separators="" open="[" close="]">
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>s</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-punc">.</mml:mo>
</mml:math>
</disp-formula>
<p>We show in the Appendix that this can be reformulated as</p>
<disp-formula id="E7">
<label>(7)</label>
<mml:math id="M34">
<mml:mi>E</mml:mi>
<mml:mfenced separators="" open="[" close="]">
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>s</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>m</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
<mml:mi>s</mml:mi>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>m</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:munderover accentunder="false" accent="false">
<mml:mrow>
<mml:mo mathsize="big"></mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:munderover>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>s</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-bin"></mml:mo>
<mml:mspace width="2.77695pt" class="tmspace"></mml:mspace>
<mml:msub>
<mml:mrow>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-punc">.</mml:mo>
</mml:math>
</disp-formula>
<p>That is, a weighted sum of the category means
<inline-formula>
<mml:math id="M3">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>
forms the mean of a Gaussian mixture distribution (see Figure
<xref ref-type="fig" rid="F4">4</xref>
A middle), and this mean is summed with measurement
<italic>s</italic>
weighted by
<italic>w</italic>
<sub>m</sub>
. The measurement weight
<italic>w</italic>
<sub>m</sub>
is determined by the measurement and category variances:</p>
<disp-formula id="E8">
<label>(8)</label>
<mml:math id="M35">
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>m</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mspace width="1em" class="quad"></mml:mspace>
<mml:mspace width="1em" class="quad"></mml:mspace>
<mml:mn>1</mml:mn>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>m</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mo class="MathClass-punc">.</mml:mo>
</mml:math>
</disp-formula>
<p>Thus,
<italic>w</italic>
<sub>m</sub>
is solely determined by the ratio
<inline-formula>
<mml:math id="M4">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin"></mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>
, which is one of the free parameters of the model. In the Appendix we show that the posteriors of the categories can be rewritten to</p>
<disp-formula id="E9">
<label>(9)</label>
<mml:math id="M36">
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>s</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-bin"></mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>α</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:math>
</disp-formula>
<p>and thus depend on cue reliability and a measurement-dependent factor α
<italic>
<sub>i,j</sub>
</italic>
(
<italic>s</italic>
):</p>
<disp-formula id="E10">
<label>(10)</label>
<mml:math id="M37">
<mml:msub>
<mml:mrow>
<mml:mi>α</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>s</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>p</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>C</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>s</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>p</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>C</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfrac>
<mml:munderover accentunder="false" accent="false">
<mml:mrow>
<mml:mo mathsize="big"></mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mo class="MathClass-rel"></mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:munderover>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>s</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfrac>
<mml:mo class="MathClass-punc">.</mml:mo>
</mml:math>
</disp-formula>
<p>Here we exploit the specific form of the cue reliability and assume the categories to be uniformly distributed. The marginalization over
<italic>T</italic>
results in a normal distribution
<italic>P</italic>
(
<italic>S</italic>
 | 
<italic>A</italic>
) with</p>
<disp-formula id="E11">
<label>(11)</label>
<mml:math id="M38">
<mml:mi>S</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>A</mml:mi>
<mml:mo class="MathClass-rel"></mml:mo>
<mml:mi>N</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-punc">.</mml:mo>
</mml:math>
</disp-formula>
<p>Applying the assumption for the cue reliability to the posterior expectation, we finally have</p>
<disp-formula id="E12">
<mml:math id="M39">
<mml:mtable class="eqnarray" columnalign="left">
<mml:mtr>
<mml:mtd class="eqnarray-1">
<mml:mi>E</mml:mi>
<mml:mfenced separators="" open="[" close="]">
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>s</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
<mml:mtd class="eqnarray-2">
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>m</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
<mml:mi>s</mml:mi>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>m</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
<mml:mtd class="eqnarray-3"></mml:mtd>
<mml:mtd class="eqnarray-4">
<mml:mtext class="eqnarray"></mml:mtext>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd class="eqnarray-1"></mml:mtd>
<mml:mtd class="eqnarray-2">
<mml:mspace width="1em" class="quad"></mml:mspace>
<mml:mo class="MathClass-bin">×</mml:mo>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>p</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>C</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-bin"></mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>α</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-bin"></mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>p</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>C</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfrac>
<mml:munderover accentunder="false" accent="false">
<mml:mrow>
<mml:mo mathsize="big"></mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo class="MathClass-rel"></mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:munderover>
<mml:msub>
<mml:mrow>
<mml:mi>α</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-bin"></mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-punc">.</mml:mo>
</mml:mtd>
<mml:mtd class="eqnarray-3"></mml:mtd>
<mml:mtd class="eqnarray-4">
<mml:mtext class="eqnarray">(12)</mml:mtext>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
<p>The term within the large brackets is composed of the mean of the correct category weighted by the cue reliability and the weighted sum of all other category means.</p>
<p>The effect of this weighting is to select the correct category (for a cue reliability parameter
<italic>p</italic>
<sub>C</sub>
close to 1) or to suppress it (for
<italic>p</italic>
<sub>C</sub>
close to 0); the latter would correspond to a deliberately misleading symbolic cue. Furthermore, the influence of the symbolic cue is balanced by the probability of the measurement depending on the category, which appears in α
<italic>
<sub>i,j</sub>
</italic>
(
<italic>s</italic>
).</p>
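<p>As a concrete illustration, the closed-form estimate of Eqs 8, 10, and 12 can be sketched in a few lines of Python. This is our own minimal sketch, not the authors' analysis code; variable names mirror the symbols in the text.</p>

```python
import math

def norm_pdf(x, mu, var):
    """Gaussian density N(x; mu, var)."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def categorical_estimate(s, j, mu_a, var_a, var_s, p_c):
    """Posterior expectation E[T | s, c_j] of the categorical model (Eq. 12).

    s     : sensory measurement
    j     : index of the cued category
    mu_a  : list of category means (n >= 2)
    var_a : category variance sigma_A^2
    var_s : measurement variance sigma_S^2
    p_c   : cue reliability P(c_j | a_j); the remaining probability mass
            is spread evenly over the other categories
    """
    n = len(mu_a)
    w_m = var_a / (var_a + var_s)                               # Eq. 8
    # Marginal likelihoods P(s | a_i) ~ N(mu_a[i], var_s + var_a), Eq. 11
    lik = [norm_pdf(s, mu, var_s + var_a) for mu in mu_a]
    # Measurement-dependent factors alpha_{i,j}(s), Eq. 10
    denom = p_c * lik[j] + (1 - p_c) / (n - 1) * sum(
        lik[k] for k in range(n) if k != j)
    alpha = [l / denom for l in lik]
    # Cue-weighted combination of category means, Eq. 12
    cat_mean = p_c * alpha[j] * mu_a[j] + (1 - p_c) / (n - 1) * sum(
        alpha[i] * mu_a[i] for i in range(n) if i != j)
    return w_m * s + (1 - w_m) * cat_mean
```

<p>For a fully reliable cue (<italic>p</italic><sub>C</sub> = 1) the estimate reduces to shrinking the measurement toward the cued category mean, while <italic>p</italic><sub>C</sub> = 1/<italic>n</italic> makes the estimate independent of which category was cued, as in the uninformative-cue case.</p>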
<p>In Feldman et al. (
<xref ref-type="bibr" rid="B15">2009</xref>
), the symbolic cue indicating the category is not provided, which corresponds to an uninformative symbolic cue. We can reflect this in our model by setting
<italic>P</italic>
(
<italic>c
<sub>j</sub>
</italic>
 | 
<italic>a
<sub>i</sub>
</italic>
) = 1/
<italic>n</italic>
for any
<italic>i</italic>
,
<italic>j</italic>
. We show in the Appendix that this indeed removes the dependency of the category posterior on the symbolic cue, yielding</p>
<disp-formula id="E13">
<label>(13)</label>
<mml:math id="M40">
<mml:mi>E</mml:mi>
<mml:mfenced separators="" open="[" close="]">
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>m</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
<mml:mi>s</mml:mi>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>m</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:munderover accentunder="false" accent="false">
<mml:mrow>
<mml:mo mathsize="big"></mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:munderover>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-bin"></mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-punc">.</mml:mo>
</mml:math>
</disp-formula>
<p>This corresponds to Eqs
<xref ref-type="disp-formula" rid="E10">10</xref>
and
<xref ref-type="disp-formula" rid="E11">11</xref>
in Feldman et al. (
<xref ref-type="bibr" rid="B15">2009</xref>
) for equal category variance.</p>
<p>The posterior of
<italic>T</italic>
is a Gaussian mixture distribution, whose mean is not necessarily equal to its mode. However, the Gaussian measurement likelihood typically dominates the posterior, because its variance is small compared to the combined variance of the prior distributions corresponding to the categories. This yields a nearly Gaussian posterior, as illustrated in Figure
<xref ref-type="fig" rid="F4">4</xref>
.</p>
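<p>The near-Gaussian shape of the posterior can be checked numerically. The sketch below (our own illustration with assumed parameter values) evaluates a Gaussian measurement likelihood times a two-component Gaussian mixture prior on a grid, standing in for Eq. 5, and compares the resulting mean and mode.</p>

```python
import math

def norm_pdf(x, mu, var):
    """Gaussian density N(x; mu, var)."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# Assumed illustrative values: two categories, measurement noise much
# smaller than the category variance.
mu_a = [10.0, 20.0]          # category means
var_a = 9.0                  # category variance sigma_A^2
var_s = 1.0                  # measurement variance sigma_S^2
weights = [0.8, 0.2]         # fixed stand-ins for the category weights
s = 12.0                     # current measurement

grid = [i * 0.01 for i in range(3001)]                 # candidate targets T
post = [norm_pdf(s, t, var_s) *
        sum(w * norm_pdf(t, mu, var_a) for w, mu in zip(weights, mu_a))
        for t in grid]
z = sum(post)
mean = sum(t * p for t, p in zip(grid, post)) / z      # posterior mean
mode = grid[post.index(max(post))]                     # posterior mode
# With var_s much smaller than var_a the mixture posterior is nearly
# Gaussian, so mean and mode almost coincide.
```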
</sec>
<sec>
<title>Cue-combination model</title>
<p>Instead of assuming that the symbolic cue signifies a category of sensory stimuli, it can also be conceived as providing additional information about the location of the stimulus in the sensory dimension. Under this assumption, the target distance
<italic>T</italic>
is drawn from a single distribution</p>
<disp-formula id="E14">
<label>(14)</label>
<mml:math id="M41">
<mml:mi>T</mml:mi>
<mml:mo class="MathClass-rel"></mml:mo>
<mml:mi>N</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfenced>
</mml:math>
</disp-formula>
<p>with the stimulus
<italic>S</italic>
being a noisy reading of
<italic>T</italic>
</p>
<disp-formula id="E15">
<label>(15)</label>
<mml:math id="M42">
<mml:mi>S</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>T</mml:mi>
<mml:mo class="MathClass-rel"></mml:mo>
<mml:mi>N</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-punc">.</mml:mo>
</mml:math>
</disp-formula>
<p>The intuition behind the cue-combination model is that the same mechanism of multi-modal sensory fusion (e.g., Ernst and Banks,
<xref ref-type="bibr" rid="B11">2002</xref>
), which the brain might use to combine different sensory modalities, is used to merge sensory and symbolic information. From the observer's point of view, this requires an inference mechanism that maps the symbolic cue
<italic>C</italic>
to a continuous cue signal
<italic>C</italic>
<sub>mp</sub>
. We call this signal the mapped cue. This signal is then merged with the sensory signal
<italic>S</italic>
and prior
<italic>T</italic>
in the usual Bayesian fashion. From a generative point of view, this inference inverts the causal relationships assumed for the outside world (see Figure
<xref ref-type="fig" rid="F1">1</xref>
C). In particular,
<italic>C</italic>
<sub>mp</sub>
is discretized by a step function to yield
<italic>C</italic>
. Our update mechanism, described further below, learns to map each cue value
<italic>c
<sub>i</sub>
</italic>
to a cue signal value
<italic>c</italic>
<sub>mp</sub>
that falls into the corresponding range. This corresponds to learning the thresholds of the step function. Because this mapping is deterministic, the cue signal becomes a known quantity, similar to actual observations. We can therefore derive the estimation step using
<italic>C</italic>
<sub>mp</sub>
only, leaving out
<italic>C</italic>
.</p>
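<p>The step function that maps the continuous cue signal to the discrete symbolic cue can be sketched as follows; the function and the threshold values are hypothetical and serve only to illustrate the generative direction:</p>

```python
def discretize(c_mp, thresholds):
    """Generative step function: map the continuous cue signal c_mp to a
    discrete symbolic cue index, given sorted threshold values."""
    for i, threshold in enumerate(thresholds):
        if c_mp < threshold:
            return i
    return len(thresholds)

# Illustrative threshold separating, e.g., a "short" (0) from a "long" (1) cue.
cue = discretize(12.0, [15.0])
```

<p>Learning the mapping then amounts to adjusting the thresholds, or equivalently the signal value assigned to each cue value, from experience.</p>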
<p>The cue signal
<italic>C</italic>
<sub>mp</sub>
has a likelihood function that corresponds to the average location and dispersion associated with the symbolic cue (see Figure
<xref ref-type="fig" rid="F4">4</xref>
B)</p>
<disp-formula id="E16">
<label>(16)</label>
<mml:math id="M43">
<mml:msub>
<mml:mrow>
<mml:mi>C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>mp</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>T</mml:mi>
<mml:mo class="MathClass-rel"></mml:mo>
<mml:mi>N</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>C</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>C</mml:mtext>
</mml:mstyle>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-punc">.</mml:mo>
</mml:math>
</disp-formula>
<p>Note that
<italic>C</italic>
<sub>mp</sub>
depends on
<italic>T</italic>
in a more complex way than
<italic>S</italic>
, reflected by the non-linear mapping μ
<sub>C</sub>
(
<italic>T</italic>
). We treat the cue signal
<italic>C</italic>
<sub>mp</sub>
the same way as the observation
<italic>S</italic>
. The mapping of the symbolic cue to the cue signal depends on the value of
<italic>C</italic>
and is updated iteratively. This updating can be understood as learning or calibration of the symbolic cue values (see
<xref ref-type="sec" rid="s3">Iterative update</xref>
).</p>
<p>The optimal estimate of the target distance
<italic>T</italic>
is provided by a sensory fusion of the stimulus, the cue signal, and the prior</p>
<disp-formula id="E17">
<label>(17)</label>
<mml:math id="M44">
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>S</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>mp</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-rel"></mml:mo>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-bin"></mml:mo>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>mp</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-bin"></mml:mo>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-punc">.</mml:mo>
</mml:math>
</disp-formula>
<p>With
<italic>w</italic>
<sub>m</sub>
as weight for the measurement
<italic>s</italic>
and
<italic>w</italic>
<sub>fu</sub>
as weight for the fused signal composed of mapped cue
<italic>c</italic>
<sub>mp</sub>
and measurement
<italic>s</italic>
, the mean of the posterior is computed as follows:</p>
<disp-formula id="E18">
<label>(18)</label>
<mml:math id="M45">
<mml:mi>E</mml:mi>
<mml:mfenced separators="" open="[" close="]">
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>s</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>mp</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>fu</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-bin"></mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>fu</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-bin"></mml:mo>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>m</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-bin"></mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>mp</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>m</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-bin"></mml:mo>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-punc">.</mml:mo>
</mml:math>
</disp-formula>
<p>The weights
<italic>w</italic>
<sub>fu</sub>
and
<italic>w</italic>
<sub>m</sub>
result from the variances of target, stimulus, and symbolic cue:</p>
<disp-formula id="E19">
<label>(19)</label>
<mml:math id="M46">
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>fu</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mspace width="1em" class="quad"></mml:mspace>
<mml:mspace width="1em" class="quad"></mml:mspace>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>m</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mo class="MathClass-punc">.</mml:mo>
</mml:math>
</disp-formula>
<p>The combined variance
<inline-formula>
<mml:math id="M78">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>CS</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>
of symbolic cue and stimulus is</p>
<disp-formula id="E20">
<label>(20)</label>
<mml:math id="M47">
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mo class="MathClass-punc">.</mml:mo>
</mml:math>
</disp-formula>
<p>Note that in the indices we wrote
<italic>C</italic>
instead of
<italic>C</italic>
<sub>mp</sub>
for brevity. A more detailed derivation of the expectation of the posterior is provided in the Appendix. In short, since prior, combined likelihood, and their product are Gaussians, the mean of the posterior is given by a weighted sum of prior mean and the weighted sum of mapped cue and measurement (see Figure
<xref ref-type="fig" rid="F4">4</xref>
right).</p>
</sec>
<sec id="s3">
<title>Iterative update</title>
<p>Prior experience as well as the cue mapping are not available at the start of the experiment but need to be acquired and updated over the course of the trials. Such trial-by-trial updating can be achieved by a discrete Kalman filter that updates the internal states at each time step. In our case, the states correspond to the means of the two categories for the categorical model, to the means of the two symbolic cue likelihoods for the cue-combination model, and to the distance prior for the previously published basic iterative model (see Petzschner and Glasauer,
<xref ref-type="bibr" rid="B27">2011</xref>
).</p>
<p>In both models, the symbolic cue is used to decide which category mean will be updated or which symbolic cue likelihood will be learned. The updating of the category means is an extension of our
<italic>basic iterative model</italic>
from one single category to multiple categories (see also Feldman et al.,
<xref ref-type="bibr" rid="B15">2009</xref>
). The iterative updating of the mean of the symbolic cue likelihood can be interpreted as learning the non-linear mapping of the symbolic cue onto the stimulus dimension, or as a calibration of the symbolic cue in terms of distance.</p>
<p>For Gaussian noise and linear dynamics, the Kalman filter yields an optimal estimate of the current state, based on the current observation and on the estimate from the previous time step, taking into account the deterministic temporal evolution of the state. The state
<italic>x</italic>
to be updated and the current measurement
<italic>y</italic>
at trial
<italic>i</italic>
are described by the system equations</p>
<disp-formula id="E21">
<label>(21)</label>
<mml:math id="M48">
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>q</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:msub>
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-punc">.</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
<p>The random variables
<italic>n
<sub>q</sub>
</italic>
and
<italic>n
<sub>r</sub>
</italic>
represent the process and measurement noise, which are assumed to be independent with Gaussian probability distributions
<italic>P</italic>
(
<italic>n
<sub>q</sub>
</italic>
) ∼ 
<italic>N</italic>
(0,
<italic>q</italic>
) and
<italic>P</italic>
(
<italic>n
<sub>r</sub>
</italic>
) ∼ 
<italic>N</italic>
(0,
<italic>r</italic>
). The temporal evolution of the state
<italic>x</italic>
defined by these equations can be seen as a random walk governed by the process noise. The measurement
<italic>y</italic>
is a noisy version of
<italic>x</italic>
.</p>
<p>For such a simple system, it can be shown that the difference equation system of the Kalman filter reduces to</p>
<disp-formula id="E22">
<label>(22)</label>
<mml:math id="M49">
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:msub>
<mml:mrow>
<mml:mi>k</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>p</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:mi>q</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>p</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:mi>q</mml:mi>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:mfrac>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:msub>
<mml:mrow>
<mml:mi>p</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>k</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-bin">⋅</mml:mo>
<mml:mi>r</mml:mi>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mo class="MathClass-op">^</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>k</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-bin">⋅</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mo class="MathClass-op">^</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>k</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-bin">⋅</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
<p>with
<italic>k
<sub>i</sub>
</italic>
being the Kalman gain,
<inline-formula>
<mml:math id="M5">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mo class="MathClass-op">^</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mstyle class="text">
<mml:mtext> and </mml:mtext>
</mml:mstyle>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mo class="MathClass-op">^</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>
being the
<italic>a priori</italic>
and
<italic>a posteriori</italic>
estimate of the state (e.g., a category mean) at trial
<italic>i</italic>
, and
<italic>p</italic>
<sub>
<italic>i</italic>
−1</sub>
the corresponding variance of that quantity. Note that it is evident from this equation that the Kalman gain
<italic>k
<sub>i</sub>
</italic>
can be interpreted as the weight of the measurement, which depends on the measurement noise and on the assumed random change of the estimated quantity, such as a category mean. The new estimate is thus a weighted sum of the previous estimate and the current measurement.</p>
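<p>The scalar Kalman filter of Eq. 22 can be sketched directly. The following is a minimal illustration, assuming arbitrary values for the noise parameters <italic>q</italic> and <italic>r</italic> and for the measurement sequence, not the fitted model:</p>

```python
# Minimal sketch of the scalar Kalman filter in Eq. 22; the parameter
# values (q, r) and the measurements are illustrative assumptions.
def kalman_step(x_prev, p_prev, y, q, r):
    """One trial: gain, a posteriori estimate, and its variance."""
    k = (p_prev + q) / (p_prev + q + r)   # Kalman gain (Eq. 22, first line)
    x = (1 - k) * x_prev + k * y          # weighted sum of estimate and datum
    p = k * r                             # updated variance (Eq. 22, second line)
    return x, p

x_hat, p = 0.0, 1.0                       # initialized with a broad variance
for y in [1.2, 0.9, 1.1, 1.0]:            # noisy observations of the state
    x_hat, p = kalman_step(x_hat, p, y, q=0.01, r=0.25)
```

<p>With each trial the variance <italic>p</italic> shrinks toward a steady-state value, so the weight given to new measurements settles at a level determined by the ratio <italic>q</italic>/<italic>r</italic>.</p>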
<p>The update for the categorical model employs a Kalman filter for each category mean to be estimated, yielding equations indexed by
<italic>j</italic>
:</p>
<disp-formula id="E23">
<label>(23)</label>
<mml:math id="M50">
<mml:msub>
<mml:mrow>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>k</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-bin">⋅</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>k</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">⋅</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>s</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-punc">.</mml:mo>
</mml:math>
</disp-formula>
<p>For two categories we consequently have two Kalman filters, one for each category mean. The variances
<inline-formula>
<mml:math id="M6">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>
and
<inline-formula>
<mml:math id="M7">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>
correspond to quantities
<italic>p
<sub>i</sub>
</italic>
and
<italic>r</italic>
, respectively. Note that the ratio of the two variances only depends on the ratio
<italic>q</italic>
/
<italic>r</italic>
, which is one of the free model parameters.</p>
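<p>The categorical update of Eq. 23 runs one such filter per category, with the symbolic cue selecting which mean is updated. A minimal sketch, with illustrative starting values, stimuli, and noise parameters:</p>

```python
# Sketch of the per-category update in Eq. 23: two scalar Kalman filters,
# one per category mean, routed by the symbolic cue. All numbers are
# illustrative assumptions, not fitted values.
def update_category(mu, p, s, q, r):
    k = (p + q) / (p + q + r)             # category-specific Kalman gain
    return (1 - k) * mu + k * s, k * r

means = {"short": 2.0, "long": 6.0}       # initial category means (log-scale)
variances = {"short": 1.0, "long": 1.0}
for cue, s in [("short", 2.4), ("long", 6.5), ("short", 2.1)]:
    means[cue], variances[cue] = update_category(
        means[cue], variances[cue], s, q=0.05, r=0.5)
```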
<p>The cue-combination model uses three Kalman filters to calibrate the two symbolic cue likelihoods and to update the prior for the target distance
<italic>T</italic>
using the same general form of update equations as described above.</p>
<disp-formula id="E24">
<mml:math id="M51">
<mml:mtable class="eqnarray" columnalign="left">
<mml:mtr>
<mml:mtd class="eqnarray-1">
<mml:msubsup>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>mp</mml:mtext>
</mml:mstyle>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msubsup>
</mml:mtd>
<mml:mtd class="eqnarray-2">
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>k</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-bin">⋅</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>mp</mml:mtext>
</mml:mstyle>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>k</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">⋅</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>s</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mtd>
<mml:mtd class="eqnarray-3"></mml:mtd>
<mml:mtd class="eqnarray-4">
<mml:mtext class="eqnarray">(24)</mml:mtext>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd class="eqnarray-1">
<mml:msub>
<mml:mrow>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mtd>
<mml:mtd class="eqnarray-2">
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>k</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-bin">⋅</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>k</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">⋅</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>s</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-punc">.</mml:mo>
</mml:mtd>
<mml:mtd class="eqnarray-3"></mml:mtd>
<mml:mtd class="eqnarray-4">
<mml:mtext class="eqnarray">(25)</mml:mtext>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
<p>The calibration of the symbolic cue likelihoods yields the mapped cues used in the estimation.</p>
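<p>The updates in Eqs. 24 and 25 can be sketched jointly: on each trial the same measurement updates both the cue likelihood mean of the cued label and the global distance prior. All numerical values below are illustrative assumptions:</p>

```python
# Sketch of the cue-combination updates (Eqs. 24-25): the measurement s_i
# calibrates the mapped cue c_mp for the cued label j and updates the
# distance prior mu_T. Starting values and noise parameters are illustrative.
def kf(x, p, s, q, r):
    k = (p + q) / (p + q + r)
    return (1 - k) * x + k * s, k * r

cue_means = {"short": (2.0, 1.0), "long": (6.0, 1.0)}  # (c_mp, variance)
prior = (4.0, 1.0)                                      # (mu_T, variance)
for cue, s in [("short", 2.3), ("long", 6.2)]:
    cue_means[cue] = kf(*cue_means[cue], s, q=0.05, r=0.5)  # Eq. 24
    prior = kf(*prior, s, q=0.05, r=0.5)                    # Eq. 25
```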
</sec>
<sec>
<title>Logarithmic stimulus representation</title>
<p>There is some indication that magnitudes are internally represented in the brain on a log-scale (Fechner,
<xref ref-type="bibr" rid="B14">1860</xref>
; Dehaene,
<xref ref-type="bibr" rid="B8">2003</xref>
; Jürgens and Becker,
<xref ref-type="bibr" rid="B20">2006</xref>
; Stocker and Simoncelli,
<xref ref-type="bibr" rid="B29">2006</xref>
; Durgin et al.,
<xref ref-type="bibr" rid="B9">2009</xref>
). In Petzschner and Glasauer (
<xref ref-type="bibr" rid="B27">2011</xref>
) we showed that defining a Bayes-optimal observer on log-scales leads to an elegant combination of Stevens&#8217; power law with the Weber&#8211;Fechner law (Fechner,
<xref ref-type="bibr" rid="B14">1860</xref>
; Stevens,
<xref ref-type="bibr" rid="B28">1961</xref>
). The estimates in our models are again computed from simplified logarithmic representations of the presented stimuli. Tied to this representation is an additional parameter that captures different optimal decision strategies in subjects. We briefly recap the idea here and refer to Petzschner and Glasauer (
<xref ref-type="bibr" rid="B27">2011</xref>
) for a detailed treatment. The logarithmic representation is given as</p>
<disp-formula id="E25">
<label>(26)</label>
<mml:math id="M52">
<mml:mi>s</mml:mi>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mi>l</mml:mi>
<mml:mi>n</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>d</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>m</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>d</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>m</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-punc">.</mml:mo>
</mml:math>
</disp-formula>
<p>The internal representation of the measurement
<italic>s</italic>
is computed as the natural logarithm of the measurement on linear scales,
<italic>d</italic>
<sub>m</sub>
. In the present work,
<italic>d</italic>
<sub>m</sub>
is given in virtual meters. To achieve a unit-less representation,
<italic>d</italic>
<sub>m</sub>
is normalized with the small constant
<italic>d</italic>
<sub>0</sub>
 ≪ 1. The random variable
<italic>n</italic>
<sub>m</sub>
represents the normally distributed measurement noise
<inline-formula>
<mml:math id="M8">
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mo class="MathClass-open">(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo class="MathClass-close">)</mml:mo>
</mml:mrow>
<mml:mo class="MathClass-rel">∼</mml:mo>
<mml:mi>N</mml:mi>
<mml:mrow>
<mml:mo class="MathClass-open">(</mml:mo>
<mml:mrow>
<mml:mn>0</mml:mn>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mo class="MathClass-close">)</mml:mo>
</mml:mrow>
<mml:mo class="MathClass-punc">.</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>
</p>
<p>The estimate
<italic>x</italic>
<sub>est</sub>
, corresponding to
<italic>E</italic>
[
<italic>T</italic>
 | 
<italic>s</italic>
,
<italic>c
<sub>j</sub>
</italic>
] for the categorical model and
<italic>E</italic>
[
<italic>T</italic>
 | 
<italic>s</italic>
,
<italic>c</italic>
<sub>mp</sub>
] for the cue-combination model, is a log-scale value. It is transformed back to a linear scale with</p>
<disp-formula id="E26">
<label>(27)</label>
<mml:math id="M53">
<mml:msub>
<mml:mrow>
<mml:mi>d</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>e</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>est</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:mi>Δ</mml:mi>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mo class="MathClass-bin">⋅</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>d</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-punc">.</mml:mo>
</mml:math>
</disp-formula>
<p>The result is the linear scale reproduction
<italic>d
<sub>r</sub>
</italic>
in virtual meters. We assume here that, apart from this transformation and possibly additional noise, the reproduction in subjects corresponds to the estimate.</p>
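<p>The encoding of Eq. 26 and the back-transformation of Eq. 27 form a simple round trip; the value of <italic>d</italic><sub>0</sub> and the noiseless, shift-free setting below are illustrative assumptions:</p>

```python
import math

# Sketch of the log-scale representation (Eq. 26) and its inverse (Eq. 27).
# d0 and the noiseless setting (n_m = 0, dx = 0) are illustrative.
d0 = 0.01                                  # small normalization constant, d0 << 1

def to_internal(d_m, n_m=0.0):
    return math.log(d_m / d0) + n_m        # s = ln(d_m / d0) + n_m

def to_reproduction(x_est, dx=0.0):
    return math.exp(x_est + dx) * d0       # d_r = e^(x_est + dx) * d0

d_r = to_reproduction(to_internal(5.0))    # recovers 5.0 virtual meters
```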
<p>The value Δ
<italic>x</italic>
accounts for different decision strategies of the subjects. A decision strategy collapses the posterior distribution into a single value, the estimate, which is optimal in the sense that it minimizes the expected loss due to the deviation from the real value (the real distance in our case). Typical decision strategies use the mean, median, or mode of a distribution as optimal (loss-minimal) estimate, which correspond to three typical loss functions (Körding and Wolpert,
<xref ref-type="bibr" rid="B22">2004</xref>
). While these values are equal for normal distributions, they are different in our case, since the normal distribution transfers into a log-normal distribution after back-transformation. For the log-normal distribution mean, median, and mode differ by a linear shift of
<italic>x</italic>
<sub>est</sub>
. Therefore, by introducing an additional parameter Δ
<italic>x</italic>
in our models, we account for different types of loss functions. We call this parameter the shift term.</p>
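<p>Numerically, for a log-normal back-transformed posterior with log-scale mean μ and variance σ², the three classical estimates are obtained from the same <italic>x</italic><sub>est</sub> by shifts of −σ², 0, and +σ²/2, which is what the shift term captures. The values below are illustrative:</p>

```python
import math

# Sketch of the shift term: mode, median, and mean of a log-normal
# distribution differ only by a linear shift on the log scale.
mu, s2, d0 = 2.0, 0.25, 1.0    # illustrative log-scale mean, variance, scale

mode   = math.exp(mu - s2) * d0        # dx = -s2   (0/1 loss)
median = math.exp(mu) * d0             # dx = 0     (absolute-error loss)
mean   = math.exp(mu + s2 / 2) * d0    # dx = +s2/2 (squared-error loss)
```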
</sec>
<sec>
<title>Model fit</title>
<p>To analyze how well our models explain the experimental results, we fitted their free parameters such that the difference between model output and subject responses was minimized.</p>
<p>The free parameters in the categorical model are the cue reliability
<italic>p</italic>
<sub>C</sub>
, the ratio
<inline-formula>
<mml:math id="M9">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">∕</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>
of the noise in the target distances and the measurement noise, and the shift term Δ
<italic>x</italic>
reflecting the loss function of the Bayesian estimator. The ratio
<inline-formula>
<mml:math id="M10">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">∕</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>
determines the weight of the measurement
<italic>w</italic>
<sub>m</sub>
relative to the category priors. This weighting scheme reflects that subjects put more weight on whichever quantity has the smaller variance.</p>
<p>The free parameters of the cue-combination model are the shift term Δ
<italic>x</italic>
and two ratios. The first is the ratio of target distance noise to the combined noise in measurement and continuous cue signal,
<inline-formula>
<mml:math id="M11">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">∕</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-punc">.</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>
The second is the ratio of the noise in the cue signal to the measurement noise,
<inline-formula>
<mml:math id="M12">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">∕</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-punc">.</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>
Analogous to the categorical model, these ratios determine the relative weights
<italic>w</italic>
<sub>fu</sub>
and
<italic>w</italic>
<sub>m</sub>
, respectively. The first is the weight of the combined measurement and cue signal relative to the prior; the second is the measurement weight relative to the cue signal.</p>
<p>The basic iterative model has two free parameters, the shift term Δ
<italic>x</italic>
and the ratio of target distance noise to measurement noise,
<inline-formula>
<mml:math id="M13">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">∕</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-punc">.</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>
This ratio determines the measurement weight
<italic>w</italic>
<sub>m</sub>
of this model.</p>
<p>For the IR-C condition we fitted the category and cue-combination models to the responses of each single subject. That is, for each subject two sets of parameters were generated, corresponding to the two models. For the other two conditions, our models reduce to the iterative Bayesian estimation model (Petzschner and Glasauer,
<xref ref-type="bibr" rid="B27">2011</xref>
), which we fitted in these cases. All models were fitted by minimizing the squared differences of model output and subject response in each trial using the Matlab function
<italic>lsqnonlin</italic>
.</p>
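<p>In outline, the fitting minimizes the per-trial squared difference between model output and responses. The sketch below substitutes a coarse grid search for <italic>lsqnonlin</italic> and uses a simplified stand-in for the model output together with invented data, so it illustrates the procedure rather than reproducing it:</p>

```python
import numpy as np

# Sketch of the fitting procedure: choose parameters minimizing the summed
# squared difference between model output and responses. The model here
# (running-mean prior combined with the measurement on log scales) and the
# data are illustrative stand-ins, not the paper's full models.
def model_output(w_m, dx, stimuli):
    log_s = np.log(stimuli)
    prior = np.cumsum(log_s) / np.arange(1, len(stimuli) + 1)
    return np.exp((1 - w_m) * prior + w_m * log_s + dx)

stimuli = np.array([2.0, 3.0, 2.5, 4.0, 3.5])
responses = np.array([2.1, 2.9, 2.6, 3.6, 3.3])

sse, w_m, dx = min(
    (float(np.sum((model_output(w, d, stimuli) - responses) ** 2)), w, d)
    for w in np.linspace(0.0, 1.0, 21)
    for d in np.linspace(-0.2, 0.2, 21)
)
```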
<p>The exact order of sample displacements over all trials in one condition was used as input to the models. The Kalman filters in the models were initialized with the first observation, that is, the first distance produced by the subject in the given condition.</p>
<p>To assess the precision of the fitted parameters, we estimated 95% confidence intervals for all parameters, determined from the Jacobian of the parameter surface at the minimum using the Matlab function
<italic>nlparci</italic>
.</p>
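<p>The interval computation behind <italic>nlparci</italic> can be sketched with the usual linearization: the parameter covariance is approximated from the Jacobian of the residuals at the minimum. The Jacobian, residuals, and parameter values below are illustrative, and a normal quantile stands in for the t-quantile:</p>

```python
import numpy as np

# Sketch of 95% confidence intervals from the Jacobian J of the residuals
# at the minimum (a numpy analogue of Matlab's nlparci). Inputs illustrative.
def ci95(params, residuals, J):
    n, k = J.shape
    s2 = residuals @ residuals / (n - k)     # residual variance estimate
    cov = s2 * np.linalg.inv(J.T @ J)        # linearized parameter covariance
    half = 1.96 * np.sqrt(np.diag(cov))      # normal approximation of the t-quantile
    return np.column_stack([params - half, params + half])

J = np.array([[1.0, 0.5], [1.0, 1.0], [1.0, 1.5], [1.0, 2.0]])
res = np.array([0.1, -0.05, 0.02, -0.07])
bounds = ci95(np.array([0.3, -0.04]), res, J)
```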
</sec>
<sec>
<title>Model comparison</title>
<p>We compared the models’ goodness of fit by comparing their coefficients of determination
<italic>R</italic>
<sup>2</sup>
. The coefficient of determination assesses the proportion of variability in the mean data that is accounted for by the respective model. To test for a significant difference in the
<italic>R</italic>
<sup>2</sup>
of the two model fits across subjects we used the non-parametric Wilcoxon signed rank test (Matlab procedure
<italic>signrank</italic>
).</p>
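<p>The coefficient of determination is one minus the ratio of residual to total variance; the data below are illustrative:</p>

```python
import numpy as np

# Sketch of R^2: proportion of response variability accounted for by the
# model predictions. Responses and predictions are illustrative.
def r_squared(responses, predictions):
    ss_res = np.sum((responses - predictions) ** 2)
    ss_tot = np.sum((responses - np.mean(responses)) ** 2)
    return 1.0 - ss_res / ss_tot

responses = np.array([2.1, 2.9, 2.6, 3.6, 3.3])
predictions = np.array([2.2, 2.8, 2.7, 3.5, 3.2])
r2 = r_squared(responses, predictions)
```

<p>The per-subject <italic>R</italic><sup>2</sup> values of the two models would then be compared with a paired non-parametric test such as the Wilcoxon signed rank test.</p>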
</sec>
</sec>
</sec>
<sec>
<title>Results</title>
<sec>
<title>Behavioral data</title>
<p>In order to test the effect of an additional symbolic cue on the estimation of distances, we used three experimental conditions. One condition tested the cue influence directly (IR-C condition), while the other two served as reference conditions for the extreme cases of the cue effect, i.e., ignoring the cue (IR-NC) or using the symbolic cue as a perfectly reliable indicator of the stimulus range (BR-NC). The average results of all three conditions are presented in Figure
<xref ref-type="fig" rid="F5">5</xref>
(left side).</p>
<fig id="F5" position="float">
<label>Figure 5</label>
<caption>
<p>
<bold>Group mean of all subjects (left) and respective model predictions (right)</bold>
. The group mean corresponds to the mean taken over all subjects for the whole trial sequence. Models were accordingly fitted to the resulting “mean trial sequence.” The rows
<bold>(A–C)</bold>
show the results for the three conditions BR-NC, IR-NC, and IR-C respectively. The IR-NC and BR-NC were fitted with the basic iterative model introduced in Petzschner and Glasauer (
<xref ref-type="bibr" rid="B27">2011</xref>
). Predictions for the cue condition IR-C were generated with the categorical as well as cue-combination model. Error bars depict the standard deviation of the reproduced distances across trials.</p>
</caption>
<graphic xlink:href="fnint-06-00058-g005"></graphic>
</fig>
<p>Differences between conditions can be assessed by comparing the estimation of overlapping samples, that is, displacements that were assigned to the &#8220;short&#8221; as well as to the &#8220;long&#8221; distribution. However, assigning distances to a short or long range should affect not only the overlapping distances, but also the estimation, and consequently the reproduction errors, for
<italic>all</italic>
distances presented. Condition-dependent differences in distance reproduction should occur either due to the influence of short-term prior experience or induced by the symbolic cue.</p>
<sec>
<title>Comparison of distance errors</title>
<p>The comparison of the distance reproduction error shows a main effect of
<italic>distance</italic>
[
<italic>F</italic>
(9,38) = 136.2,
<italic>p</italic>
 < 0.0001] together with a highly significant interaction of
<italic>condition</italic>
and
<italic>distance</italic>
[
<italic>F</italic>
(18,342) = 3.45,
<italic>p</italic>
 < 0.0001]. This interaction is due to a clear separation of error patterns between conditions, which can be seen in Figure
<xref ref-type="fig" rid="F6">6</xref>
where the differences between the errors in the interleaved condition (IR-NC) and those in the other two conditions are shown. Note that in both conditions where the ranges were separated either temporally (BR-NC) or by the symbolic cue (IR-C), the errors in the low range correspond on average to overshoots, while the errors in the high range correspond to undershoots with respect to those in the interleaved condition without cue (IR-NC). This correspondence of error patterns also confirms that the symbolic cue causes changes in distance estimation analogous to those found during temporal dissociation of the two ranges. However, the effect in the IR-C condition is not as strong as in the BR-NC condition. Separate
<italic>post hoc</italic>
rm-ANOVAs with only two conditions show that for IR-C versus BR-NC this interaction vanishes [
<italic>F</italic>
(9,171) = 1.82,
<italic>p</italic>
 = 0.068 n.s.], while it remains highly significant for IR-C and IR-NC [
<italic>F</italic>
(9,171) = 3.36,
<italic>p</italic>
 = 0.0008]. Thus, while in IR-C and IR-NC all distance stimuli were the same in magnitude and order, the reproduced distances are clearly different, which shows that the symbolic cue was used by the subjects in a way very similar to exploiting the temporal separation of the two ranges in the BR-NC condition.</p>
<fig id="F6" position="float">
<label>Figure 6</label>
<caption>
<p>
<bold>Mean behavioral differences between conditions</bold>
.
<bold>(A)</bold>
Difference between the IR-NC and the BR-NC conditions for the mean reproduced distances.
<bold>(B)</bold>
Difference between the IR-NC and the IR-C conditions for mean reproduced distances. The only difference between the two conditions was the symbolic cue. Colors code the “short” and “long” range of displacements respectively.</p>
</caption>
<graphic xlink:href="fnint-06-00058-g006"></graphic>
</fig>
</sec>
<sec>
<title>Overlapping samples</title>
<p>The results for the whole range of distances are also supported by the overlapping samples comparison, which reveals a significant interaction of
<italic>condition</italic>
 × 
<italic>range</italic>
(short/long) for all experimental conditions [
<italic>F</italic>
(2,38) = 11.9,
<italic>p</italic>
 = 0.0001]. This implies a significant difference in the estimation of the two overlapping distances depending on the experimental condition (see also, Figure
<xref ref-type="fig" rid="F5">5</xref>
).</p>
<p>Separate ANOVAs with only two conditions revealed the individual relationships between the conditions. Differences in subjects&#8217; behavior based solely on temporal order were assessed by comparing the IR-NC and BR-NC conditions (for a detailed description, see
<xref ref-type="sec" rid="s1">Materials and Methods</xref>
). In analogy with previous results, we find a significant interaction of
<italic>condition</italic>
 × 
<italic>range</italic>
in the overlapping samples comparison [
<italic>F</italic>
(1,19) = 26.5,
<italic>p</italic>
 < 0.001], which confirms that temporal order affects distance reproduction. By testing for the interaction between the IR-C and IR-NC condition, we assessed exclusively cue-based differences in subjects’ behavior. Again we find a significant
<italic>condition</italic>
 × 
<italic>range</italic>
interaction for the overlapping samples comparison [
<italic>F</italic>
(1,19) = 8.8,
<italic>p</italic>
 < 0.01]. To compare performance when the sample ranges were either separated by time or symbolic cue, we performed an rm-ANOVA for the IR-C and BR-NC condition. In this case we find no significant difference between conditions in the overlapping samples comparison [interaction:
<italic>condition</italic>
 × 
<italic>range</italic>
;
<italic>F</italic>
(1,19) = 3.6,
<italic>p</italic>
 &gt; 0.05 n.s.]. Thus, as found above, the symbolic cue leads to behavior that resembles the performance observed when the stimuli were presented in ranges separated in time, as in the BR-NC condition.</p>
<p>The
<italic>post hoc</italic>
analysis of the individual conditions supports the results of the condition comparison. The rm-ANOVA reveals a significant difference in the estimation of the overlapping samples in the BR-NC condition [main effect:
<italic>range</italic>
(“short” vs. “long”)
<italic>F</italic>
(1,19) = 25.7,
<italic>p</italic>
 &lt; 0.001] but no significant difference in the overlapping samples comparison in the IR-NC condition, where no separation between the ranges was provided [main effect:
<italic>range</italic>
(“short” vs. “long”)
<italic>F</italic>
(1,19) = 1.3,
<italic>p</italic>
 > 0.05]. Finally, the symbolic cue in the IR-C condition caused a significant difference in behavior based on the assigned range [overlapping samples comparison: main effect:
<italic>range</italic>
(“short” vs. “long”);
<italic>F</italic>
(1,19) = 9.3,
<italic>p</italic>
 < 0.01].</p>
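For a two-level within-subject factor such as range (“short” vs. “long”), the rm-ANOVA F statistic equals the squared paired t statistic, so values such as F(1,19) can be recomputed from per-subject means alone. The following sketch illustrates this equivalence with hypothetical per-subject data (the actual reproductions are not part of this excerpt):

```python
import math

def rm_anova_two_levels(short, long_):
    """One-way repeated-measures ANOVA for a two-level within factor.

    For two levels, F(1, n - 1) equals the square of the paired t
    statistic, so both are computed from the per-subject differences.
    """
    diffs = [a - b for a, b in zip(short, long_)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
    t = mean_d / math.sqrt(var_d / n)
    return t ** 2, (1, n - 1)  # F value and its degrees of freedom

# Hypothetical per-subject mean reproductions for the two ranges
short_range = [4.1, 3.9, 4.3, 4.0, 3.8, 4.2, 4.1, 4.0, 3.9, 4.2]
long_range = [4.6, 4.4, 4.9, 4.5, 4.3, 4.8, 4.6, 4.4, 4.5, 4.7]
F, dof = rm_anova_two_levels(short_range, long_range)
```

A consistent difference between the two ranges across subjects yields a large F with degrees of freedom (1, n − 1), as in the main effects reported above.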
</sec>
</sec>
<sec>
<title>Modeling</title>
<p>Our results show that the symbolic cue significantly affects the reproduction of the stimuli in a way that is more similar to the behavior in the BR-NC condition than to that in the IR-NC condition. This raises the question of how the knowledge about the symbolic cue is incorporated into the estimation process. We compare our two models by fitting them to the responses of each individual subject and also to the mean responses over all subjects computed for the overall time course of trials, which we refer to as “group mean.” Figure
<xref ref-type="fig" rid="F5">5</xref>
depicts this group mean and the group mean fits of our models.</p>
<sec>
<title>Categorical model for condition IR-C</title>
<p>The categorical model assumes that the target distances presented in each trial stem from one of two categories, and that the symbolic cue informs about the category given in that trial. The three free parameters of this model, the cue reliability, the measurement weight, and the shift term, were estimated by a least-squares fit (group mean fit,
<italic>R</italic>
<sup>2</sup>
 = 0.92:
<italic>p</italic>
<sub>C</sub>
 = 0.74, CI
<sub>95%</sub>
 = [0.70 0.78];
<italic>w</italic>
<sub>m</sub>
 = 0.33;
<italic>Δx</italic>
 = −0.04, CI
<sub>95%</sub>
 = [−0.05 −0.03]; individual participants fit:
<inline-formula>
<mml:math id="M14">
<mml:mrow>
<mml:mover accent="false" class="mml-overline">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>p</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>C</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo accent="true">¯</mml:mo>
</mml:mover>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo class="MathClass-punc">.</mml:mo>
<mml:mn>76</mml:mn>
<mml:mo class="MathClass-bin">±</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo class="MathClass-punc">.</mml:mo>
<mml:mn>13</mml:mn>
<mml:mo class="MathClass-punc">,</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>
<italic>range</italic>
 = [0.57 1.00];
<inline-formula>
<mml:math id="M15">
<mml:mrow>
<mml:mover accent="false" class="mml-overline">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>m</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo accent="true">¯</mml:mo>
</mml:mover>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo class="MathClass-punc">.</mml:mo>
<mml:mn>32</mml:mn>
<mml:mo class="MathClass-bin">±</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo class="MathClass-punc">.</mml:mo>
<mml:mn>11</mml:mn>
<mml:mo class="MathClass-punc">,</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>
<italic>range</italic>
 = [0.06 0.48];
<inline-formula>
<mml:math id="M16">
<mml:mrow>
<mml:mover accent="false" class="mml-overline">
<mml:mrow>
<mml:mi>Δ</mml:mi>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mo accent="true">¯</mml:mo>
</mml:mover>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo class="MathClass-punc">.</mml:mo>
<mml:mn>06</mml:mn>
<mml:mo class="MathClass-bin">±</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo class="MathClass-punc">.</mml:mo>
<mml:mn>19</mml:mn>
<mml:mo class="MathClass-punc">,</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>
<italic>range</italic>
 = [−0.67 0.26]). The shift terms were not normally distributed over all subjects (Lilliefors test,
<italic>p</italic>
 = 0.02). Yet they show a unimodal distribution with a peak close to the shift corresponding to choosing the median of the posterior distribution as an estimate.</p>
</sec>
<sec>
<title>Cue-combination model for condition IR-C</title>
<p>In contrast to the categorical model, the cue-combination model assumes that target distances are drawn from one underlying distribution and treats the symbolic cue as a second sensory input to the system. Its three free parameters are the measurement weight, the fusion weight, and the shift term. Analogously to the categorical model, they were fitted using a least-squares method (group mean fit,
<italic>R</italic>
<sup>2</sup>
 = 0.91:
<italic>w</italic>
<sub>m</sub>
 = 0.38;
<italic>w</italic>
<sub>fu</sub>
 = 0.55;
<italic>Δx</italic>
 = −0.05, CI
<sub>95%</sub>
 = [−0.07 −0.03]; individual participants fit:
<inline-formula>
<mml:math id="M17">
<mml:mrow>
<mml:mover accent="false" class="mml-overline">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>m</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo accent="true">¯</mml:mo>
</mml:mover>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo class="MathClass-punc">.</mml:mo>
<mml:mn>39</mml:mn>
<mml:mo class="MathClass-bin">±</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo class="MathClass-punc">.</mml:mo>
<mml:mn>10</mml:mn>
<mml:mo class="MathClass-punc">,</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>
<italic>range</italic>
 = [0.18 0.50];
<inline-formula>
<mml:math id="M18">
<mml:mrow>
<mml:mover accent="false" class="mml-overline">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>fu</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo accent="true">¯</mml:mo>
</mml:mover>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo class="MathClass-punc">.</mml:mo>
<mml:mn>54</mml:mn>
<mml:mo class="MathClass-bin">±</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo class="MathClass-punc">.</mml:mo>
<mml:mn>16</mml:mn>
<mml:mo class="MathClass-punc">,</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>
<italic>range</italic>
 = [0.25 0.81];
<italic>Δx</italic>
 = −0.07 ± 0.19,
<italic>range</italic>
 = [−0.67 0.25]). As in the case of the categorical model, shift terms fitted for the cue-combination model were not normally distributed over all subjects (Lilliefors test,
<italic>p</italic>
 = 0.03), yet showed a unimodal distribution with a peak near the shift corresponding to the median.</p>
</sec>
<sec>
<title>Basic iterative model for conditions IR-NC and BR-NC</title>
<p>In the absence of the symbolic cue, the two new models reduce to the basic iterative model. For comparison, we fitted this model to the two no-cue conditions IR-NC and BR-NC. The model has two free parameters, which were fitted for each of these two conditions individually (IR-NC group mean fit:
<italic>w</italic>
<sub>m</sub>
 = 0.33;
<italic>Δx</italic>
 = −0.05, CI
<sub>95%</sub>
 = [−0.07 −0.03]; IR-NC individual participants fit:
<inline-formula>
<mml:math id="M19">
<mml:mrow>
<mml:mover accent="false" class="mml-overline">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>m</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo accent="true">¯</mml:mo>
</mml:mover>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo class="MathClass-punc">.</mml:mo>
<mml:mn>33</mml:mn>
<mml:mo class="MathClass-bin">±</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo class="MathClass-punc">.</mml:mo>
<mml:mn>13</mml:mn>
<mml:mo class="MathClass-punc">,</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>
<italic>range</italic>
 = [0.03 0.48];
<inline-formula>
<mml:math id="M20">
<mml:mrow>
<mml:mover accent="false" class="mml-overline">
<mml:mrow>
<mml:mi>Δ</mml:mi>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mo accent="true">¯</mml:mo>
</mml:mover>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo class="MathClass-punc">.</mml:mo>
<mml:mn>07</mml:mn>
<mml:mo class="MathClass-bin">±</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo class="MathClass-punc">.</mml:mo>
<mml:mn>22</mml:mn>
<mml:mo class="MathClass-punc">,</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>
<italic>range</italic>
 = [−0.67 0.37]; BR-NC group mean fit:
<italic>w</italic>
<sub>m</sub>
 = 0.34;
<italic>Δx</italic>
 = −0.04, CI
<sub>95%</sub>
 = [−0.06 −0.02]; BR-NC individual participants fit:
<inline-formula>
<mml:math id="M21">
<mml:mrow>
<mml:mover accent="false" class="mml-overline">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>m</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo accent="true">¯</mml:mo>
</mml:mover>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo class="MathClass-punc">.</mml:mo>
<mml:mn>33</mml:mn>
<mml:mo class="MathClass-bin">±</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo class="MathClass-punc">.</mml:mo>
<mml:mn>09</mml:mn>
<mml:mo class="MathClass-punc">,</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>
<italic>range</italic>
 = [0.14 0.49];
<inline-formula>
<mml:math id="M22">
<mml:mrow>
<mml:mover accent="false" class="mml-overline">
<mml:mrow>
<mml:mi>Δ</mml:mi>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mo accent="true">¯</mml:mo>
</mml:mover>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo class="MathClass-punc">.</mml:mo>
<mml:mn>04</mml:mn>
<mml:mo class="MathClass-bin">±</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo class="MathClass-punc">.</mml:mo>
<mml:mn>12</mml:mn>
<mml:mo class="MathClass-punc">,</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>
<italic>range</italic>
 = [−0.29 0.26]).</p>
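The role of the two free parameters can be illustrated with a deliberately simplified sketch of the iterative estimator (the actual model equations are given earlier in the paper; the prior-update constant k_prior below is a hypothetical stand-in for the Kalman gain):

```python
def iterative_estimate(measurements, w_m, dx, k_prior=0.3):
    """Simplified sketch of the iterative Bayesian estimator.

    Each estimate fuses the current measurement (weight w_m) with the
    running prior mean (weight 1 - w_m) and adds a constant shift dx;
    the prior mean is then nudged toward the new estimate.
    """
    prior_mean = measurements[0]  # initialize the prior on the first trial
    estimates = []
    for m in measurements:
        est = w_m * m + (1.0 - w_m) * prior_mean + dx
        estimates.append(est)
        prior_mean += k_prior * (est - prior_mean)  # iterative prior update
    return estimates

# With the fitted w_m = 0.33 and dx = -0.05, a large displacement after a
# run of small ones is pulled toward the prior mean (regression effect)
ests = iterative_estimate([4.0, 4.0, 4.0, 8.0], w_m=0.33, dx=-0.05)
```

With a measurement weight well below unity, the final estimate for the large displacement lies far below the actual stimulus, qualitatively reproducing the regression toward previously experienced distances.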
</sec>
<sec>
<title>Model comparison</title>
<p>To compare the categorical and cue-combination model, we computed
<italic>R</italic>
<sup>2</sup>
values for individual participant fits (see Figure
<xref ref-type="fig" rid="F7">7</xref>
) in the IR-C condition (categorical model:
<inline-formula>
<mml:math id="M23">
<mml:mrow>
<mml:mover accent="false" class="mml-overline">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi>R</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
<mml:mo accent="true">¯</mml:mo>
</mml:mover>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo class="MathClass-punc">.</mml:mo>
<mml:mn>54</mml:mn>
<mml:mo class="MathClass-bin">±</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo class="MathClass-punc">.</mml:mo>
<mml:mn>15</mml:mn>
<mml:mo class="MathClass-punc">,</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>
<italic>range</italic>
 = [0.31 0.88]; cue-combination model:
<inline-formula>
<mml:math id="M24">
<mml:mrow>
<mml:mover accent="false" class="mml-overline">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi>R</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
<mml:mo accent="true">¯</mml:mo>
</mml:mover>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo class="MathClass-punc">.</mml:mo>
<mml:mn>54</mml:mn>
<mml:mo class="MathClass-bin">±</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo class="MathClass-punc">.</mml:mo>
<mml:mn>15</mml:mn>
<mml:mo class="MathClass-punc">,</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>
<italic>range</italic>
 = [0.24 0.88]) as well as for the group mean fits (categorical model:
<italic>R</italic>
<sup>2</sup>
 = 0.92; cue-combination model:
<italic>R</italic>
<sup>2</sup>
 = 0.91). In the other two conditions without a symbolic cue, the basic iterative model shows similar goodness of fit for the individual participants (IR-NC:
<inline-formula>
<mml:math id="M25">
<mml:mrow>
<mml:mover accent="false" class="mml-overline">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi>R</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
<mml:mo accent="true">¯</mml:mo>
</mml:mover>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo class="MathClass-punc">.</mml:mo>
<mml:mn>45</mml:mn>
<mml:mo class="MathClass-bin">±</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo class="MathClass-punc">.</mml:mo>
<mml:mn>18</mml:mn>
<mml:mo class="MathClass-punc">,</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>
<italic>range</italic>
 = [0.05 0.72]; BR-NC:
<inline-formula>
<mml:math id="M26">
<mml:mrow>
<mml:mover accent="false" class="mml-overline">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi>R</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
<mml:mo accent="true">¯</mml:mo>
</mml:mover>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo class="MathClass-punc">.</mml:mo>
<mml:mn>51</mml:mn>
<mml:mo class="MathClass-bin">±</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo class="MathClass-punc">.</mml:mo>
<mml:mn>27</mml:mn>
<mml:mo class="MathClass-punc">,</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>
<italic>range</italic>
 = [−0.40 0.85]). As in the IR-C condition, the group mean fit is better (IR-NC:
<italic>R</italic>
<sup>2</sup>
 = 0.87; BR-NC:
<italic>R</italic>
<sup>2</sup>
 = 0.88) than the individual estimates.</p>
<fig id="F7" position="float">
<label>Figure 7</label>
<caption>
<p>
<bold>Model Comparison</bold>
. Bar plot of individual
<italic>R</italic>
<sup>2</sup>
values of the model fit for the categorical (gray) and cue-combination model (black) to each subject’s behavior (1–20) in the IR-C condition. A comparison of the goodness of fit of the categorical and the cue-combination model revealed no significant difference between the two models.</p>
</caption>
<graphic xlink:href="fnint-06-00058-g007"></graphic>
</fig>
<p>In comparing the goodness of fit of the categorical and the cue-combination model (non-parametric Wilcoxon signed rank test), no significant difference between the two models could be found (
<italic>p</italic>
 > 0.45). We also tested whether the small differences in the subject-by-subject
<italic>R</italic>
<sup>2</sup>
values that can be seen in Figure
<xref ref-type="fig" rid="F7">7</xref>
are related to the subjects’ response biases and variances. However, we could not find any significant correlations (Spearman ranks test,
<italic>p</italic>
 > 0.13).</p>
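The model comparison above can be reproduced in outline with SciPy’s implementation of the Wilcoxon signed rank test; the paired R² values below are hypothetical placeholders, not the fitted values from this study:

```python
from scipy.stats import wilcoxon

# Hypothetical paired R^2 values for 20 subjects (placeholders only)
r2_categorical = [0.54, 0.48, 0.61, 0.39, 0.57, 0.66, 0.44, 0.52, 0.71, 0.35,
                  0.49, 0.58, 0.63, 0.41, 0.55, 0.68, 0.46, 0.50, 0.73, 0.37]
r2_cue_comb = [0.53, 0.50, 0.60, 0.40, 0.56, 0.67, 0.43, 0.53, 0.70, 0.36,
               0.50, 0.57, 0.64, 0.40, 0.56, 0.67, 0.47, 0.49, 0.74, 0.36]

# Non-parametric paired comparison of per-subject goodness of fit
stat, p = wilcoxon(r2_categorical, r2_cue_comb)
```

Because the test is paired and non-parametric, it compares the two models subject by subject without assuming normally distributed R² differences.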
</sec>
</sec>
</sec>
<sec sec-type="discussion">
<title>Discussion</title>
<p>The context in which a stimulus occurs can contain additional relevant information about the stimulus itself. It is thus advantageous to combine all available types of information and use the composite as an estimate of the stimulus. Here we demonstrate that this fusion of information takes place in distance estimation by path integration, where subjects incorporated prior experience and abstract information provided by a symbolic cue into their current estimate of displacement. We proposed two generative Bayesian models that describe this fusion of information based on two distinct assumptions – categorization and cue-combination.</p>
<sec>
<title>Cue-based range and regression effects</title>
<p>The influence of the symbolic cue on distance estimation behavior was assessed by comparing the cue condition (IR-C) to two reference conditions, each of which mimicked one of the two possible extreme cases of cue usage. The no-cue condition BR-NC tested two overlapping ranges of stimuli that were blocked in time, in order to change subjects’ respective prior experience; it mimicked the case in which the pre-cueing by the words “short” or “long” would lead to a full separation of stimuli into two groups of events or categories. The IR-NC condition combined these two ranges into a single distribution of distances. The order and magnitude of the stimuli were exactly the same as in the IR-C condition, thus replicating the cue condition for the case where the symbolic cue would be fully ignored.</p>
<p>In all three experimental conditions we observed a bias toward certain displacements, also referred to as the regression effect (Hollingworth,
<xref ref-type="bibr" rid="B16">1910</xref>
). In the no-cue conditions BR-NC and IR-NC the bias depended on the respective underlying sample distribution and could be explained by incorporation of short-term prior experience into the current estimate of displacements, as shown in our previous study (Petzschner and Glasauer,
<xref ref-type="bibr" rid="B27">2011</xref>
). The behavior in the cue condition IR-C did not resemble that of the IR-NC condition, although the order and size of the sample displacements were the same. Rather, it reflected the behavior observed for two distinct sample ranges in the BR-NC condition, even though the effect was smaller.</p>
<p>Thus, the bias in the cue condition cannot be explained exclusively by the use of prior experience. This led to the question of how the additional symbolic cue information is processed. One possible explanation comes from the studies on categorization effects (Huttenlocher et al.,
<xref ref-type="bibr" rid="B17">1991</xref>
; Cheng et al.,
<xref ref-type="bibr" rid="B6">2010</xref>
). If there is uncertainty in the stimulus metric, then information about stimulus categories can be incorporated into the estimation process (Huttenlocher et al.,
<xref ref-type="bibr" rid="B17">1991</xref>
; Feldman et al.,
<xref ref-type="bibr" rid="B15">2009</xref>
). In our case, the symbolic cue could cause a sorting of stimuli into categories such that the expectation about the upcoming stimulus varies depending on whether subjects assume the stimulus to be drawn from the “short” or “long” category. We elaborated on this idea in the categorical model.</p>
<p>Another possible explanation, which we pursued in our cue-combination model, comes from a different field of research – multi-modal sensory cue-combination (Ernst and Banks,
<xref ref-type="bibr" rid="B11">2002</xref>
; Ernst and Bülthoff,
<xref ref-type="bibr" rid="B12">2004</xref>
). Similar to our findings, von Hopffgarten and Bremmer (
<xref ref-type="bibr" rid="B34">2011</xref>
) showed in a recent study on self-motion reproduction that subjects are capable of learning an abstract relationship between a novel cue and the stimulus and exploit that information to improve their performance. In their study, the frequency of a simultaneous auditory signal indicated movement speed and was used by the subjects to improve self-motion reproduction. Their study provides evidence that subjects learned the initially unknown frequency-velocity mapping provided by the auditory cue, comparable to the mapping of the symbolic cue to distance in our present experiment. Von Hopffgarten and Bremmer argued that the observed behavior could be interpreted by “sensory combination” (Ernst and Bülthoff,
<xref ref-type="bibr" rid="B12">2004</xref>
), where the auditory input served as an additional, non-redundant cue.</p>
</sec>
<sec>
<title>Categorical model</title>
<p>The categorical model is based on the assumption that the stimulus comes from one of two distinct, but perhaps overlapping, categories of stimuli, each represented by its own probability distribution (Feldman et al.,
<xref ref-type="bibr" rid="B15">2009</xref>
). Accordingly the symbolic cue provides information about the respective category. The order of events in this generative model is as follows (Figure
<xref ref-type="fig" rid="F1">1</xref>
B): (1) the category is chosen, (2) the information about the category is provided as the symbolic cue, and (3) the stimulus is drawn from the distribution corresponding to the category. Note that the symbolic cue does not necessarily provide reliable information about the category; hence, its prediction of the respective category is not always correct. The model represents this uncertainty with a trial-independent probability that we refer to as the cue reliability.</p>
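As a sketch, one trial of this generative process can be written as follows; the category means, the category standard deviation, and the use of Gaussian category distributions are illustrative assumptions, while p_c = 0.74 corresponds to the fitted group mean cue reliability:

```python
import random

def generate_trial(cat_means, cat_sd=0.8, p_c=0.74, rng=random):
    """One trial of the categorical generative model (illustrative sketch).

    (1) a category is chosen at random, (2) the symbolic cue reports that
    category, but only with probability p_c (the cue reliability), and
    (3) the stimulus is drawn from the chosen category's distribution.
    """
    category = rng.randrange(len(cat_means))           # step 1
    if rng.random() < p_c:                             # step 2
        cue = category
    else:
        cue = rng.choice([c for c in range(len(cat_means)) if c != category])
    stimulus = rng.gauss(cat_means[category], cat_sd)  # step 3
    return cue, stimulus

random.seed(1)
trials = [generate_trial([4.0, 7.0]) for _ in range(1000)]
```

Because the cue is only probabilistically tied to the category, an observer inverting this process must weight the cue by its reliability rather than trusting it fully.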
<p>Since the categories are unknown, they have to be learned from the symbolic cue values (“short” and “long” in the present experiment) and the stimulus presentation. Note that the semantic interpretation of the cue values is not sufficient to determine the categories, since the cue values do not specify the ranges; they only denote an order within the presented stimuli, i.e., that a “short” distance is probably shorter than a “long” one. Learning is achieved by iterative Bayesian estimation analogous to Petzschner and Glasauer (
<xref ref-type="bibr" rid="B27">2011</xref>
). Our categorical model is thus an extension of the model of Feldman et al. (
<xref ref-type="bibr" rid="B15">2009</xref>
), which was proposed to explain the so-called perceptual magnet effect in speech perception. In contrast to their model, where no pre-cueing was done and the categories were assumed to be fixed, our model provides the symbolic cue values as additional uncertain information about the category and allows the category means to be learned during the course of the experiment. The variance of the prior distributions could also be learned during the experiment (Berniker et al.,
<xref ref-type="bibr" rid="B4">2010</xref>
). However, in the present study we assume that it is, apart from an initialization phase, constant throughout the experiment. For other categorization tasks, such as understanding of speech, it has been proposed that the learning of weighting of acoustic cues for categorization might take place during development (Toscano and McMurray,
<xref ref-type="bibr" rid="B31">2010</xref>
).</p>
<p>The combination of categorical information with the measured stimulus value was also proposed in a model by Huttenlocher et al. (
<xref ref-type="bibr" rid="B17">1991</xref>
) for estimating spatial location. In their model, categorical information is used in two distinct ways: first, the remembered stimulus measurement is weighted with categorical prototype information; second, the resulting estimates are constrained to fall within the category boundaries. In our model, estimates are not artificially restricted to certain boundaries, even though the weighting with the learned mean of the respective category biases them toward this mean. Hence, our estimation process explains the bias toward the category means that is reported in a variety of psychophysical studies. This
<italic>central tendency bias</italic>
,
<italic>schema</italic>
, or
<italic>range effect</italic>
, causes estimates to be biased toward the category they were assigned to (Hollingworth,
<xref ref-type="bibr" rid="B16">1910</xref>
; Johnson and Vickers,
<xref ref-type="bibr" rid="B19">1987</xref>
; Cheng et al.,
<xref ref-type="bibr" rid="B6">2010</xref>
).</p>
<p>The categorical model can be extended to an arbitrary number of categories. However, introducing new categories or new cue values during the experiment would require not only learning that category, but also re-computing the relative weights of the other categories. In other words, a new category or new cue value would directly affect the other categories.</p>
<p>In the present work the number of categories is predefined and given by the number of cue values, but under many other circumstances this is not the case. Recent work (e.g., Lucas and Griffiths,
<xref ref-type="bibr" rid="B25">2010</xref>
) addresses the question of how we determine the number of categories in the context of learning of causal structures. While this is not required in the present study, our cue-combination model, which is independent of the number of cues, may well be capable of dynamically adapting to new cue values added during the course of the experiment. This could be considered as a weaker form of structural learning.</p>
</sec>
<sec>
<title>Cue-combination model</title>
<p>In contrast to the categorical model, the cue-combination model assumes that the stimulus comes from one continuous range of stimuli and the pre-cueing provides additional evidence about where in this range the current stimulus can be found. This idea is similar to common models in sensory cue-combination, where the sensory inputs from a common source are fused in order to build a unified percept of its origin (Ernst and Banks,
<xref ref-type="bibr" rid="B11">2002</xref>
; Körding et al.,
<xref ref-type="bibr" rid="B21">2007</xref>
). In terms of a generative model, the order of events in the cue-combination model is as follows (Figure
<xref ref-type="fig" rid="F1">1</xref>
C): (1) the stimulus is drawn from the underlying distribution, and (2) the symbolic cue is determined from this stimulus by some mapping. In our current implementation, this mapping is assumed to be probabilistic: a large stimulus is assumed to cause the respective symbolic cue value in most cases, but on some occasions it can also lead to the other cue value. Since the mapping between stimulus and cue value is not pre-specified, it has to be learned over the course of the experiment. This is achieved by iteratively adapting the mean of the likelihood function associated with each symbolic cue value. In addition to the unknown mapping, the underlying stimulus distribution is learned during the experiment (as in Petzschner and Glasauer,
<xref ref-type="bibr" rid="B27">2011</xref>
).</p>
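A sketch of one trial of this generative process is given below; the logistic form of the stimulus-to-cue mapping and all numerical values are illustrative assumptions, since the model only specifies that the mapping is probabilistic:

```python
import math
import random

def generate_trial_cc(mu=5.5, sd=1.5, steepness=1.5, rng=random):
    """One trial of the cue-combination generative model (illustrative sketch).

    (1) the stimulus is drawn from a single underlying distribution, and
    (2) the cue follows probabilistically from the stimulus: larger
    displacements are more likely, but not guaranteed, to yield "long".
    """
    stimulus = rng.gauss(mu, sd)                        # step 1
    p_long = 1.0 / (1.0 + math.exp(-steepness * (stimulus - mu)))
    cue = "long" if rng.random() < p_long else "short"  # step 2
    return cue, stimulus

random.seed(2)
trials = [generate_trial_cc() for _ in range(2000)]
```

In contrast to the categorical sketch, there is only one stimulus distribution here; the cue carries graded, noisy evidence about where in that single range the current stimulus lies.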
<p>A more intuitive explanation of the cue-combination model is provided by the observer’s point of view. Given the stimulus and an additional corresponding cue, one aims to combine these two sources of information in an optimal manner. This requires that the cue can be related to a certain displacement value, which can be achieved by learning the relation between the current stimulus distance and the respective cue on a trial-by-trial basis. We refer to this process as mapping in the present model.</p>
<p>The mapping of the symbolic cue values to the stimulus dimension does not require knowledge about the possible number of cue values. Rather, the adaptation is similar to cue calibration, e.g., learning the transformation between one stimulus dimension and another (Burge et al.,
<xref ref-type="bibr" rid="B5">2010</xref>
; Zaidel et al.,
<xref ref-type="bibr" rid="B35">2011</xref>
). Thus, in contrast to the categorical model, adding another symbolic cue value during the experiment would not require a change in the mapping of the previously presented cues. This makes the cue-combination model more flexible to changes than the categorical model.</p>
</sec>
<sec>
<title>Model comparison</title>
<p>Interestingly, the results of the categorical and cue-combination model are very similar, although the underlying assumptions are substantially different. The categorical model is based on an intuitive assumption about how the stimuli presented to the subjects are generated: it assumes that there are two distinct categories, from which the stimuli are drawn. This corresponds, for example, to the categories in speech production, where a certain syllable is produced or understood based on a distinct category. The cue-combination model does not assume such an underlying structure, but rather treats the symbolic cue as an additional modality. Consequently, the cue-combination model is more flexible to changes in cueing while, at least for our experiment, being equally powerful in explaining the data compared to the categorical model. The main reason for the similar performance of both models is, apart from the experimental setting, the iterative updating of the “meaning” associated with the symbolic cue, which leads to very similar sources of information regarding the range of stimuli denoted by the cues. This information is, in both models, weighted by reliability, either in the form of a variance associated with the symbolic cue or of a probability that the symbolic cue is accurate. Thus, both models can describe the behavior observed in our experiments fairly well: our participants used the symbolic cues, were able to associate them with the stimulus magnitude, but did not completely trust them, as evidenced by the difference between the IR-C and BR-NC conditions.</p>
<p>Similarly, both models would also have performed equally well in predicting the two outcomes of cue usage mimicked in the IR-NC and BR-NC condition (Figure
<xref ref-type="fig" rid="F3">3</xref>
). Had the information provided by the cue not been incorporated into the estimate of the displacements, this would have resulted in a cue weighting close to zero, reflected by either a cue reliability close to 0.5 in the categorical model or a very high cue variance in the cue-combination model. An extreme cue usage, as mimicked by the BR-NC condition, would have the opposite effect on the respective parameters.</p>
<p>This raises the question under which circumstances the two models would make different predictions. One major difference between the two estimation processes lies in the different means of incorporating prior knowledge. Consider Figures
<xref ref-type="fig" rid="F4">4</xref>
A,B. While the categorical model uses a combined prior that is driven by the occurrence of all respective cues, the cue-combination model incorporates a global prior that depends only on short-term prior experience of the stimuli, independent of the corresponding cues. We used the parameters derived from the fit of the experimental data in this paper to test how these differences could lead to differing predictions of the cue-combination model and categorical model under specific circumstances.</p>
<p>Imagine the case where the two ranges are clearly separated. Due to the influence of the experience-driven prior, the cue-combination model would be biased by the full range of all displacements, causing a global underestimation in the long range and an overestimation in the short range of stimuli. By contrast, the combined prior in the categorical model would show two discrete peaks at the centers of the respective categories and thus lead to estimates that are, for both ranges, centered closer to the individual category means. However, this strong bias in the cue-combination model would only become evident if we assume a constant variance of the prior. If the variance of the prior is also updated on a trial-by-trial basis (Berniker et al.,
<xref ref-type="bibr" rid="B4">2010</xref>
; Verstynen and Sabes,
<xref ref-type="bibr" rid="B32">2011</xref>
), both models would again become similar.</p>
<p>Yet another case in which both models differ becomes obvious when the cue is omitted in some catch trials. The cue-combination model would then reduce to the basic model and rely on the global unimodal prior, thus resulting in a global tendency toward the mean (Petzschner and Glasauer,
<xref ref-type="bibr" rid="B27">2011</xref>
). In contrast, the categorical model works with two prior distributions even when the cue is missing. In that case, our categorical model reduces to the category model of Feldman et al. (
<xref ref-type="bibr" rid="B15">2009</xref>
) and would exhibit the perceptual magnet effect, which biases the reproduction toward the category means.</p>
<p>Another difference should be observable in cases where the presentation of the cue and stimulus is not fully randomized. Consider a case where the “long” cue is repeatedly presented in a block with a long displacement. The cue-combination model would show a quick adaptation of the global prior to these long displacements, which would result in reproduction values biased toward the long displacements. The categorical model would predict a much weaker adaptation to the block, since it still incorporates all potential cues, the long as well as the short ones. In that respect, the categorical model has a longer memory and less flexibility for fast changes.</p>
<p>Finally, both models become mathematically equivalent for a specific parameter combination. This is the case if the variance of the global prior in the cue-combination model becomes large enough and the cue reliability in the categorical model is set to unity. That is, for the categorical model we have to set
<italic>p</italic>
<sub>C</sub>
 = 1 in Eqs
<xref ref-type="disp-formula" rid="E10">10</xref>
and
<xref ref-type="disp-formula" rid="E12">12</xref>
. For the cue-combination model, we set
<inline-formula>
<mml:math id="M27">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mi>∞</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>
in Eq.
<xref ref-type="disp-formula" rid="E20">20</xref>
so that
<italic>w</italic>
<sub>fu</sub>
 = 1. Then the conditional expectations for both models (Eqs
<xref ref-type="disp-formula" rid="E12">12</xref>
and
<xref ref-type="disp-formula" rid="E19">19</xref>
) become equivalent.</p>
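This equivalence can be checked numerically. The following minimal sketch (all function names, parameter values, and the two-category setup are illustrative assumptions, not taken from the paper) computes the conditional expectation of both models and shows that they coincide when the cue reliability is unity and the global-prior variance is effectively infinite:

```python
import math

def categorical_estimate(s, j, mus, sigma_s2, sigma_a2, p_c):
    """Posterior-weighted sum of category-conditional means (cf. Eq. A8)."""
    n = len(mus)
    weights = []
    for i, mu in enumerate(mus):
        cue_lik = p_c if i == j else (1.0 - p_c) / (n - 1)
        # marginal likelihood P(s | a_i) ~ N(mu_i, sigma_s2 + sigma_a2) (cf. Eq. A11)
        var = sigma_s2 + sigma_a2
        s_lik = math.exp(-(s - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)
        weights.append(cue_lik * s_lik)  # flat category prior 1/n cancels
    z = sum(weights)
    est = 0.0
    for w, mu in zip(weights, mus):
        # E[T | s, a_i]: standard Gaussian fusion of measurement and category prior
        post_mean = (sigma_a2 * s + sigma_s2 * mu) / (sigma_a2 + sigma_s2)
        est += (w / z) * post_mean
    return est

def cue_combination_estimate(s, m_j, mu_T, sigma_s2, sigma_c2, sigma_T2):
    """Precision-weighted fusion of measurement, mapped cue, and global prior."""
    num = s / sigma_s2 + m_j / sigma_c2 + mu_T / sigma_T2
    den = 1.0 / sigma_s2 + 1.0 / sigma_c2 + 1.0 / sigma_T2
    return num / den

mus = [2.0, 6.0]            # hypothetical category means
s, j = 3.0, 0               # measurement and "short" cue
e_cat = categorical_estimate(s, j, mus, sigma_s2=1.0, sigma_a2=4.0, p_c=1.0)
e_cc = cue_combination_estimate(s, mus[j], mu_T=0.0,
                                sigma_s2=1.0, sigma_c2=4.0, sigma_T2=1e12)
print(e_cat, e_cc)          # the two estimates agree
```

With the cue mapped onto the corresponding category mean and the cue variance matched to the category variance, both estimates reduce to the same precision-weighted average of measurement and cue value.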
</sec>
<sec>
<title>Iterative learning and calibration</title>
<p>The no-cue conditions demonstrated that subjects incorporated knowledge about the stimulus history into their current estimate of displacement. We model this iterative learning of prior knowledge by a discrete Kalman filter. In our previous work we showed that this online update of prior experience explains small variations in the data that a fixed prior could not account for (Petzschner and Glasauer,
<xref ref-type="bibr" rid="B27">2011</xref>
). That humans are indeed capable of learning not only the mean but also the variance of an experience-driven prior distribution was also recently shown (Berniker et al.,
<xref ref-type="bibr" rid="B4">2010</xref>
; Verstynen and Sabes,
<xref ref-type="bibr" rid="B32">2011</xref>
).</p>
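The iterative update can be sketched as a one-dimensional discrete Kalman filter; the parameterization below is purely illustrative (initial values, measurement noise, and the static-state assumption are not taken from the paper):

```python
# Minimal sketch of iterative prior learning via a 1-D discrete Kalman filter.
# Parameter names and values are illustrative assumptions.

def kalman_update(prior_mean, prior_var, measurement, meas_var):
    """One trial: fuse the current prior with a noisy measurement."""
    k = prior_var / (prior_var + meas_var)      # Kalman gain
    new_mean = prior_mean + k * (measurement - prior_mean)
    new_var = (1.0 - k) * prior_var
    return new_mean, new_var

# The prior drifts toward the experienced stimulus statistics trial by trial.
mean, var = 0.0, 100.0                          # weak initial prior
for stim in [5.0, 6.0, 5.5, 4.5, 5.0]:          # sample displacements
    mean, var = kalman_update(mean, var, stim, meas_var=1.0)
print(mean, var)
```

Without a process-noise term this filter is equivalent to conjugate Bayesian updating of a static mean, so the prior variance shrinks monotonically; adding process noise would keep the prior adaptable to changing stimulus statistics.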
<p>The significant influence of the symbolic cue on the behavioral performance in the cue condition further shows that most subjects also incorporated this information into their estimate of displacement. As mentioned above, the semantic interpretation of the cue values alone was not sufficient to allow such a fusion of cue values and sensory stimulus; subjects thus had to learn how to associate the two. The cue-combination model interprets this learning as a mapping of the cue values onto the stimulus dimension. That an abstract, even arbitrary, mapping between different types of information can be acquired during the course of an experiment was also shown by Ernst (
<xref ref-type="bibr" rid="B10">2007</xref>
). In his study, subjects were trained with stimuli that are usually unrelated in the world, such as the luminance of an object and its stiffness, but which had a fixed mapping in the experiment. He showed that subjects learned to integrate the two formerly unrelated signals, similar to the mapping in our models. Calibration is, however, not only necessary between unrelated stimulus dimensions, but also between those that are normally related, such as visual and vestibular signals indicating self-motion. A recent study showed that such calibration is independent of the reliability of the cue (Zaidel et al.,
<xref ref-type="bibr" rid="B35">2011</xref>
), which corresponds to the learning or calibration implemented in our models.</p>
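One simple way to picture such a learned mapping is a running average of the displacements that co-occurred with each cue label; the sketch below is an illustrative assumption, not the learning rule used in the models:

```python
# Hypothetical sketch: learn the mapping of symbolic cue labels onto the
# stimulus (displacement) dimension as per-cue running averages.

def learn_cue_mapping(trials):
    """trials: list of (cue_label, displacement) pairs observed so far."""
    totals, counts = {}, {}
    for cue, d in trials:
        totals[cue] = totals.get(cue, 0.0) + d
        counts[cue] = counts.get(cue, 0) + 1
    return {cue: totals[cue] / counts[cue] for cue in totals}

trials = [("short", 2.1), ("long", 5.9), ("short", 1.9), ("long", 6.1)]
mapping = learn_cue_mapping(trials)
print(mapping)   # each symbolic label mapped onto a metric value
```

After a few trials each label is associated with the mean displacement it accompanied, which is the kind of cue-to-stimulus calibration the cue-combination model assumes.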
</sec>
</sec>
<sec>
<title>Conclusion</title>
<p>Natural human action and perception profit from the incorporation of contextual information. We show that in addition to the previously found influence of prior experience, humans are also capable of using non-metric information, in the form of a symbolic cue, for their estimate of displacement, even if the mapping of the symbolic cue onto the stimulus dimension has to be acquired during the experiment. Two substantially different models of how this information enters the estimation process led to equally good fits to the experimental data. This result sheds new light on the modeling of behavioral problems such as categorization, cue-combination, and trial-to-trial dependencies.</p>
</sec>
<sec>
<title>Conflict of Interest Statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</body>
<back>
<ack>
<p>This research was supported by the BMBF (grants IFB 01EO0901 and BCCN 01GQ0440). We thank Virginia Flanagin and Paul MacNeilage for valuable comments on a previous version of the manuscript.</p>
</ack>
<app-group>
<app id="A1">
<title>Appendix</title>
<sec>
<title>Categorical model</title>
<p>Our categorical model shall infer the target distance
<italic>T</italic>
from given measurement
<italic>s</italic>
and cue
<italic>c
<sub>j</sub>
</italic>
. Here we derive the posterior distribution over
<italic>T</italic>
from known distributions, along with its conditional expectation. The posterior is given as</p>
<disp-formula id="E27">
<label>(A1)</label>
<mml:math id="M54">
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>S</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>C</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:munderover accentunder="false" accent="false">
<mml:mrow>
<mml:mo mathsize="big">∑</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:munderover>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>S</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>C</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-punc">.</mml:mo>
</mml:math>
</disp-formula>
<p>We have to marginalize over the categories because they are unknown. The key idea is now to express the posterior as a weighted sum over distributions of which we can easily compute the expectation.</p>
<p>We first factorize the posterior within the sum to obtain a distribution whose expectation we can easily compute. As it turns out, it is the posterior of
<italic>T</italic>
given
<italic>S</italic>
and the category.</p>
<disp-formula id="E28">
<label>(A2)</label>
<mml:math id="M55">
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>S</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>C</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>S</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>C</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-bin">⋅</mml:mo>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>S</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>C</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-punc">.</mml:mo>
</mml:math>
</disp-formula>
<p>Due to model assumptions concerning the factorization, the posterior of
<italic>T</italic>
does not depend on
<italic>C</italic>
once the category
<italic>A</italic>
is given. The full joint distribution for our model, according to our assumptions (Figure
<xref ref-type="fig" rid="F1">1</xref>
B), factorizes as follows:</p>
<disp-formula id="E29">
<label>(A3)</label>
<mml:math id="M56">
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>A</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>S</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>C</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-punc">.</mml:mo>
</mml:math>
</disp-formula>
<p>We use this factorization of the full joint to marginalize out
<italic>T</italic>
.</p>
<disp-formula id="E30">
<label>(A4)</label>
<mml:math id="M57">
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>S</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>C</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mo class="MathClass-op">∫</mml:mo>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mi>d</mml:mi>
<mml:mi>t</mml:mi>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-op">∫</mml:mo>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mi>d</mml:mi>
<mml:mi>t</mml:mi>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-punc">.</mml:mo>
</mml:math>
</disp-formula>
<p>Then, by applying Bayes’ theorem, we see</p>
<disp-formula id="E31">
<label>(A5)</label>
<mml:math id="M58">
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>S</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>C</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>S</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>C</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>S</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>C</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfrac>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfrac>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfrac>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>T</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfrac>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>S</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-punc">.</mml:mo>
</mml:math>
</disp-formula>
<p>We see that all factors depending on
<italic>C</italic>
cancel each other out. The posterior thus does not depend on
<italic>C</italic>
, given
<italic>A</italic>
. Following the definition of the expectation, we now have</p>
<disp-formula id="E32">
<label>(A6)</label>
<mml:math id="M59">
<mml:mi>E</mml:mi>
<mml:mfenced separators="" open="[" close="]">
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>s</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mo class="MathClass-op">∫</mml:mo>
<mml:mi>t</mml:mi>
<mml:munderover accentunder="false" accent="false">
<mml:mrow>
<mml:mo mathsize="big">∑</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:munderover>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>s</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mi>d</mml:mi>
<mml:mi>t</mml:mi>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mo class="MathClass-op">∫</mml:mo>
<mml:mi>t</mml:mi>
<mml:munderover accentunder="false" accent="false">
<mml:mrow>
<mml:mo mathsize="big">∑</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:munderover>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>s</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-bin">⋅</mml:mo>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>s</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mi>d</mml:mi>
<mml:mi>t</mml:mi>
</mml:math>
</disp-formula>
<p>with
<italic>s</italic>
and
<italic>c
<sub>j</sub>
</italic>
being the known distance measurement and the known cue. We see that the posterior of the category
<italic>P</italic>
(
<italic>a
<sub>i</sub>
</italic>
 | 
<italic>s</italic>
,
<italic>c
<sub>j</sub>
</italic>
) does not depend on
<italic>t</italic>
, which means we can do the following reordering. We pull
<italic>t</italic>
into the sum, exchange sum and integral, and pull out the posterior of the category:</p>
<disp-formula id="E33">
<label>(A7)</label>
<mml:math id="M60">
<mml:mi>E</mml:mi>
<mml:mfenced separators="" open="[" close="]">
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>s</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:munderover accentunder="false" accent="false">
<mml:mrow>
<mml:mo mathsize="big">∑</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:munderover>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>s</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo>∫</mml:mo>
<mml:mi>t</mml:mi>
<mml:mspace width="1em" class="nbsp"></mml:mspace>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>s</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mi>d</mml:mi>
<mml:mi>t</mml:mi>
<mml:mo class="MathClass-punc">.</mml:mo>
</mml:math>
</disp-formula>
<p>The integral expresses the expectation of the category-dependent posterior of
<italic>T</italic>
. Thus we have</p>
<disp-formula id="E34">
<label>(A8)</label>
<mml:math id="M61">
<mml:mi>E</mml:mi>
<mml:mfenced separators="" open="[" close="]">
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>s</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:munderover accentunder="false" accent="false">
<mml:mrow>
<mml:mo mathsize="big">∑</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:munderover>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>s</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mi>E</mml:mi>
<mml:mfenced separators="" open="[" close="]">
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>s</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-punc">.</mml:mo>
</mml:math>
</disp-formula>
<p>We now express the posterior of the category, i.e., the weights in the above sum, in terms of known distributions.</p>
<disp-formula id="E35">
<label>(A9)</label>
<mml:math id="M62">
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>S</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>C</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>S</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>C</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>C</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfrac>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>C</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfrac>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:munder class="msub">
<mml:mrow>
<mml:mo mathsize="big">∑</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:munder>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfrac>
<mml:mo class="MathClass-punc">.</mml:mo>
</mml:math>
</disp-formula>
<p>We reused the factorization of
<italic>P</italic>
(
<italic>A</italic>
,
<italic>S</italic>
,
<italic>C</italic>
) derived above. In the denominator we use it to marginalize out
<italic>A</italic>
. All distributions appearing in the result are known from our model assumptions, except
<italic>P</italic>
(
<italic>S</italic>
 | 
<italic>A</italic>
). It results from integrating over
<italic>t</italic>
:</p>
<disp-formula id="E36">
<label>(A10)</label>
<mml:math id="M63">
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mo class="MathClass-op">∫</mml:mo>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mi>d</mml:mi>
<mml:mi>t</mml:mi>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mo class="MathClass-op">∫</mml:mo>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mi>d</mml:mi>
<mml:mi>t</mml:mi>
<mml:mo class="MathClass-punc">.</mml:mo>
</mml:math>
</disp-formula>
<p>Since both
<italic>P</italic>
(
<italic>S</italic>
 | 
<italic>t</italic>
) and
<italic>P</italic>
(
<italic>t</italic>
 | 
<italic>A</italic>
) are normally distributed, integrating over
<italic>t</italic>
yields the following distribution:</p>
<disp-formula id="E37">
<label>(A11)</label>
<mml:math id="M64">
<mml:mi>S</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>A</mml:mi>
<mml:mo class="MathClass-rel">∼</mml:mo>
<mml:mi>N</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-punc">.</mml:mo>
</mml:math>
</disp-formula>
<p>Remember that we assume equal variances for all categories.</p>
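The marginal in Eq. A11 can be verified by simulation; the sketch below (all parameter values are illustrative) draws the target from the category prior and the measurement from the sensory likelihood, and checks that the marginal variance is the sum of the two variances:

```python
import random

# Numerical check of the marginal S | A ~ N(mu_A, sigma_S^2 + sigma_A^2):
# sample t ~ N(mu_A, sigma_A^2), then s ~ N(t, sigma_S^2), and inspect
# the empirical mean and variance of s.
random.seed(0)
mu_A, sigma_A, sigma_S = 3.0, 2.0, 1.0
samples = []
for _ in range(200_000):
    t = random.gauss(mu_A, sigma_A)      # draw target from category prior
    s = random.gauss(t, sigma_S)         # draw measurement given target
    samples.append(s)
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
print(mean, var)                         # close to 3.0 and 5.0
```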
<p>A special case of this model arises when no cue is present. This corresponds to a case in which all cues appear with equal probability independently of the given category,
<italic>P</italic>
(
<italic>c
<sub>j</sub>
</italic>
 | 
<italic>a
<sub>i</sub>
</italic>
) = 
<italic>1/n</italic>
, and thus tell us nothing about the category. This leads to the posterior of
<italic>A</italic>
becoming independent of
<italic>C</italic>
:</p>
<disp-formula id="E38">
<label>(A12)</label>
<mml:math id="M65">
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mo>|</mml:mo>
<mml:mi>S</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>C</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>/</mml:mo>
<mml:mi>n</mml:mi>
</mml:mrow>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>A</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mo>|</mml:mo>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mstyle displaystyle="true">
<mml:munder>
<mml:mo>∑</mml:mo>
<mml:mi>k</mml:mi>
</mml:munder>
<mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>/</mml:mo>
<mml:mi>n</mml:mi>
</mml:mrow>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mo>|</mml:mo>
<mml:msub>
<mml:mi>a</mml:mi>
<mml:mi>k</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>a</mml:mi>
<mml:mi>k</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mstyle>
</mml:mrow>
</mml:mfrac>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>A</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mo>|</mml:mo>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>S</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mfrac>
<mml:mo>=</mml:mo>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mo>|</mml:mo>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>.</mml:mo>
</mml:mrow>
</mml:math>
</disp-formula>
<p>With the posterior of the category now being independent of
<italic>C</italic>
, the expectation for
<italic>T</italic>
given measurement and cue in Eq. 33 above reduces to Eq. 29 in Feldman et al. (
<xref ref-type="bibr" rid="B15">2009</xref>
). The Feldman model is thus a special case of our model.</p>
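This reduction can be checked numerically as well. The sketch below (function names and parameter values are illustrative assumptions) computes the category posterior once with a flat cue likelihood and once with the cue term removed, and shows that both give the same, cue-independent posterior:

```python
import math

def gauss(x, mu, var):
    """Gaussian density N(x; mu, var)."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def category_posterior(s, cue_lik, mus, var, priors):
    """P(a_i | s, c): cue_lik[i] = P(c | a_i); var = sigma_S^2 + sigma_A^2."""
    w = [cue_lik[i] * gauss(s, mus[i], var) * priors[i] for i in range(len(mus))]
    z = sum(w)
    return [wi / z for wi in w]

mus, var, priors = [2.0, 6.0], 5.0, [0.5, 0.5]
# Flat cue likelihood 1/n vs. no cue term at all: the constant cancels
# in the normalization, so the posterior reduces to P(a_i | s).
with_flat_cue = category_posterior(3.0, [0.5, 0.5], mus, var, priors)
no_cue = category_posterior(3.0, [1.0, 1.0], mus, var, priors)
print(with_flat_cue, no_cue)   # identical: a flat cue is uninformative
```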
<p>It remains to compute the expectations of the category-dependent posteriors,
<italic>E</italic>
[
<italic>T</italic>
 | 
<italic>s</italic>
,
<italic>a
<sub>i</sub>
</italic>
]. These posteriors result from standard Bayesian fusion of the likelihood for
<italic>S</italic>
and the prior
<italic>P</italic>
(
<italic>T</italic>
 | 
<italic>A</italic>
), as we have shown above:</p>
<disp-formula id="E39">
<label>(A13)</label>
<mml:math id="M66">
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>S</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfrac>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>T</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfrac>
<mml:mo class="MathClass-punc">.</mml:mo>
</mml:math>
</disp-formula>
<p>Since both the likelihood
<italic>P</italic>
(
<italic>S</italic>
 | 
<italic>T</italic>
) and the prior
<italic>P</italic>
(
<italic>T</italic>
 | 
<italic>A</italic>
) are normally distributed, the posterior is normally distributed with mean and variance</p>
<disp-formula id="E40">
<label>(A14)</label>
<mml:math id="M67">
<mml:msub>
<mml:mrow>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>s</mml:mi>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msub>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mspace width="1em" class="quad"></mml:mspace>
<mml:mspace width="1em" class="quad"></mml:mspace>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mo class="MathClass-punc">.</mml:mo>
</mml:math>
</disp-formula>
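As an illustrative numerical sketch (not part of the original article), the category-conditional Gaussian fusion of Eq. A14 can be written directly in code; the variable names `sigma_S2`, `sigma_A2`, and `mu_a` are our own shorthand for σ<sub>S</sub>², σ<sub>A</sub>², and the category mean:

```python
# Sketch of Eq. A14: fuse a Gaussian measurement s (variance sigma_S2)
# with a category-specific Gaussian prior (mean mu_a, variance sigma_A2).
# Standard Bayesian fusion of two Gaussians; names are illustrative.

def fuse_gaussian(s, sigma_S2, mu_a, sigma_A2):
    """Return posterior mean and variance (Eq. A14)."""
    mu_i = (s * sigma_A2 + mu_a * sigma_S2) / (sigma_S2 + sigma_A2)
    var_i = (sigma_S2 * sigma_A2) / (sigma_S2 + sigma_A2)
    return mu_i, var_i

# With equal reliabilities, the posterior mean falls midway between
# measurement and prior mean, and the variance is halved.
mu, var = fuse_gaussian(s=10.0, sigma_S2=1.0, mu_a=8.0, sigma_A2=1.0)
```

The posterior mean is a reliability-weighted average of measurement and prior mean, which is the origin of the weight w<sub>m</sub> appearing in the following equations.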
<p>That allows us to write the expectation as</p>
<disp-formula id="E41">
<label>(A15)</label>
<mml:math id="M68">
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mi>E</mml:mi>
<mml:mfenced separators="" open="[" close="]">
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>s</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:munderover accentunder="false" accent="false">
<mml:mrow>
<mml:mo mathsize="big">∑</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:munderover>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>s</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mfrac>
<mml:mrow>
<mml:mi>s</mml:mi>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msub>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:munderover accentunder="false" accent="false">
<mml:mrow>
<mml:mo mathsize="big">∑</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:munderover>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>s</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mi>s</mml:mi>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:msub>
<mml:mrow>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:munderover accentunder="false" accent="false">
<mml:mrow>
<mml:mo mathsize="big">∑</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:munderover>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>s</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>m</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
<mml:mi>s</mml:mi>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:munderover accentunder="false" accent="false">
<mml:mrow>
<mml:mo mathsize="big">∑</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:munderover>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>s</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>m</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:msub>
<mml:mrow>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>m</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
<mml:mi>s</mml:mi>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>m</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:munderover accentunder="false" accent="false">
<mml:mrow>
<mml:mo mathsize="big">∑</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:munderover>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>s</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:msub>
<mml:mrow>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-punc">.</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
<p>This can be simplified further using two additional model assumptions. First, we assume that,
<italic>a priori</italic>
, categories are uniformly distributed, that is
<italic>P</italic>
(
<italic>a
<sub>i</sub>
</italic>
) = 
<italic>1/n</italic>
. Second, we assume that the correct cue appears with some probability
<italic>P</italic>
(
<italic>c
<sub>j</sub>
</italic>
 | 
<italic>a
<sub>i</sub>
</italic>
) = 
<italic>p</italic>
<sub>C</sub>
,
<italic>j</italic>
 = 
<italic>i</italic>
, while the remaining wrong cues appear with equal probabilities
<italic>P</italic>
(
<italic>c
<sub>j</sub>
</italic>
 | 
<italic>a
<sub>i</sub>
</italic>
) = (1 − 
<italic>p</italic>
<sub>C</sub>
)/(
<italic>n</italic>
− 1). First, we rewrite the posterior of the category:</p>
<disp-formula id="E42">
<label>(A16)</label>
<mml:math id="M69">
<mml:mtable columnalign="left">
<mml:mtr>
<mml:mtd>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>a</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>|</mml:mo>
<mml:mi>s</mml:mi>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>c</mml:mi>
<mml:mi>j</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>c</mml:mi>
<mml:mi>j</mml:mi>
</mml:msub>
<mml:mo>|</mml:mo>
<mml:msub>
<mml:mi>a</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>⋅</mml:mo>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>a</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>s</mml:mi>
<mml:mo>|</mml:mo>
<mml:msub>
<mml:mi>a</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mstyle displaystyle="true">
<mml:munder>
<mml:mo>∑</mml:mo>
<mml:mi>k</mml:mi>
</mml:munder>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>c</mml:mi>
<mml:mi>j</mml:mi>
</mml:msub>
<mml:mo>|</mml:mo>
<mml:msub>
<mml:mi>a</mml:mi>
<mml:mi>k</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>s</mml:mi>
<mml:mo>|</mml:mo>
<mml:msub>
<mml:mi>a</mml:mi>
<mml:mi>k</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>a</mml:mi>
<mml:mi>k</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mstyle>
</mml:mrow>
</mml:mfrac>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mtext>     </mml:mtext>
<mml:mo>=</mml:mo>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>c</mml:mi>
<mml:mi>j</mml:mi>
</mml:msub>
<mml:mo>|</mml:mo>
<mml:msub>
<mml:mi>a</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>⋅</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>/</mml:mo>
<mml:mi>n</mml:mi>
</mml:mrow>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>s</mml:mi>
<mml:mo>|</mml:mo>
<mml:msub>
<mml:mi>a</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>p</mml:mi>
<mml:mi>C</mml:mi>
</mml:msub>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>s</mml:mi>
<mml:mo>|</mml:mo>
<mml:msub>
<mml:mi>a</mml:mi>
<mml:mi>j</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>/</mml:mo>
<mml:mi>n</mml:mi>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>−</mml:mo>
<mml:msub>
<mml:mi>p</mml:mi>
<mml:mi>C</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mo>−</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfrac>
<mml:mstyle displaystyle="true">
<mml:munder>
<mml:mo>∑</mml:mo>
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mo>≠</mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:munder>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>s</mml:mi>
<mml:mo>|</mml:mo>
<mml:msub>
<mml:mi>a</mml:mi>
<mml:mi>k</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>/</mml:mo>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:mrow>
</mml:mstyle>
</mml:mrow>
</mml:mfrac>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mtext>     </mml:mtext>
<mml:mo>=</mml:mo>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>c</mml:mi>
<mml:mi>j</mml:mi>
</mml:msub>
<mml:mo>|</mml:mo>
<mml:msub>
<mml:mi>a</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>⋅</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>s</mml:mi>
<mml:mo>|</mml:mo>
<mml:msub>
<mml:mi>a</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>p</mml:mi>
<mml:mi>C</mml:mi>
</mml:msub>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>s</mml:mi>
<mml:mo>|</mml:mo>
<mml:msub>
<mml:mi>a</mml:mi>
<mml:mi>j</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>−</mml:mo>
<mml:msub>
<mml:mi>p</mml:mi>
<mml:mi>C</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mo>−</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfrac>
<mml:mstyle displaystyle="true">
<mml:munder>
<mml:mo>∑</mml:mo>
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mo>≠</mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:munder>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>s</mml:mi>
<mml:mo>|</mml:mo>
<mml:msub>
<mml:mi>a</mml:mi>
<mml:mi>k</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mstyle>
</mml:mrow>
</mml:mfrac>
<mml:mo>=</mml:mo>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>c</mml:mi>
<mml:mi>j</mml:mi>
</mml:msub>
<mml:mo>|</mml:mo>
<mml:msub>
<mml:mi>a</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>⋅</mml:mo>
<mml:msub>
<mml:mi>α</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>s</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>.</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
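The category posterior of Eq. A16 can be sketched numerically. In this hedged example, `lik[i]` stands for P(s | a<sub>i</sub>) and `p_C` is the cue reliability; the function names are illustrative, not from the article:

```python
# Sketch of Eq. A16: posterior over categories a_i given measurement s
# and cue c_j, with a uniform category prior P(a_i) = 1/n and cue
# reliability P(c_j | a_j) = p_C (wrong cues share the rest equally).

def category_posterior(lik, j, p_C):
    n = len(lik)
    # P(c_j | a_i): p_C if the cue matches, (1 - p_C)/(n - 1) otherwise
    cue_prob = [p_C if i == j else (1.0 - p_C) / (n - 1) for i in range(n)]
    unnorm = [cue_prob[i] * lik[i] for i in range(n)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# A reliable cue (p_C near 1) concentrates the posterior on category j
# even when the sensory likelihood alone is ambiguous.
post = category_posterior(lik=[0.2, 0.5, 0.3], j=1, p_C=0.9)
```

The normalization over all categories corresponds to the sum in the denominator of Eq. A16.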
<p>This results in</p>
<disp-formula id="E43">
<label>(A17)</label>
<mml:math id="M70">
<mml:mi>E</mml:mi>
<mml:mfenced separators="" open="[" close="]">
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>s</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>m</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
<mml:mi>s</mml:mi>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>m</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:munderover accentunder="false" accent="false">
<mml:mrow>
<mml:mo mathsize="big">∑</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:munderover>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:msub>
<mml:mrow>
<mml:mi>α</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:msub>
<mml:mrow>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-punc">.</mml:mo>
</mml:math>
</disp-formula>
<p>We can then rewrite this further by again substituting for
<italic>P</italic>
(
<italic>c
<sub>j</sub>
</italic>
 | 
<italic>a
<sub>i</sub>
</italic>
) to get</p>
<disp-formula id="E44">
<label>(A18)</label>
<mml:math id="M71">
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mi>E</mml:mi>
<mml:mfenced separators="" open="[" close="]">
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>s</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>m</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
<mml:mi>s</mml:mi>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>m</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>p</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>C</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mrow>
<mml:mi>α</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mo class="MathClass-open">(</mml:mo>
<mml:mrow>
<mml:mi>s</mml:mi>
</mml:mrow>
<mml:mo class="MathClass-close">)</mml:mo>
</mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:munderover accentunder="false" accent="false">
<mml:mrow>
<mml:mo mathsize="big">∑</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo class="MathClass-rel">≠</mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:munderover>
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>p</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>C</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfrac>
<mml:msub>
<mml:mrow>
<mml:mi>α</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:msub>
<mml:mrow>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>m</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
<mml:mi>s</mml:mi>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>m</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>p</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>C</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mrow>
<mml:mi>α</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:msub>
<mml:mrow>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>p</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>C</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfrac>
<mml:munderover accentunder="false" accent="false">
<mml:mrow>
<mml:mo mathsize="big">∑</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo class="MathClass-rel">≠</mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:munderover>
<mml:msub>
<mml:mrow>
<mml:mi>α</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:msub>
<mml:mrow>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-punc">.</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
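The final expectation of the categorical model (Eqs. A15 and A17) is a measurement term plus a posterior-weighted mixture of category means. A minimal sketch, assuming the posterior weights have already been computed as in Eq. A16 (names are illustrative):

```python
# Sketch of Eq. A15 (final form): E[T | s, c_j] =
#   w_m * s + (1 - w_m) * sum_i P(a_i | s, c_j) * mu_{a_i},
# where w_m = sigma_A**2 / (sigma_S**2 + sigma_A**2).

def expected_target(s, w_m, post, mu_a):
    """Combine measurement s with the posterior-weighted category means."""
    prior_term = sum(p * m for p, m in zip(post, mu_a))
    return w_m * s + (1.0 - w_m) * prior_term

# Example: two categories with posterior weights 0.25 / 0.75 pull the
# estimate toward the more probable category mean.
est = expected_target(s=10.0, w_m=0.6, post=[0.25, 0.75], mu_a=[6.0, 12.0])
```

Note that the weight w<sub>m</sub> is shared across categories, since all category priors are assumed to have the same variance σ<sub>A</sub>².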
</sec>
<sec>
<title>Cue-combination model</title>
<p>Our assumptions about the (conditional) distributions of target distance
<italic>T</italic>
, stimulus
<italic>S</italic>
and mapped cue
<italic>C</italic>
<sub>mp</sub>
(the cue signal) lead to the following factorization of the model’s full joint probability:</p>
<disp-formula id="E45">
<label>(A19)</label>
<mml:math id="M72">
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>S</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>mp</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>mp</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-punc">.</mml:mo>
</mml:math>
</disp-formula>
<p>From this, the posterior follows immediately:</p>
<disp-formula id="E46">
<label>(A20)</label>
<mml:math id="M73">
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>S</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>mp</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>S</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>mp</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>mp</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfrac>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mi>α</mml:mi>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>mp</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-rel">∝</mml:mo>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>mp</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mi>P</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-punc">.</mml:mo>
</mml:math>
</disp-formula>
<p>We can see that the posterior density function is, apart from the proportionality factor α = 1/
<italic>P</italic>
(
<italic>S</italic>
,
<italic>C</italic>
<sub>mp</sub>
), a product of Gaussians. First, we combine the two likelihood density functions. Following the product rule for Gaussians, this product yields a Gaussian with the following parameters:</p>
<disp-formula id="E47">
<label>(A21)</label>
<mml:math id="M74">
<mml:msub>
<mml:mrow>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mi>S</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>s</mml:mi>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:mi>c</mml:mi>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mspace width="1em" class="quad"></mml:mspace>
<mml:mspace width="1em" class="quad"></mml:mspace>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mo class="MathClass-punc">.</mml:mo>
</mml:math>
</disp-formula>
<p>In the indices we write
<italic>C</italic>
instead of
<italic>C</italic>
<sub>mp</sub>
for brevity and better readability, e.g., μ
<italic>
<sub>CS</sub>
</italic>
instead of
<inline-formula>
<mml:math id="M79">
<mml:mrow>
<mml:msub>
<mml:mi>μ</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mtext>mp</mml:mtext>
</mml:mrow>
</mml:msub>
<mml:mi>S</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>
. A problem is that the (unknown) mean of the symbolic cue likelihood, μ
<sub>C</sub>
(
<italic>T</italic>
), depends non-linearly on
<italic>T</italic>
. However, we may assume that this dependence is approximately linear. Recall that our model receives a discrete cue as input, which steers a calibration process (implemented by a Kalman filter), whose output in each trial is interpreted as an additional measurement
<italic>c</italic>
<sub>mp</sub>
of the mapped cue. This output of the calibration process closely follows the stimuli from either the long or the short range, depending on the discrete cue. The ranges themselves do not change and therefore a normal distribution with fixed μ
<sub>C</sub>
(
<italic>T</italic>
) (after a short calibration period) can approximate the dispersion of the
<italic>c</italic>
<sub>mp</sub>
values.</p>
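<p>The calibration process described above can be sketched as a scalar Kalman filter that tracks the mean of the stimulus range signalled by the discrete cue. The noise parameters and range mean below are illustrative assumptions, not the values fitted in the model:</p>

```python
import random

def kalman_1d(measurements, q=0.01, r=1.0, x0=0.0, p0=10.0):
    """Scalar Kalman filter tracking a slowly drifting mean.
    q: process-noise variance, r: measurement-noise variance (assumed values)."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                # predict: uncertainty grows by process noise
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)      # update toward the measurement z
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

# noisy samples from a hypothetical "short" stimulus range centered at 5.0
random.seed(0)
short_range = [random.gauss(5.0, 1.0) for _ in range(200)]
c_mp = kalman_1d(short_range)
```

<p>After a short calibration period the filter output settles near the range mean, which is what motivates approximating the dispersion of the <italic>c</italic><sub>mp</sub> values by a normal distribution with fixed mean.</p>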
<p>The product of the combined likelihood density function with the density of the prior
<italic>P</italic>
(
<italic>T</italic>
) is again a product of Gaussians, which yields a Gaussian. The density of the posterior is thus Gaussian with parameters</p>
<disp-formula id="E48">
<label>(A22)</label>
<mml:math id="M75">
<mml:msub>
<mml:mrow>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>posterior</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mi>S</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mspace width="1em" class="quad"></mml:mspace>
<mml:mspace width="1em" class="quad"></mml:mspace>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>posterior</mml:mtext>
</mml:mstyle>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mo class="MathClass-punc">.</mml:mo>
</mml:math>
</disp-formula>
<p>According to standard probability theory, the expectation of the posterior is then given as</p>
<disp-formula id="E49">
<label>(A23)</label>
<mml:math id="M76">
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mi>E</mml:mi>
<mml:mfenced separators="" open="[" close="]">
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>s</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>mp</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>posterior</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:msub>
<mml:mrow>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:msub>
<mml:mrow>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mi>S</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>fu</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:msub>
<mml:mrow>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>fu</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mrow>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mi>S</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>fu</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:msub>
<mml:mrow>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>fu</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
<mml:mfrac>
<mml:mrow>
<mml:mi>s</mml:mi>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>mp</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>fu</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:msub>
<mml:mrow>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>fu</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mi>s</mml:mi>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:msub>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>mp</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>fu</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:msub>
<mml:mrow>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>fu</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>m</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
<mml:mi>s</mml:mi>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>m</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:msub>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>mp</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>fu</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:msub>
<mml:mrow>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>fu</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>m</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:msub>
<mml:mrow>
<mml:mi>c</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>mp</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>m</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
<p>with the weights</p>
<disp-formula id="E50">
<label>(A24)</label>
<mml:math id="M77">
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>fu</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mspace width="1em" class="quad"></mml:mspace>
<mml:mspace width="1em" class="quad"></mml:mspace>
<mml:mn>1</mml:mn>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>fu</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>m</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mspace width="1em" class="quad"></mml:mspace>
<mml:mspace width="1em" class="quad"></mml:mspace>
<mml:mn>1</mml:mn>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>m</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
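<p>As a numeric sanity check of Eqs. A21–A24, the sketch below uses arbitrary illustrative means and variances (not values fitted to the data) to combine the two likelihoods by the product rule for Gaussians and then verifies that the weighted form with <italic>w</italic><sub>fu</sub> and <italic>w</italic><sub>m</sub> gives the same posterior mean:</p>

```python
def fuse(mu_a, var_a, mu_b, var_b):
    """Product of two Gaussian densities is Gaussian (precision-weighted mean)."""
    var = var_a * var_b / (var_a + var_b)
    mu = (mu_a * var_b + mu_b * var_a) / (var_a + var_b)
    return mu, var

# illustrative values (assumptions): sensory measurement s, mapped cue c_mp, prior on T
s, var_s = 10.0, 4.0
c_mp, var_c = 12.0, 9.0
mu_t, var_t = 8.0, 16.0

# Eq. A21: combine the two likelihood densities
mu_cs, var_cs = fuse(s, var_s, c_mp, var_c)

# Eq. A22: combine the result with the prior
mu_post, var_post = fuse(mu_t, var_t, mu_cs, var_cs)

# Eqs. A23/A24: the same posterior mean written with the weights w_fu and w_m
w_fu = var_t / (var_cs + var_t)
w_m = var_c / (var_s + var_c)
mu_weighted = (1.0 - w_fu) * mu_t + w_fu * (w_m * s + (1.0 - w_m) * c_mp)
```

<p>Both routes give the identical posterior mean, and the posterior variance is smaller than that of either the combined likelihood or the prior alone, reflecting the reliability weighting: the less reliable source contributes less to the estimate.</p>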
</sec>
</app>
</app-group>
<ref-list>
<title>References</title>
<ref id="B1">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Adams</surname>
<given-names>W. J.</given-names>
</name>
<name>
<surname>Graf</surname>
<given-names>E. W.</given-names>
</name>
<name>
<surname>Ernst</surname>
<given-names>M. O.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Experience can change the ‘light-from-above’ prior</article-title>
.
<source>Nat. Neurosci.</source>
<volume>7</volume>
,
<fpage>1057</fpage>
<lpage>1058</lpage>
<pub-id pub-id-type="doi">10.1038/nn1312</pub-id>
<pub-id pub-id-type="pmid">15361877</pub-id>
</mixed-citation>
</ref>
<ref id="B2">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Angelaki</surname>
<given-names>D. E.</given-names>
</name>
<name>
<surname>Gu</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>DeAngelis</surname>
<given-names>G. C.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Multisensory integration: psychophysics, neurophysiology, and computation</article-title>
.
<source>Curr. Opin. Neurobiol.</source>
<volume>19</volume>
,
<fpage>452</fpage>
<lpage>458</lpage>
<pub-id pub-id-type="doi">10.1016/j.conb.2009.06.008</pub-id>
<pub-id pub-id-type="pmid">19616425</pub-id>
</mixed-citation>
</ref>
<ref id="B3">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Battaglia</surname>
<given-names>P. W.</given-names>
</name>
<name>
<surname>Jacobs</surname>
<given-names>R. A.</given-names>
</name>
<name>
<surname>Aslin</surname>
<given-names>R. N.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Bayesian integration of visual and auditory signals for spatial localization</article-title>
.
<source>J. Opt. Soc. Am. A. Opt. Image Sci. Vis.</source>
<volume>20</volume>
,
<fpage>1391</fpage>
<lpage>1397</lpage>
<pub-id pub-id-type="doi">10.1364/JOSAA.20.001391</pub-id>
<pub-id pub-id-type="pmid">12868643</pub-id>
</mixed-citation>
</ref>
<ref id="B4">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Berniker</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Voss</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Körding</surname>
<given-names>K.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Learning priors of Bayesian computations in the nervous system</article-title>
.
<source>PLoS ONE</source>
<volume>5</volume>
,
<fpage>e12686</fpage>
<pub-id pub-id-type="doi">10.1371/journal.pone.0012686</pub-id>
<pub-id pub-id-type="pmid">20844766</pub-id>
</mixed-citation>
</ref>
<ref id="B5">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Burge</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Girshick</surname>
<given-names>A. R.</given-names>
</name>
<name>
<surname>Banks</surname>
<given-names>M. S.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Visual-haptic adaptation is determined by relative reliability</article-title>
.
<source>J. Neurosci.</source>
<volume>30</volume>
,
<fpage>7714</fpage>
<lpage>7721</lpage>
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.6427-09.2010</pub-id>
<pub-id pub-id-type="pmid">20519546</pub-id>
</mixed-citation>
</ref>
<ref id="B6">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cheng</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Spetch</surname>
<given-names>M. L.</given-names>
</name>
<name>
<surname>Hoan</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Categories and range effects in human spatial memory</article-title>
.
<source>Front. Psychol.</source>
<volume>1</volume>
:
<fpage>231</fpage>
<pub-id pub-id-type="doi">10.3389/fpsyg.2010.00231</pub-id>
<pub-id pub-id-type="pmid">21833286</pub-id>
</mixed-citation>
</ref>
<ref id="B7">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Davidoff</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Davies</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Roberson</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>1999</year>
).
<article-title>Colour categories in a stone-age tribe</article-title>
.
<source>Nature</source>
<volume>398</volume>
,
<fpage>203</fpage>
<lpage>204</lpage>
<pub-id pub-id-type="doi">10.1038/18335</pub-id>
<pub-id pub-id-type="pmid">10094043</pub-id>
</mixed-citation>
</ref>
<ref id="B8">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dehaene</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>The neural basis of the Weber-Fechner law: a logarithmic mental number line</article-title>
.
<source>Trends Cogn Sci. (Regul. Ed.)</source>
<volume>7</volume>
,
<fpage>145</fpage>
<lpage>147</lpage>
<pub-id pub-id-type="doi">10.1016/S1364-6613(03)00055-X</pub-id>
<pub-id pub-id-type="pmid">12691758</pub-id>
</mixed-citation>
</ref>
<ref id="B9">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Durgin</surname>
<given-names>F. H.</given-names>
</name>
<name>
<surname>Akagi</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Gallistel</surname>
<given-names>C. R.</given-names>
</name>
<name>
<surname>Haiken</surname>
<given-names>W.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>The precision of locomotor odometry in humans</article-title>
.
<source>Exp. Brain Res.</source>
<volume>193</volume>
,
<fpage>429</fpage>
<lpage>436</lpage>
<pub-id pub-id-type="doi">10.1007/s00221-008-1640-1</pub-id>
<pub-id pub-id-type="pmid">19030852</pub-id>
</mixed-citation>
</ref>
<ref id="B10">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ernst</surname>
<given-names>M. O.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Learning to integrate arbitrary signals from vision and touch</article-title>
.
<source>J. Vis.</source>
<volume>7</volume>
,
<fpage>1</fpage>
<lpage>14</lpage>
<pub-id pub-id-type="doi">10.1167/7.12.1</pub-id>
<pub-id pub-id-type="pmid">18217847</pub-id>
</mixed-citation>
</ref>
<ref id="B11">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ernst</surname>
<given-names>M. O.</given-names>
</name>
<name>
<surname>Banks</surname>
<given-names>M. S.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>Humans integrate visual and haptic information in a statistically optimal fashion</article-title>
.
<source>Nature</source>
<volume>415</volume>
,
<fpage>429</fpage>
<lpage>433</lpage>
<pub-id pub-id-type="doi">10.1038/415429a</pub-id>
<pub-id pub-id-type="pmid">11807554</pub-id>
</mixed-citation>
</ref>
<ref id="B12">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ernst</surname>
<given-names>M. O.</given-names>
</name>
<name>
<surname>Bülthoff</surname>
<given-names>H. H.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Merging the senses into a robust percept</article-title>
.
<source>Trends Cogn. Sci. (Regul. Ed.)</source>
<volume>8</volume>
,
<fpage>162</fpage>
<lpage>169</lpage>
<pub-id pub-id-type="doi">10.1016/j.tics.2004.02.002</pub-id>
<pub-id pub-id-type="pmid">15050512</pub-id>
</mixed-citation>
</ref>
<ref id="B13">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Etcoff</surname>
<given-names>N. L.</given-names>
</name>
<name>
<surname>Magee</surname>
<given-names>J. J.</given-names>
</name>
</person-group>
(
<year>1992</year>
).
<article-title>Categorical perception of facial expressions</article-title>
.
<source>Cognition</source>
<volume>44</volume>
,
<fpage>227</fpage>
<lpage>240</lpage>
<pub-id pub-id-type="doi">10.1016/0010-0277(92)90002-Y</pub-id>
<pub-id pub-id-type="pmid">1424493</pub-id>
</mixed-citation>
</ref>
<ref id="B14">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Fechner</surname>
<given-names>G. T.</given-names>
</name>
</person-group>
(
<year>1860</year>
).
<source>Elemente der Psychophysik</source>
.
<publisher-loc>Leipzig</publisher-loc>
:
<publisher-name>Breitkopf and Härtel</publisher-name>
</mixed-citation>
</ref>
<ref id="B15">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Feldman</surname>
<given-names>N. H.</given-names>
</name>
<name>
<surname>Griffiths</surname>
<given-names>T. L.</given-names>
</name>
<name>
<surname>Morgan</surname>
<given-names>J. L.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>The influence of categories on perception: explaining the perceptual magnet effect as optimal statistical inference</article-title>
.
<source>Psychol. Rev.</source>
<volume>116</volume>
,
<fpage>752</fpage>
<lpage>782</lpage>
<pub-id pub-id-type="doi">10.1037/a0017196</pub-id>
<pub-id pub-id-type="pmid">19839683</pub-id>
</mixed-citation>
</ref>
<ref id="B16">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hollingworth</surname>
<given-names>H. L.</given-names>
</name>
</person-group>
(
<year>1910</year>
).
<article-title>The central tendency of judgment</article-title>
.
<source>J. Philos. Psychol. Sci. Methods</source>
<volume>7</volume>
,
<fpage>461</fpage>
<lpage>469</lpage>
</mixed-citation>
</ref>
<ref id="B17">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Huttenlocher</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Hedges</surname>
<given-names>L. V.</given-names>
</name>
<name>
<surname>Duncan</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>1991</year>
).
<article-title>Categories and particulars: prototype effects in estimating spatial location</article-title>
.
<source>Psychol. Rev.</source>
<volume>98</volume>
,
<fpage>352</fpage>
<lpage>376</lpage>
<pub-id pub-id-type="doi">10.1037/0033-295X.98.3.352</pub-id>
<pub-id pub-id-type="pmid">1891523</pub-id>
</mixed-citation>
</ref>
<ref id="B18">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jacobs</surname>
<given-names>R. A.</given-names>
</name>
</person-group>
(
<year>1999</year>
).
<article-title>Optimal integration of texture and motion cues to depth</article-title>
.
<source>Vision Res.</source>
<volume>39</volume>
,
<fpage>3621</fpage>
<lpage>3629</lpage>
<pub-id pub-id-type="doi">10.1016/S0042-6989(99)00088-7</pub-id>
<pub-id pub-id-type="pmid">10746132</pub-id>
</mixed-citation>
</ref>
<ref id="B19">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Johnson</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Vickers</surname>
<given-names>Z.</given-names>
</name>
</person-group>
(
<year>1987</year>
).
<article-title>Avoiding the centering bias or range effect when determining an optimum level of sweetness in lemonade</article-title>
.
<source>J. Sens. Stud.</source>
<volume>2</volume>
,
<fpage>283</fpage>
<lpage>292</lpage>
<pub-id pub-id-type="doi">10.1111/j.1745-459X.1987.tb00423.x</pub-id>
</mixed-citation>
</ref>
<ref id="B20">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jürgens</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Becker</surname>
<given-names>W.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Perception of angular displacement without landmarks: evidence for Bayesian fusion of vestibular, optokinetic, podokinesthetic, and cognitive information</article-title>
.
<source>Exp. Brain Res.</source>
<volume>174</volume>
,
<fpage>528</fpage>
<lpage>543</lpage>
<pub-id pub-id-type="doi">10.1007/s00221-006-0486-7</pub-id>
<pub-id pub-id-type="pmid">16832684</pub-id>
</mixed-citation>
</ref>
<ref id="B21">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Körding</surname>
<given-names>K. P.</given-names>
</name>
<name>
<surname>Beierholm</surname>
<given-names>U.</given-names>
</name>
<name>
<surname>Ma</surname>
<given-names>W. J.</given-names>
</name>
<name>
<surname>Quartz</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Tenenbaum</surname>
<given-names>J. B.</given-names>
</name>
<name>
<surname>Shams</surname>
<given-names>L.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Causal inference in multisensory perception</article-title>
.
<source>PLoS ONE</source>
<volume>2</volume>
,
<fpage>e943</fpage>
<pub-id pub-id-type="doi">10.1371/journal.pone.0000943</pub-id>
<pub-id pub-id-type="pmid">17895984</pub-id>
</mixed-citation>
</ref>
<ref id="B22">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Körding</surname>
<given-names>K. P.</given-names>
</name>
<name>
<surname>Wolpert</surname>
<given-names>D. M.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>The loss function of sensorimotor learning</article-title>
.
<source>Proc. Natl. Acad. Sci. U.S.A.</source>
<volume>101</volume>
,
<fpage>9839</fpage>
<lpage>9842</lpage>
<pub-id pub-id-type="doi">10.1073/pnas.0308394101</pub-id>
<pub-id pub-id-type="pmid">15210973</pub-id>
</mixed-citation>
</ref>
<ref id="B23">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Langer</surname>
<given-names>M. S.</given-names>
</name>
<name>
<surname>Bülthoff</surname>
<given-names>H. H.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>A prior for global convexity in local shape-from-shading</article-title>
.
<source>Perception</source>
<volume>30</volume>
,
<fpage>403</fpage>
<lpage>410</lpage>
<pub-id pub-id-type="doi">10.1068/p3178</pub-id>
<pub-id pub-id-type="pmid">11383189</pub-id>
</mixed-citation>
</ref>
<ref id="B24">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Liberman</surname>
<given-names>A. M.</given-names>
</name>
<name>
<surname>Harris</surname>
<given-names>K. S.</given-names>
</name>
<name>
<surname>Hoffman</surname>
<given-names>H. S.</given-names>
</name>
<name>
<surname>Griffith</surname>
<given-names>B. C.</given-names>
</name>
</person-group>
(
<year>1957</year>
).
<article-title>The discrimination of speech sounds within and across phoneme boundaries</article-title>
.
<source>J. Exp. Psychol.</source>
<volume>54</volume>
,
<fpage>358</fpage>
<lpage>368</lpage>
<pub-id pub-id-type="doi">10.1037/h0044417</pub-id>
<pub-id pub-id-type="pmid">13481283</pub-id>
</mixed-citation>
</ref>
<ref id="B25">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lucas</surname>
<given-names>C. G.</given-names>
</name>
<name>
<surname>Griffiths</surname>
<given-names>T. L.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Learning the form of causal relationships using hierarchical Bayesian models</article-title>
.
<source>Cogn. Sci.</source>
<volume>34</volume>
,
<fpage>113</fpage>
<lpage>147</lpage>
<pub-id pub-id-type="pmid">21564208</pub-id>
</mixed-citation>
</ref>
<ref id="B26">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Müller</surname>
<given-names>H. J.</given-names>
</name>
<name>
<surname>Reimann</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Krummenacher</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Visual search for singleton feature targets across dimensions: stimulus- and expectancy-driven effects in dimensional weighting</article-title>
.
<source>J. Exp. Psychol. Hum. Percept. Perform.</source>
<volume>29</volume>
,
<fpage>1021</fpage>
<lpage>1035</lpage>
<pub-id pub-id-type="doi">10.1037/0096-1523.29.5.1021</pub-id>
<pub-id pub-id-type="pmid">14585020</pub-id>
</mixed-citation>
</ref>
<ref id="B27">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Petzschner</surname>
<given-names>F. H.</given-names>
</name>
<name>
<surname>Glasauer</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Iterative Bayesian estimation as an explanation for range and regression effects: a study on human path integration</article-title>
.
<source>J. Neurosci.</source>
<volume>31</volume>
,
<fpage>17220</fpage>
<lpage>17229</lpage>
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.2028-11.2011</pub-id>
<pub-id pub-id-type="pmid">22114288</pub-id>
</mixed-citation>
</ref>
<ref id="B28">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Stevens</surname>
<given-names>S. S.</given-names>
</name>
</person-group>
(
<year>1961</year>
).
<article-title>To honor Fechner and repeal his law: a power function, not a log function, describes the operating characteristic of a sensory system</article-title>
.
<source>Science</source>
<volume>133</volume>
,
<fpage>80</fpage>
<lpage>86</lpage>
<pub-id pub-id-type="doi">10.1126/science.133.3446.80</pub-id>
<pub-id pub-id-type="pmid">17769332</pub-id>
</mixed-citation>
</ref>
<ref id="B29">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Stocker</surname>
<given-names>A. A.</given-names>
</name>
<name>
<surname>Simoncelli</surname>
<given-names>E. P.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Noise characteristics and prior expectations in human visual speed perception</article-title>
.
<source>Nat. Neurosci.</source>
<volume>9</volume>
,
<fpage>578</fpage>
<lpage>585</lpage>
<pub-id pub-id-type="doi">10.1038/nn1669</pub-id>
<pub-id pub-id-type="pmid">16547513</pub-id>
</mixed-citation>
</ref>
<ref id="B30">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Stone</surname>
<given-names>J. V.</given-names>
</name>
<name>
<surname>Kerrigan</surname>
<given-names>I. S.</given-names>
</name>
<name>
<surname>Porrill</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Where is the light? Bayesian perceptual priors for lighting direction</article-title>
.
<source>Proc. Biol. Sci.</source>
<volume>276</volume>
,
<fpage>1797</fpage>
<lpage>1804</lpage>
<pub-id pub-id-type="doi">10.1098/rspb.2008.1635</pub-id>
<pub-id pub-id-type="pmid">19324801</pub-id>
</mixed-citation>
</ref>
<ref id="B31">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Toscano</surname>
<given-names>J. C.</given-names>
</name>
<name>
<surname>McMurray</surname>
<given-names>B.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Cue integration with categories: weighting acoustic cues in speech using unsupervised learning and distributional statistics</article-title>
.
<source>Cogn. Sci.</source>
<volume>34</volume>
,
<fpage>434</fpage>
<lpage>464</lpage>
<pub-id pub-id-type="doi">10.1111/j.1551-6709.2009.01077.x</pub-id>
<pub-id pub-id-type="pmid">21339861</pub-id>
</mixed-citation>
</ref>
<ref id="B32">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Verstynen</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Sabes</surname>
<given-names>P. N.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>How each movement changes the next: an experimental and theoretical study of fast adaptive priors in reaching</article-title>
.
<source>J. Neurosci.</source>
<volume>31</volume>
,
<fpage>10050</fpage>
<lpage>10059</lpage>
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.6525-10.2011</pub-id>
<pub-id pub-id-type="pmid">21734297</pub-id>
</mixed-citation>
</ref>
<ref id="B33">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Vincent</surname>
<given-names>B.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Covert visual search: prior beliefs are optimally combined with sensory evidence</article-title>
.
<source>J. Vis.</source>
<volume>11</volume>
,
<fpage>25</fpage>
<pub-id pub-id-type="doi">10.1167/11.13.25</pub-id>
<pub-id pub-id-type="pmid">22131446</pub-id>
</mixed-citation>
</ref>
<ref id="B34">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>von Hopffgarten</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Bremmer</surname>
<given-names>F.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Self-motion reproduction can be affected by associated auditory cues</article-title>
.
<source>Seeing Perceiving</source>
<volume>24</volume>
,
<fpage>203</fpage>
<lpage>222</lpage>
<pub-id pub-id-type="doi">10.1163/187847511X571005</pub-id>
<pub-id pub-id-type="pmid">21864463</pub-id>
</mixed-citation>
</ref>
<ref id="B35">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zaidel</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Turner</surname>
<given-names>A. H.</given-names>
</name>
<name>
<surname>Angelaki</surname>
<given-names>D. E.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Multisensory calibration is independent of cue reliability</article-title>
.
<source>J. Neurosci.</source>
<volume>31</volume>
,
<fpage>13949</fpage>
<lpage>13962</lpage>
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.2732-11.2011</pub-id>
<pub-id pub-id-type="pmid">21957256</pub-id>
</mixed-citation>
</ref>
</ref-list>
</back>
</pmc>
<affiliations>
<list>
<country>
<li>Allemagne</li>
</country>
</list>
<tree>
<country name="Allemagne">
<noRegion>
<name sortKey="Petzschner, Frederike H" sort="Petzschner, Frederike H" uniqKey="Petzschner F" first="Frederike H." last="Petzschner">Frederike H. Petzschner</name>
</noRegion>
<name sortKey="Glasauer, Stefan" sort="Glasauer, Stefan" uniqKey="Glasauer S" first="Stefan" last="Glasauer">Stefan Glasauer</name>
<name sortKey="Maier, Paul" sort="Maier, Paul" uniqKey="Maier P" first="Paul" last="Maier">Paul Maier</name>
</country>
</tree>
</affiliations>
</record>

To work with this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Pmc/Checkpoint
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 001815 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd -nk 001815 | SxmlIndent | more

To link to this page within the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Pmc
   |étape=   Checkpoint
   |type=    RBID
   |clé=     PMC:3417299
   |texte=   Combining Symbolic Cues with Sensory Input and Prior Experience in an Iterative Bayesian Framework
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/RBID.i   -Sk "pubmed:22905024" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024