Exploration server on haptic devices

Warning: this site is under development!
Warning: this site was generated automatically from raw corpora.
The information has therefore not been validated.

How Haptic Size Sensations Improve Distance Perception

Internal identifier: 002179 (Pmc/Curation); previous: 002178; next: 002180

How Haptic Size Sensations Improve Distance Perception

Authors: Peter W. Battaglia [United States]; Daniel Kersten [United States]; Paul R. Schrater [United States]

Source:

RBID : PMC:3127804

Abstract

Determining distances to objects is one of the most ubiquitous perceptual tasks in everyday life. Nevertheless, it is challenging because the information from a single image confounds object size and distance. Though our brains frequently judge distances accurately, the underlying computations employed by the brain are not well understood. Our work illuminates these computations by formulating a family of probabilistic models that encompass a variety of distinct hypotheses about distance and size perception. We compare these models' predictions to a set of human distance judgments in an interception experiment and use Bayesian analysis tools to quantitatively select the best hypothesis on the basis of its explanatory power and robustness over experimental data. The central question is whether, and how, human distance perception incorporates size cues to improve accuracy. Our conclusions are: 1) humans incorporate haptic object size sensations for distance perception, 2) the incorporation of haptic sensations is suboptimal given their reliability, 3) humans use environmentally accurate size and distance priors, 4) distance judgments are produced by perceptual “posterior sampling”. In addition, we compared our model's estimated sensory and motor noise parameters with previously reported measurements in the perceptual literature and found good correspondence between them. Taken together, these results represent a major step forward in establishing the computational underpinnings of human distance perception and the role of size information.
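The size/distance ambiguity the abstract describes, and how a haptic size cue resolves it, can be sketched numerically. The following is a toy rejection-sampling illustration with made-up priors and noise levels, not the paper's fitted model: candidate (size, distance) pairs are kept if they are consistent with the observed visual angle, and, in the "haptic" condition, also with a noisy touched-size measurement.

```python
import math
import random

random.seed(0)

def visual_angle(size, distance):
    """Angle (rad) subtended by a ball of the given diameter at the given
    distance: alpha = 2*atan(S / 2D). Doubling S and D leaves alpha unchanged,
    which is the size/distance confound."""
    return 2.0 * math.atan(size / (2.0 * distance))

def sample_distance_posterior(alpha_obs, haptic_size=None, n=50_000,
                              angle_sd=0.002, haptic_sd=0.005):
    """Crude posterior sampling over distance: draw (S, D) from broad uniform
    priors and keep samples consistent with the observations (all parameter
    values here are illustrative assumptions)."""
    kept = []
    for _ in range(n):
        s = random.uniform(0.01, 0.2)   # candidate diameter (m)
        d = random.uniform(0.2, 2.0)    # candidate distance (m)
        if abs(visual_angle(s, d) - alpha_obs) > 2 * angle_sd:
            continue
        if haptic_size is not None and abs(s - haptic_size) > 2 * haptic_sd:
            continue
        kept.append(d)
    return kept

def spread(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

true_size, true_dist = 0.05, 0.5
alpha = visual_angle(true_size, true_dist)

no_haptic = sample_distance_posterior(alpha)
with_haptic = sample_distance_posterior(alpha, haptic_size=true_size)

# The haptic size cue sharply narrows the spread of plausible distances.
print(spread(no_haptic) > spread(with_haptic))  # expected: True
```

Without the haptic cue, the accepted distances span nearly the whole prior range (any distance works for some size); with it, they cluster near the true distance, which is the disambiguation the paper studies.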


Url:
DOI: 10.1371/journal.pcbi.1002080
PubMed: 21738457
PubMed Central: 3127804

Links to previous steps (curation, corpus...)


Links to Exploration step

PMC:3127804

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">How Haptic Size Sensations Improve Distance Perception</title>
<author>
<name sortKey="Battaglia, Peter W" sort="Battaglia, Peter W" uniqKey="Battaglia P" first="Peter W." last="Battaglia">Peter W. Battaglia</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<addr-line>BCS and CSAIL, MIT, Cambridge, Massachusetts, United States of America</addr-line>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>BCS and CSAIL, MIT, Cambridge, Massachusetts</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Kersten, Daniel" sort="Kersten, Daniel" uniqKey="Kersten D" first="Daniel" last="Kersten">Daniel Kersten</name>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<addr-line>Psychology, University of Minnesota, Minneapolis, Minnesota, United States of America</addr-line>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Psychology, University of Minnesota, Minneapolis, Minnesota</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Schrater, Paul R" sort="Schrater, Paul R" uniqKey="Schrater P" first="Paul R." last="Schrater">Paul R. Schrater</name>
<affiliation wicri:level="1">
<nlm:aff id="aff3">
<addr-line>Psychology and Computer Science, University of Minnesota, Minneapolis, Minnesota, United States of America</addr-line>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Psychology and Computer Science, University of Minnesota, Minneapolis, Minnesota</wicri:regionArea>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">21738457</idno>
<idno type="pmc">3127804</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3127804</idno>
<idno type="RBID">PMC:3127804</idno>
<idno type="doi">10.1371/journal.pcbi.1002080</idno>
<date when="2011">2011</date>
<idno type="wicri:Area/Pmc/Corpus">002179</idno>
<idno type="wicri:Area/Pmc/Curation">002179</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">How Haptic Size Sensations Improve Distance Perception</title>
<author>
<name sortKey="Battaglia, Peter W" sort="Battaglia, Peter W" uniqKey="Battaglia P" first="Peter W." last="Battaglia">Peter W. Battaglia</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<addr-line>BCS and CSAIL, MIT, Cambridge, Massachusetts, United States of America</addr-line>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>BCS and CSAIL, MIT, Cambridge, Massachusetts</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Kersten, Daniel" sort="Kersten, Daniel" uniqKey="Kersten D" first="Daniel" last="Kersten">Daniel Kersten</name>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<addr-line>Psychology, University of Minnesota, Minneapolis, Minnesota, United States of America</addr-line>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Psychology, University of Minnesota, Minneapolis, Minnesota</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Schrater, Paul R" sort="Schrater, Paul R" uniqKey="Schrater P" first="Paul R." last="Schrater">Paul R. Schrater</name>
<affiliation wicri:level="1">
<nlm:aff id="aff3">
<addr-line>Psychology and Computer Science, University of Minnesota, Minneapolis, Minnesota, United States of America</addr-line>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Psychology and Computer Science, University of Minnesota, Minneapolis, Minnesota</wicri:regionArea>
</affiliation>
</author>
</analytic>
<series>
<title level="j">PLoS Computational Biology</title>
<idno type="ISSN">1553-734X</idno>
<idno type="eISSN">1553-7358</idno>
<imprint>
<date when="2011">2011</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>Determining distances to objects is one of the most ubiquitous perceptual tasks in everyday life. Nevertheless, it is challenging because the information from a single image confounds object size and distance. Though our brains frequently judge distances accurately, the underlying computations employed by the brain are not well understood. Our work illuminates these computations by formulating a family of probabilistic models that encompass a variety of distinct hypotheses about distance and size perception. We compare these models' predictions to a set of human distance judgments in an interception experiment and use Bayesian analysis tools to quantitatively select the best hypothesis on the basis of its explanatory power and robustness over experimental data. The central question is whether, and how, human distance perception incorporates size cues to improve accuracy. Our conclusions are: 1) humans incorporate haptic object size sensations for distance perception, 2) the incorporation of haptic sensations is suboptimal given their reliability, 3) humans use environmentally accurate size and distance priors, 4) distance judgments are produced by perceptual “posterior sampling”. In addition, we compared our model's estimated sensory and motor noise parameters with previously reported measurements in the perceptual literature and found good correspondence between them. Taken together, these results represent a major step forward in establishing the computational underpinnings of human distance perception and the role of size information.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Ittelson, W" uniqKey="Ittelson W">W Ittelson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Yonas, A" uniqKey="Yonas A">A Yonas</name>
</author>
<author>
<name sortKey="Pettersen, L" uniqKey="Pettersen L">L Pettersen</name>
</author>
<author>
<name sortKey="Granrud, C" uniqKey="Granrud C">C Granrud</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mershon, D" uniqKey="Mershon D">D Mershon</name>
</author>
<author>
<name sortKey="Gogel, W" uniqKey="Gogel W">W Gogel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Battaglia, P" uniqKey="Battaglia P">P Battaglia</name>
</author>
<author>
<name sortKey="Schrater, P" uniqKey="Schrater P">P Schrater</name>
</author>
<author>
<name sortKey="Kersten, D" uniqKey="Kersten D">D Kersten</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Knill, D" uniqKey="Knill D">D Knill</name>
</author>
<author>
<name sortKey="Richards, W" uniqKey="Richards W">W Richards</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kersten, D" uniqKey="Kersten D">D Kersten</name>
</author>
<author>
<name sortKey="Mamassian, P" uniqKey="Mamassian P">P Mamassian</name>
</author>
<author>
<name sortKey="Yuille, A" uniqKey="Yuille A">A Yuille</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Knill, D" uniqKey="Knill D">D Knill</name>
</author>
<author>
<name sortKey="Pouget, A" uniqKey="Pouget A">A Pouget</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Koerding, K" uniqKey="Koerding K">K Koerding</name>
</author>
<author>
<name sortKey="Wolpert, D" uniqKey="Wolpert D">D Wolpert</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pearl, J" uniqKey="Pearl J">J Pearl</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Battaglia, P" uniqKey="Battaglia P">P Battaglia</name>
</author>
<author>
<name sortKey="Jacobs, R" uniqKey="Jacobs R">R Jacobs</name>
</author>
<author>
<name sortKey="Aslin, R" uniqKey="Aslin R">R Aslin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sundareswara, R" uniqKey="Sundareswara R">R Sundareswara</name>
</author>
<author>
<name sortKey="Schrater, P" uniqKey="Schrater P">P Schrater</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vul, E" uniqKey="Vul E">E Vul</name>
</author>
<author>
<name sortKey="Goodman, N" uniqKey="Goodman N">N Goodman</name>
</author>
<author>
<name sortKey="Griffths, T" uniqKey="Griffths T">T Griffths</name>
</author>
<author>
<name sortKey="Tenenbaum, J" uniqKey="Tenenbaum J">J Tenenbaum</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wozny, D" uniqKey="Wozny D">D Wozny</name>
</author>
<author>
<name sortKey="Beierholm, U" uniqKey="Beierholm U">U Beierholm</name>
</author>
<author>
<name sortKey="Shams, L" uniqKey="Shams L">L Shams</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Battaglia, P" uniqKey="Battaglia P">P Battaglia</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Clark, J" uniqKey="Clark J">J Clark</name>
</author>
<author>
<name sortKey="Yuille, A" uniqKey="Yuille A">A Yuille</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gelman, A" uniqKey="Gelman A">A Gelman</name>
</author>
<author>
<name sortKey="Carlin, J" uniqKey="Carlin J">J Carlin</name>
</author>
<author>
<name sortKey="Stern, H" uniqKey="Stern H">H Stern</name>
</author>
<author>
<name sortKey="Rubin, D" uniqKey="Rubin D">D Rubin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Spiegelhalter, D" uniqKey="Spiegelhalter D">D Spiegelhalter</name>
</author>
<author>
<name sortKey="Best, N" uniqKey="Best N">N Best</name>
</author>
<author>
<name sortKey="Carlin, B" uniqKey="Carlin B">B Carlin</name>
</author>
<author>
<name sortKey="Van Der Linde, A" uniqKey="Van Der Linde A">A Van der Linde</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mckee, S" uniqKey="Mckee S">S Mckee</name>
</author>
<author>
<name sortKey="Welch, L" uniqKey="Welch L">L Welch</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ono, H" uniqKey="Ono H">H Ono</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ernst, M" uniqKey="Ernst M">M Ernst</name>
</author>
<author>
<name sortKey="Banks, M" uniqKey="Banks M">M Banks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Beers, R" uniqKey="Van Beers R">R van Beers</name>
</author>
<author>
<name sortKey="Haggard, P" uniqKey="Haggard P">P Haggard</name>
</author>
<author>
<name sortKey="Wolpert, D" uniqKey="Wolpert D">D Wolpert</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Maloney, L" uniqKey="Maloney L">L Maloney</name>
</author>
<author>
<name sortKey="Landy, M" uniqKey="Landy M">M Landy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gerrits, H" uniqKey="Gerrits H">H Gerrits</name>
</author>
<author>
<name sortKey="Vendrik, A" uniqKey="Vendrik A">A Vendrik</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Epstein, W" uniqKey="Epstein W">W Epstein</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Boring, E" uniqKey="Boring E">E Boring</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kilpatrick, F" uniqKey="Kilpatrick F">F Kilpatrick</name>
</author>
<author>
<name sortKey="Ittelson, W" uniqKey="Ittelson W">W Ittelson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Epstein, W" uniqKey="Epstein W">W Epstein</name>
</author>
<author>
<name sortKey="Park, J" uniqKey="Park J">J Park</name>
</author>
<author>
<name sortKey="Casey, A" uniqKey="Casey A">A Casey</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gogel, W" uniqKey="Gogel W">W Gogel</name>
</author>
<author>
<name sortKey="Wist, E" uniqKey="Wist E">E Wist</name>
</author>
<author>
<name sortKey="Harker, G" uniqKey="Harker G">G Harker</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ono, H" uniqKey="Ono H">H Ono</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Weintraub, D" uniqKey="Weintraub D">D Weintraub</name>
</author>
<author>
<name sortKey="Gardner, G" uniqKey="Gardner G">G Gardner</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brenner, E" uniqKey="Brenner E">E Brenner</name>
</author>
<author>
<name sortKey="Van Damme, W" uniqKey="Van Damme W">W van Damme</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Battaglia, P" uniqKey="Battaglia P">P Battaglia</name>
</author>
<author>
<name sortKey="Di Luca, M" uniqKey="Di Luca M">M Di Luca</name>
</author>
<author>
<name sortKey="Ernst, M" uniqKey="Ernst M">M Ernst</name>
</author>
<author>
<name sortKey="Schrater, P" uniqKey="Schrater P">P Schrater</name>
</author>
<author>
<name sortKey="Machulla, T" uniqKey="Machulla T">T Machulla</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Roach, N" uniqKey="Roach N">N Roach</name>
</author>
<author>
<name sortKey="Heron, J" uniqKey="Heron J">J Heron</name>
</author>
<author>
<name sortKey="Mcgraw, P" uniqKey="Mcgraw P">P McGraw</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ernst, M" uniqKey="Ernst M">M Ernst</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Koerding, K" uniqKey="Koerding K">K Koerding</name>
</author>
<author>
<name sortKey="Beierholm, U" uniqKey="Beierholm U">U Beierholm</name>
</author>
<author>
<name sortKey="Ma, W" uniqKey="Ma W">W Ma</name>
</author>
<author>
<name sortKey="Quartz, S" uniqKey="Quartz S">S Quartz</name>
</author>
<author>
<name sortKey="Tenenbaum, J" uniqKey="Tenenbaum J">J Tenenbaum</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sato, Y" uniqKey="Sato Y">Y Sato</name>
</author>
<author>
<name sortKey="Toyoizumi, T" uniqKey="Toyoizumi T">T Toyoizumi</name>
</author>
<author>
<name sortKey="Aihara, K" uniqKey="Aihara K">K Aihara</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schrater, P" uniqKey="Schrater P">P Schrater</name>
</author>
<author>
<name sortKey="Sundareswara, R" uniqKey="Sundareswara R">R Sundareswara</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fiser, J" uniqKey="Fiser J">J Fiser</name>
</author>
<author>
<name sortKey="Berkes, P" uniqKey="Berkes P">P Berkes</name>
</author>
<author>
<name sortKey="Orban, G" uniqKey="Orban G">G Orbán</name>
</author>
<author>
<name sortKey="Lengyel, M" uniqKey="Lengyel M">M Lengyel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sutton, R" uniqKey="Sutton R">R Sutton</name>
</author>
<author>
<name sortKey="Barto, A" uniqKey="Barto A">A Barto</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">PLoS Comput Biol</journal-id>
<journal-id journal-id-type="publisher-id">plos</journal-id>
<journal-id journal-id-type="pmc">ploscomp</journal-id>
<journal-title-group>
<journal-title>PLoS Computational Biology</journal-title>
</journal-title-group>
<issn pub-type="ppub">1553-734X</issn>
<issn pub-type="epub">1553-7358</issn>
<publisher>
<publisher-name>Public Library of Science</publisher-name>
<publisher-loc>San Francisco, USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">21738457</article-id>
<article-id pub-id-type="pmc">3127804</article-id>
<article-id pub-id-type="publisher-id">PCOMPBIOL-D-11-00013</article-id>
<article-id pub-id-type="doi">10.1371/journal.pcbi.1002080</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Research Article</subject>
</subj-group>
<subj-group subj-group-type="Discipline-v2">
<subject>Computer Science</subject>
<subj-group>
<subject>Computerized Simulations</subject>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v2">
<subject>Social and Behavioral Sciences</subject>
<subj-group>
<subject>Psychology</subject>
<subj-group>
<subject>Cognitive Psychology</subject>
<subj-group>
<subject>Reasoning</subject>
</subj-group>
</subj-group>
<subj-group>
<subject>Psychophysics</subject>
<subject>Sensory Perception</subject>
</subj-group>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>How Haptic Size Sensations Improve Distance Perception</article-title>
<alt-title alt-title-type="running-head">How Haptic Size Improves Distance Perception</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Battaglia</surname>
<given-names>Peter W.</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="corresp" rid="cor1">
<sup>*</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Kersten</surname>
<given-names>Daniel</given-names>
</name>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Schrater</surname>
<given-names>Paul R.</given-names>
</name>
<xref ref-type="aff" rid="aff3">
<sup>3</sup>
</xref>
</contrib>
</contrib-group>
<aff id="aff1">
<label>1</label>
<addr-line>BCS and CSAIL, MIT, Cambridge, Massachusetts, United States of America</addr-line>
</aff>
<aff id="aff2">
<label>2</label>
<addr-line>Psychology, University of Minnesota, Minneapolis, Minnesota, United States of America</addr-line>
</aff>
<aff id="aff3">
<label>3</label>
<addr-line>Psychology and Computer Science, University of Minnesota, Minneapolis, Minnesota, United States of America</addr-line>
</aff>
<contrib-group>
<contrib contrib-type="editor">
<name>
<surname>Körding</surname>
<given-names>Konrad P.</given-names>
</name>
<role>Editor</role>
<xref ref-type="aff" rid="edit1"></xref>
</contrib>
</contrib-group>
<aff id="edit1">Northwestern University, United States of America</aff>
<author-notes>
<corresp id="cor1">* E-mail:
<email>pbatt@mit.edu</email>
</corresp>
<fn fn-type="con">
<p>Conceived and designed the experiments: PWB DK PRS. Performed the experiments: PWB. Analyzed the data: PWB. Contributed reagents/materials/analysis tools: PWB DK PRS. Wrote the paper: PWB DK PRS. </p>
</fn>
</author-notes>
<pub-date pub-type="collection">
<month>6</month>
<year>2011</year>
</pub-date>
<pmc-comment> Fake ppub added to accommodate plos workflow change from 03/2008 and 03/2009 </pmc-comment>
<pub-date pub-type="ppub">
<month>6</month>
<year>2011</year>
</pub-date>
<pub-date pub-type="epub">
<day>30</day>
<month>6</month>
<year>2011</year>
</pub-date>
<volume>7</volume>
<issue>6</issue>
<elocation-id>e1002080</elocation-id>
<history>
<date date-type="received">
<day>25</day>
<month>12</month>
<year>2010</year>
</date>
<date date-type="accepted">
<day>20</day>
<month>4</month>
<year>2011</year>
</date>
</history>
<permissions>
<copyright-statement>Battaglia et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.</copyright-statement>
<copyright-year>2011</copyright-year>
</permissions>
<abstract>
<p>Determining distances to objects is one of the most ubiquitous perceptual tasks in everyday life. Nevertheless, it is challenging because the information from a single image confounds object size and distance. Though our brains frequently judge distances accurately, the underlying computations employed by the brain are not well understood. Our work illuminates these computations by formulating a family of probabilistic models that encompass a variety of distinct hypotheses about distance and size perception. We compare these models' predictions to a set of human distance judgments in an interception experiment and use Bayesian analysis tools to quantitatively select the best hypothesis on the basis of its explanatory power and robustness over experimental data. The central question is whether, and how, human distance perception incorporates size cues to improve accuracy. Our conclusions are: 1) humans incorporate haptic object size sensations for distance perception, 2) the incorporation of haptic sensations is suboptimal given their reliability, 3) humans use environmentally accurate size and distance priors, 4) distance judgments are produced by perceptual “posterior sampling”. In addition, we compared our model's estimated sensory and motor noise parameters with previously reported measurements in the perceptual literature and found good correspondence between them. Taken together, these results represent a major step forward in establishing the computational underpinnings of human distance perception and the role of size information.</p>
</abstract>
<abstract abstract-type="summary">
<title>Author Summary</title>
<p>Perceiving the distance to an object can be difficult because a monocular visual image is influenced by the object's distance
<italic>and</italic>
size, so the object's image size alone cannot uniquely determine the distance. However, because object distance is so important in everyday life, our brains have developed various strategies to overcome this difficulty and enable accurate perceptual distance estimates. A key strategy the brain employs is to use touched size sensations, as well as background information regarding the object's size, to rule out incorrect size/distance combinations; our work studies the brain's computations that underpin this strategy. We modified a sophisticated model that prescribes how humans
<italic>should</italic>
estimate object distance to encompass a broad set of hypotheses about how humans
<italic>do</italic>
estimate distance in actuality. We then used data from a distance perception experiment to select which modified model best accounts for human performance. Our analysis reveals how people use touch sensations and how they bias their distance judgments to conform with true object statistics in the environment. Our results provide a comprehensive account of human distance perception and the role of size information, which significantly improves cognitive scientists' understanding of this fundamental, important, and ubiquitous behavior.</p>
</abstract>
<counts>
<page-count count="13"></page-count>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="" id="s1">
<title>Introduction</title>
<p>The perception of distances by monocular vision is fundamentally ambiguous: an object that is small and near may create the same image as an object that is large and far (
<xref ref-type="fig" rid="pcbi-1002080-g001">Figure 1A</xref>
). More precisely, the monocular image size of the object (
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e001.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, visual angle) does not uniquely specify the physical distance (
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e002.jpg" mimetype="image"></inline-graphic>
</inline-formula>
), because
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e003.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and the object's physical size (
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e004.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, diameter) are confounded,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e005.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. Subjectively we are not usually aware of this visual ambiguity because we perceive object distances unambiguously across a variety of conditions. This work examines how humans perform distance disambiguation by studying whether and how haptic size information is applied to these judgments. Despite previous evidence that adults
<xref ref-type="bibr" rid="pcbi.1002080-Ittelson1">[1]</xref>
and infants
<xref ref-type="bibr" rid="pcbi.1002080-Yonas1">[2]</xref>
use object size information, like familiar size, to disambiguate (
<xref ref-type="fig" rid="pcbi-1002080-g001">Figure 1B</xref>
) the otherwise ambiguous visual information, debate exists
<xref ref-type="bibr" rid="pcbi.1002080-Mershon1">[3]</xref>
, as summarized in
<xref ref-type="bibr" rid="pcbi.1002080-Yonas1">[2]</xref>
. Recently, Battaglia et al.
<xref ref-type="bibr" rid="pcbi.1002080-Battaglia1">[4]</xref>
reported that the brain merges image and haptic sensations in a principled fashion to unambiguously infer distance. Incorporating haptic size information is particularly interesting because it requires sophisticated causal knowledge of the relationship between distance, size, and the multisensory sensations available to the brain to overcome size/distance ambiguity.</p>
<fig id="pcbi-1002080-g001" position="float">
<object-id pub-id-type="doi">10.1371/journal.pcbi.1002080.g001</object-id>
<label>Figure 1</label>
<caption>
<title>Task, model, and inference.</title>
<p>
<bold>A. No-haptic condition.</bold>
The schematic shows a scene that contains a ball at some distance, and an observer who monocularly views a projected image of the ball (eye on left of image plane). In the absence of size information, the object's distance is ambiguous; e.g. the ball may be small and near (
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e006.jpg" mimetype="image"></inline-graphic>
</inline-formula>
), medium-sized and mid-range (
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e007.jpg" mimetype="image"></inline-graphic>
</inline-formula>
), large and far (
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e008.jpg" mimetype="image"></inline-graphic>
</inline-formula>
), or anywhere in between, but still project to the same
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e009.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. The lower-right inset is the no-haptic condition Bayes' net that shows the generative direction (black arrows), and information flow during inference (dotted arrows).
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e010.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e011.jpg" mimetype="image"></inline-graphic>
</inline-formula>
both influence
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e012.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(black arrows), the likelihood of
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e013.jpg" mimetype="image"></inline-graphic>
</inline-formula>
given
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e014.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is the plot labeled “
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e015.jpg" mimetype="image"></inline-graphic>
</inline-formula>
” on the left. Inferring the ball's distance means propagating prior information about
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e016.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(“
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e017.jpg" mimetype="image"></inline-graphic>
</inline-formula>
” plot on top-right) and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e018.jpg" mimetype="image"></inline-graphic>
</inline-formula>
to form a posterior over
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e019.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(labeled “
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e020.jpg" mimetype="image"></inline-graphic>
</inline-formula>
”). Notice that regardless of the true
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e021.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(i.e.
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e022.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e023.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e024.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, black vertical lines in posterior plot), the posterior over
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e025.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is the same, and is often positioned quite far from the true
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e026.jpg" mimetype="image"></inline-graphic>
</inline-formula>
.
<bold>B. Haptic condition.</bold>
The observer monocularly views an image of the scene and touches the ball beforehand to receive haptic size information,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e027.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. Though the image only constrains possible
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e028.jpg" mimetype="image"></inline-graphic>
</inline-formula>
values to those consistent with
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e029.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, because
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e030.jpg" mimetype="image"></inline-graphic>
</inline-formula>
varies with
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e031.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, it constrains
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e032.jpg" mimetype="image"></inline-graphic>
</inline-formula>
more tightly and can disambiguate
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e033.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. The lower-right inset is the haptic condition Bayes' net that shows the generative direction (black arrows), and the information flow (dotted arrows). The
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e034.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e035.jpg" mimetype="image"></inline-graphic>
</inline-formula>
both influence
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e036.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(black arrows); as before, the likelihood of
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e037.jpg" mimetype="image"></inline-graphic>
</inline-formula>
given
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e038.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is the plot labeled “
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e039.jpg" mimetype="image"></inline-graphic>
</inline-formula>
” on the left, but now the marginal posterior of
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e040.jpg" mimetype="image"></inline-graphic>
</inline-formula>
given
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e041.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(plot labeled “
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e042.jpg" mimetype="image"></inline-graphic>
</inline-formula>
”) captures information about
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e043.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. Inferring the ball's distance means propagating
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e044.jpg" mimetype="image"></inline-graphic>
</inline-formula>
information, prior information about
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e045.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e046.jpg" mimetype="image"></inline-graphic>
</inline-formula>
to form a posterior over
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e047.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(labeled “
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e048.jpg" mimetype="image"></inline-graphic>
</inline-formula>
”). Notice that now different
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e049.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(i.e.
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e050.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e051.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e052.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, black vertical lines in
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e053.jpg" mimetype="image"></inline-graphic>
</inline-formula>
) induce different posterior distributions (different curves in
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e054.jpg" mimetype="image"></inline-graphic>
</inline-formula>
), and each is positioned much nearer to the respective true
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e055.jpg" mimetype="image"></inline-graphic>
</inline-formula>
.</p>
</caption>
<graphic xlink:href="pcbi.1002080.g001"></graphic>
</fig>
<p>Bayesian models provide exactly the machinery needed to capture the size-distance perceptual ambiguity, to represent the knowledge required to interpret noisy sensations, and to specify how noisy sensations should be merged with prior knowledge to draw statistically sound perceptual estimates of object distances. This work uses Bayesian models to explicate, test, and confirm or reject a variety of hypotheses about the role of size information in human distance perception. Our results provide a significantly more comprehensive, quantitative account of the underlying computational processes responsible for incorporating size information into distance perception than any previous report.</p>
<p>We formulated a family of Bayesian perception/action models, whose structures and parameters encoded different assumptions about observers' internal knowledge and computations. We analyzed Battaglia et al.'s
<xref ref-type="bibr" rid="pcbi.1002080-Battaglia1">[4]</xref>
data within this context, and used statistical model-selection methods to infer the most probable model and associated parameters for explaining their data.</p>
<p>By committing to a full probabilistic model of observers' sensation, perception, and decision-making processes, we leveraged Battaglia et al.'s
<xref ref-type="bibr" rid="pcbi.1002080-Battaglia1">[4]</xref>
data to uncover properties of: 1) the image and haptic sensory noise, 2) the observer's prior knowledge about size and distance, their causal relationship with the sensations, and how they are applied during perceptual processing, and 3) the decision-making strategy by which observers' perceptual inferences yielded psychophysical measurements. Important elements obscured from Battaglia et al.'s
<xref ref-type="bibr" rid="pcbi.1002080-Battaglia1">[4]</xref>
original analyses were revealed: the present findings answer four key questions about how size influences human distance perception (described in
<xref ref-type="sec" rid="s2">Model</xref>
section). Using a full observer model allows us to transcend simplistic debates about whether humans are “optimal vs. sub-optimal” by providing a more textured account of perceptual phenomena: one that quantifies the quality of the sensory evidence, identifies the internal knowledge involved, and describes how the two are merged, exploited, and turned into decisions. This allows vague questions like “Is perception Bayesian?” to be reformulated into more precise ones like “To what degree does the brain encode uncertainty and apply structured knowledge to perceptual inference?”</p>
<p>Our family of candidate observer models treats the world, observer, and observer's responses as one coherent, interrelated physical system, which is represented in the models' structures and parameters using formal probabilistic notation. The fundamental assumptions are that world properties (
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e056.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e057.jpg" mimetype="image"></inline-graphic>
</inline-formula>
) generate pieces of sensory evidence, or cues, (
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e058.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, and the haptic size information
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e059.jpg" mimetype="image"></inline-graphic>
</inline-formula>
), and the observer's perceptual process uses probabilistic (i.e. sensitive to various sources of noise and uncertainty) inference to compute the posterior distribution over the distance given sensory cues,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e060.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e061.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(
<xref ref-type="fig" rid="pcbi-1002080-g001">Figure 1</xref>
). The literature
<xref ref-type="bibr" rid="pcbi.1002080-Knill1">[5]</xref>
<xref ref-type="bibr" rid="pcbi.1002080-Koerding1">[8]</xref>
reports many similarities between behavior prescribed by optimal Bayesian inference models, and humans' use of sensory cues, prior knowledge, and decision-making for perceptual inference. The perceptual task used by
<xref ref-type="bibr" rid="pcbi.1002080-Battaglia1">[4]</xref>
is well suited to Bayesian modeling because uncertainty plays a central role in it and, especially, because it involves the use of auxiliary information (in this case,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e062.jpg" mimetype="image"></inline-graphic>
</inline-formula>
) for disambiguating hidden causes (i.e.
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e063.jpg" mimetype="image"></inline-graphic>
</inline-formula>
). In fact, disambiguation of hidden causes using indirectly-related data is a key, beneficial feature of Bayesian inference, termed “explaining-away”
<xref ref-type="bibr" rid="pcbi.1002080-Pearl1">[9]</xref>
; we hypothesize that human distance perception in the presence of auxiliary size cues is consistent with probabilistic explaining-away.</p>
<p>Battaglia et al.'s
<xref ref-type="bibr" rid="pcbi.1002080-Battaglia1">[4]</xref>
experimental task asked participants to intercept a moving ball, and treated their interception distances as perceptual distance judgments. Specifically, participants intercepted the ball as it moved at some distance, after a brief exposure to the ball that in some cases offered the ability to touch the ball and feel its physical size and in other cases did not provide explicit size sensations. Our candidate observer models also make distance judgments using the sensory input available to human participants, so a direct comparison between human and model behaviors is possible.</p>
<p>We derived all our candidate models from a base, ideal observer model (IO) that contains internal knowledge about the distributions of sensory noise that corrupt the sensations
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e064.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e065.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, has knowledge about the prior distributions over
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e066.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e067.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, the relationship between
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e068.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e069.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e070.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, and the relationship between
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e071.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e072.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(
<xref ref-type="fig" rid="pcbi-1002080-g001">Figure 1</xref>
, lower-right insets, black arrows). In Bayesian parlance these pieces of knowledge fall under the rubric of
<italic>generative knowledge</italic>
, or background information about the data's generative process that can aid in inferring the underlying causes. The IO estimates
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e073.jpg" mimetype="image"></inline-graphic>
</inline-formula>
by computing
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e074.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and selecting the
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e075.jpg" mimetype="image"></inline-graphic>
</inline-formula>
that maximizes it (“maximum
<italic>a posteriori</italic>
”, MAP, decision rule). This computation requires merging image-size and haptic cues, as well as prior distance and size knowledge, in a manner Bayes' rule prescribes to yield optimal information about
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e076.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(
<xref ref-type="fig" rid="pcbi-1002080-g001">Figure 1</xref>
's caption illustrates the inference process). We formulated this IO, as well as the other candidate observer models, by enumerating all combinations of the following hypothetical questions: 1) Does the observer use the haptic size cue? 2) Does the observer know the haptic cue's reliability, and integrate the cue appropriately? 3) Does the observer know the image-size cue's reliability, and integrate the cue appropriately? 4) Does the observer perform MAP estimation, or rather estimate the distance by averaging a limited number of samples drawn from the posterior? The models were designed to allow standard model-selection methods to decide which candidate model, and associated parameters, were best supported by the experimental human data. Thus we were able to select the most accurate hypothesis, among the candidates we pre-specified, as the best explanation for how human distance processing uses size information. Moreover, we compared the resultant parameter estimates with measurements reported by other studies, and found they conform to previous findings regarding perception's computational dynamics, which provides independent verification of our conclusions' validity.</p>
<p>Our results indicate humans incorporate haptic size information for distance perception, consistent with Bayesian explaining-away. We also found that all but one participant underestimated the haptic cue's reliability (specifically, they overestimated its sensory noise variance) and integrated the haptic information to a lesser degree than the IO prescribed, similar to the human underuse of auditory information for spatial localization reported by
<xref ref-type="bibr" rid="pcbi.1002080-Battaglia2">[10]</xref>
. We found that participants' priors over size and distance were comparable to the experiment's actual random size and distance parameter distributions, implying participants applied knowledge of probable stimulus parameters in their perceptual processing (possibly learned or assumed during the experiment). Last, the sample-averaging estimation model, as opposed to the MAP estimator, best accounted for participants' distance judgments, a finding consistent with a growing body of results from perceptual studies suggesting that perceptual judgments result from posterior sampling processes
<xref ref-type="bibr" rid="pcbi.1002080-Sundareswara1">[11]</xref>
<xref ref-type="bibr" rid="pcbi.1002080-Wozny1">[13]</xref>
.</p>
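The two decision rules contrasted here are easy to state concretely. The sketch below is illustrative only (not the authors' code) and assumes a Gaussian posterior over log-distance: the MAP rule returns the posterior mode on every trial, while the sample-averaging rule returns the mean of k posterior samples, adding trial-to-trial variability of roughly sigma^2/k.

```python
import numpy as np

rng = np.random.default_rng(0)

def map_estimate(mu_post, sigma_post):
    # For a Gaussian posterior the MAP estimate is simply the mean.
    return mu_post

def sample_average_estimate(mu_post, sigma_post, k, rng):
    # Mean of k posterior samples: unbiased, but carries extra variance
    # sigma_post**2 / k, so repeated judgments scatter across trials.
    return rng.normal(mu_post, sigma_post, size=k).mean()

mu, sigma = 0.5, 0.2  # hypothetical posterior over log-distance
maps = [map_estimate(mu, sigma) for _ in range(1000)]
samps = [sample_average_estimate(mu, sigma, k=3, rng=rng) for _ in range(1000)]
print(np.var(maps), np.var(samps))  # 0.0 vs roughly sigma**2 / 3
```

This extra, posterior-width-dependent response variability is what lets model selection distinguish the sample-averaging rule from a MAP rule in behavioral data.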
</sec>
<sec sec-type="" id="s2">
<title>Model</title>
<p>The observer models have three components: 1) the
<italic>sensation model</italic>
describes how the distal stimulus determines the proximal stimulus, 2) the
<italic>perception model</italic>
describes how the distal stimulus is inferred from the proximal stimulus, 3) the
<italic>decision-making model</italic>
describes how the inferred distal representation guides action.</p>
<sec id="s2a">
<title>Sensation model</title>
<p>The scene properties relevant for object distance perception are the object's physical distance and physical size; the relevant sensory cues they generate are visual angle and felt (“haptic”) size. As noted in the
<xref ref-type="sec" rid="s1">Introduction</xref>
, visual angle is proportional to the ratio of size to distance; taking the
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e077.jpg" mimetype="image"></inline-graphic>
</inline-formula>
of each of these variables transforms this relationship into a linear sum (below). Our sensation model uses this
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e078.jpg" mimetype="image"></inline-graphic>
</inline-formula>
-transformed representation for two reasons: 1) Weber-Fechner phenomena support a noise model in which the standard deviation linearly scales with signal magnitude (which can be accomplished with independent noise in log-coordinates), and 2) this
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e079.jpg" mimetype="image"></inline-graphic>
</inline-formula>
-linear approximation is analytically tractable, as we will show. We therefore assume a linear Gaussian model: in the log domain, the scene properties are
<italic>a priori</italic>
Gaussian distributed, the sensory and motor noises are additive and zero-mean Gaussian, and the sensory generative process is linear.</p>
<p>Log-distance, log-size, log-visual angle, and log-haptic size are represented as:
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e080.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e081.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e082.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e083.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, respectively. The relationship between
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e084.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e085.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e086.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(by “small angle approximation” to
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e087.jpg" mimetype="image"></inline-graphic>
</inline-formula>
) is:
<disp-formula>
<graphic xlink:href="pcbi.1002080.e088"></graphic>
</disp-formula>
and between
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e089.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e090.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is:
<disp-formula>
<graphic xlink:href="pcbi.1002080.e091"></graphic>
</disp-formula>
where
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e092.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e093.jpg" mimetype="image"></inline-graphic>
</inline-formula>
represent image-size and haptic sensory noise with standard deviations (SDs)
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e094.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e095.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, respectively. The
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e096.jpg" mimetype="image"></inline-graphic>
</inline-formula>
notation indicates that the parameter represents a
<italic>property</italic>
of the scene; this is distinct from the observer's
<italic>knowledge</italic>
about the scene, defined in the next section with no tilde.</p>
<p>It follows that the distributions of the sensory cues conditioned on the scene properties are:
<disp-formula>
<graphic xlink:href="pcbi.1002080.e097"></graphic>
<label>(1)</label>
</disp-formula>
<disp-formula>
<graphic xlink:href="pcbi.1002080.e098"></graphic>
<label>(2)</label>
</disp-formula>
 We assume observers' internal prior probabilities over
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e099.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e100.jpg" mimetype="image"></inline-graphic>
</inline-formula>
are:
<disp-formula>
<graphic xlink:href="pcbi.1002080.e101"></graphic>
</disp-formula>
</p>
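As a concrete illustration, the log-domain generative model above can be simulated in a few lines. This is a minimal sketch with made-up prior means and noise SDs (not the paper's fitted values); here `s`, `d`, `a`, and `h` play the roles of log-size, log-distance, log-visual-angle, and log-haptic-size.

```python
import numpy as np

rng = np.random.default_rng(1)

# Gaussian priors over log-size s and log-distance d (illustrative values).
mu_s, sigma_s = np.log(0.05), 0.3   # ball radius around 5 cm
mu_d, sigma_d = np.log(0.60), 0.3   # distance around 60 cm

s = rng.normal(mu_s, sigma_s)
d = rng.normal(mu_d, sigma_d)

# Small-angle approximation: visual angle ~ size / distance, so in logs
# a = s - d plus Gaussian image noise; haptic size h = s plus haptic noise.
sigma_a, sigma_h = 0.05, 0.15
a = (s - d) + rng.normal(0.0, sigma_a)
h = s + rng.normal(0.0, sigma_h)
print(a, h)
```

Because the noise is added to the log quantities, the corresponding noise in linear units scales multiplicatively with signal magnitude, matching the Weber-Fechner motivation above.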
</sec>
<sec id="s2b">
<title>Perceptual model</title>
<p>Battaglia
<xref ref-type="bibr" rid="pcbi.1002080-Battaglia3">[14]</xref>
derives model observers for perceptual inference in linear Gaussian contexts under a variety of assumptions – we co-opt the “explaining-away” derivations (Sec. 3.4 in
<xref ref-type="bibr" rid="pcbi.1002080-Battaglia3">[14]</xref>
) for the current size/distance perception context. All model observers are assumed to use their knowledge of the world, i.e. the sensory noise (
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e102.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e103.jpg" mimetype="image"></inline-graphic>
</inline-formula>
) and prior distributions (
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e104.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e105.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e106.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e107.jpg" mimetype="image"></inline-graphic>
</inline-formula>
), to compute beliefs about
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e108.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. These beliefs are represented as the posterior distribution,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e109.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(which is Gaussian):
<disp-formula>
<graphic xlink:href="pcbi.1002080.e110"></graphic>
<label>(3)</label>
</disp-formula>
where,
<disp-formula>
<graphic xlink:href="pcbi.1002080.e111"></graphic>
<label>(4)</label>
</disp-formula>
</p>
<p>For those familiar with “standard” cue combination, Eqs. 3 and 4 are similar to the “optimal cue combination” formulae in
<xref ref-type="bibr" rid="pcbi.1002080-Clark1">[15]</xref>
, and in fact by looking closely at the Bayes' net in the lower right of
<xref ref-type="fig" rid="pcbi-1002080-g001">Fig. 1B</xref>
, one can see that the subgraph composed of variables
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e112.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e113.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e114.jpg" mimetype="image"></inline-graphic>
</inline-formula>
represents the standard two-cue “cue combination” situation. However, our present situation is distinct from
<xref ref-type="bibr" rid="pcbi.1002080-Clark1">[15]</xref>
because we focus on data fusion in conditions where one cue (
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e115.jpg" mimetype="image"></inline-graphic>
</inline-formula>
) is only indirectly related to the desired property (
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e116.jpg" mimetype="image"></inline-graphic>
</inline-formula>
) by its ability to disambiguate another cue (
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e117.jpg" mimetype="image"></inline-graphic>
</inline-formula>
). The intuition for the weights in Eq. 4 is as follows. Because
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e118.jpg" mimetype="image"></inline-graphic>
</inline-formula>
provides information about
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e119.jpg" mimetype="image"></inline-graphic>
</inline-formula>
to improve inference of
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e120.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, the numerator of
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e121.jpg" mimetype="image"></inline-graphic>
</inline-formula>
assigns sensory cue
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e122.jpg" mimetype="image"></inline-graphic>
</inline-formula>
more influence when prior knowledge of
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e123.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e124.jpg" mimetype="image"></inline-graphic>
</inline-formula>
are weaker (higher
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e125.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e126.jpg" mimetype="image"></inline-graphic>
</inline-formula>
). Similarly,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e127.jpg" mimetype="image"></inline-graphic>
</inline-formula>
's numerator dictates that
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e128.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is more influential when information about
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e129.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is weaker (higher
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e130.jpg" mimetype="image"></inline-graphic>
</inline-formula>
). Interpreting
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e131.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is less straightforward, but essentially holds that when information about
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e132.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is poor, because both the prior over
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e133.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and sensory cue
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e134.jpg" mimetype="image"></inline-graphic>
</inline-formula>
are weak (higher
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e135.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e136.jpg" mimetype="image"></inline-graphic>
</inline-formula>
), then
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e137.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is more exclusively influential for inferring
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e138.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, whereas if either prior knowledge about
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e139.jpg" mimetype="image"></inline-graphic>
</inline-formula>
or sensory cue
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e140.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is strong, then
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e141.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e142.jpg" mimetype="image"></inline-graphic>
</inline-formula>
information jointly guide inference of
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e143.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. Last,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e144.jpg" mimetype="image"></inline-graphic>
</inline-formula>
's numerator assigns stronger influence to prior knowledge of
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e145.jpg" mimetype="image"></inline-graphic>
</inline-formula>
only when the sensory cues and prior knowledge of
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e146.jpg" mimetype="image"></inline-graphic>
</inline-formula>
are weak.</p>
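The explaining-away intuition above can be checked numerically. Under the linear Gaussian assumptions, log-distance, log-visual-angle, and log-haptic-size are jointly Gaussian, so the marginal posterior over distance follows from standard Gaussian conditioning. The sketch below uses illustrative prior and noise parameters (hypothetical, not the paper's estimates) and shows that a larger felt size, with the visual angle held fixed, pulls the inferred distance farther away.

```python
import numpy as np

# Illustrative parameters: priors over log-size s and log-distance d,
# and sensory noise SDs for the visual-angle cue a and haptic cue h.
mu_s, sigma_s = np.log(0.05), 0.3
mu_d, sigma_d = np.log(0.60), 0.3
sigma_a, sigma_h = 0.05, 0.15

def posterior_d(a, h):
    # Joint Gaussian over (d, a, h) implied by a = s - d + noise_a,
    # h = s + noise_h, with independent Gaussian priors on s and d.
    mean = np.array([mu_d, mu_s - mu_d, mu_s])
    cov = np.array([
        [sigma_d**2, -sigma_d**2, 0.0],
        [-sigma_d**2, sigma_s**2 + sigma_d**2 + sigma_a**2, sigma_s**2],
        [0.0, sigma_s**2, sigma_s**2 + sigma_h**2],
    ])
    # Condition d on the observed (a, h).
    k = cov[0, 1:] @ np.linalg.inv(cov[1:, 1:])
    mu_post = mean[0] + k @ (np.array([a, h]) - mean[1:])
    var_post = cov[0, 0] - k @ cov[1:, 0]
    return mu_post, var_post

# Same visual angle, different felt sizes: a larger h should "explain away"
# the image and shift the inferred distance outward.
m_small, v_small = posterior_d(a=np.log(0.05 / 0.60), h=np.log(0.03))
m_large, _ = posterior_d(a=np.log(0.05 / 0.60), h=np.log(0.08))
print(m_small, m_large)  # larger h gives a larger inferred log-distance
```

Note that d and h are uncorrelated a priori (the (d, h) covariance entry is zero): the haptic cue informs distance only through its interaction with the visual-angle cue, which is exactly the explaining-away structure described above.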
<p>Human observers who
<italic>do</italic>
use
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e147.jpg" mimetype="image"></inline-graphic>
</inline-formula>
for distance perception are modeled above by Eq. 4. The hypothesis that observers
<italic>do not</italic>
use
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e148.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, either because
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e149.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is unavailable or because they cannot use it, is formulated as:
<disp-formula>
<graphic xlink:href="pcbi.1002080.e150"></graphic>
<label>(5)</label>
</disp-formula>
where,
<disp-formula>
<graphic xlink:href="pcbi.1002080.e151"></graphic>
<label>(6)</label>
</disp-formula>
</p>
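The claimed equivalence can be checked numerically: the observer who ignores the haptic cue behaves like the full observer with the haptic noise SD made arbitrarily large. The sketch below uses illustrative parameters (hypothetical, not the paper's estimates).

```python
import numpy as np

mu_s, sigma_s = np.log(0.05), 0.3
mu_d, sigma_d = np.log(0.60), 0.3
sigma_a = 0.05

def post_d(a, h, sigma_h):
    # Posterior mean of log-distance d given both cues, by Gaussian
    # conditioning on (a, h) in the joint model a = s - d + n_a, h = s + n_h.
    mean = np.array([mu_d, mu_s - mu_d, mu_s])
    cov = np.array([
        [sigma_d**2, -sigma_d**2, 0.0],
        [-sigma_d**2, sigma_s**2 + sigma_d**2 + sigma_a**2, sigma_s**2],
        [0.0, sigma_s**2, sigma_s**2 + sigma_h**2],
    ])
    k = cov[0, 1:] @ np.linalg.inv(cov[1:, 1:])
    return mean[0] + k @ (np.array([a, h]) - mean[1:])

def post_d_vision_only(a):
    # The observer that ignores h: condition d on a alone.
    var_a = sigma_s**2 + sigma_d**2 + sigma_a**2
    return mu_d + (-sigma_d**2 / var_a) * (a - (mu_s - mu_d))

a, h = np.log(0.05 / 0.30), np.log(0.08)
print(post_d(a, h, sigma_h=1e6), post_d_vision_only(a))  # nearly identical
```

With a moderate haptic noise SD the two estimates differ appreciably; as the assumed haptic noise grows, the haptic cue's weight vanishes and the full model collapses onto the vision-only one.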
<p>Eq. 5 is algebraically equivalent to taking
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e152.jpg" mimetype="image"></inline-graphic>
</inline-formula>
in the formulation in Eq. 4. Whether humans do (Eq. 3) or do not (Eq. 5) use
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e153.jpg" mimetype="image"></inline-graphic>
</inline-formula>
to make distance judgments is the first of our hypothesis questions (see
<xref ref-type="table" rid="pcbi-1002080-t001">Table 1</xref>
). Also, whether humans know the true sensory noise magnitudes, i.e. whether they use
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e154.jpg" mimetype="image"></inline-graphic>
</inline-formula>
vs.
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e155.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, and/or
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e156.jpg" mimetype="image"></inline-graphic>
</inline-formula>
vs.
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e157.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, are the second and third of our hypothesis questions (
<xref ref-type="table" rid="pcbi-1002080-t001">Table 1</xref>
).</p>
<table-wrap id="pcbi-1002080-t001" position="float">
<object-id pub-id-type="doi">10.1371/journal.pcbi.1002080.t001</object-id>
<label>Table 1</label>
<caption>
<title>Candidate model list.</title>
</caption>
<alternatives>
<graphic id="pcbi-1002080-t001-1" xlink:href="pcbi.1002080.t001"></graphic>
<table frame="hsides" rules="groups">
<colgroup span="1">
<col align="left" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
</colgroup>
<thead>
<tr>
<td align="left" rowspan="1" colspan="1">
<xref ref-type="sec" rid="s2">Model</xref>
#</td>
<td align="left" rowspan="1" colspan="1">Q I</td>
<td align="left" rowspan="1" colspan="1">Q II</td>
<td align="left" rowspan="1" colspan="1">Q III</td>
<td align="left" rowspan="1" colspan="1">Q IV</td>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">1</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e158.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">n/a</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e159.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e160.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">2</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e161.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">n/a</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e162.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">MAP</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">3</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e163.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">n/a</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e164.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e165.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">4</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e166.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">n/a</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e167.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">MAP</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">5</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e168.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e169.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e170.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e171.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">6</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e172.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e173.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e174.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">MAP</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">7</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e175.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e176.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e177.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e178.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">8</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e179.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e180.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e181.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">MAP</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">9</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e182.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e183.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e184.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e185.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">10</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e186.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e187.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e188.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">MAP</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">11</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e189.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e190.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e191.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e192.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">12</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e193.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e194.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e195.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">MAP</td>
</tr>
</tbody>
</table>
</alternatives>
<table-wrap-foot>
<fn id="nt101">
<label></label>
<p>The candidate models encode possible answers to the four questions as follows. Q I: “
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e196.jpg" mimetype="image"></inline-graphic>
</inline-formula>
”, haptic information is never integrated vs. “
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e197.jpg" mimetype="image"></inline-graphic>
</inline-formula>
”, haptic information is integrated when available. Q II (which is only applicable to the
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e198.jpg" mimetype="image"></inline-graphic>
</inline-formula>
answer for Q I): “
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e199.jpg" mimetype="image"></inline-graphic>
</inline-formula>
”, internal knowledge of haptic reliability is incorrect vs. “
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e200.jpg" mimetype="image"></inline-graphic>
</inline-formula>
”, internal knowledge of haptic reliability is correct. Q III: “
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e201.jpg" mimetype="image"></inline-graphic>
</inline-formula>
”, internal knowledge of image-size reliability is incorrect vs. “
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e202.jpg" mimetype="image"></inline-graphic>
</inline-formula>
”, internal knowledge of image-size reliability is correct. Q IV: “
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e203.jpg" mimetype="image"></inline-graphic>
</inline-formula>
”, posterior samples are averaged to form distance judgments vs. “MAP”, MAP estimates are used to form distance judgments.</p>
</fn>
</table-wrap-foot>
</table-wrap>
</sec>
<sec id="s2c">
<title>Decision-making model</title>
<p>The model observer uses beliefs about
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e204.jpg" mimetype="image"></inline-graphic>
</inline-formula>
to select a position at which to intercept the moving ball. We assume that participants attempt to minimize the difference between their judged distance and the true distance, which for Gaussian distributions corresponds equivalently to minimizing MAP, mean-squared, or symmetric Heaviside loss functions. However, accessing their perceptually-inferred information about
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e205.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is not necessarily trivial: whether they select the maximum-probability
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e206.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, i.e.
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e207.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(or
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e208.jpg" mimetype="image"></inline-graphic>
</inline-formula>
), as their judgment of distance, or instead draw a number,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e209.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, of independent samples from
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e210.jpg" mimetype="image"></inline-graphic>
</inline-formula>
or
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e211.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and compute their sample mean as a
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e212.jpg" mimetype="image"></inline-graphic>
</inline-formula>
judgment, is the fourth (and last) of our hypothesis questions (
<xref ref-type="table" rid="pcbi-1002080-t001">Table 1</xref>
). These distinct models may imply different neural representations for posterior beliefs about distance, which we address in the
<xref ref-type="sec" rid="s4">Discussion</xref>
.</p>
<p>Additionally, our models all include an element of motor noise: the small error between the judged
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e213.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and the experimentally-measured
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e214.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, due to motor imprecision when performing an interception. For consistency with known parameters of motor control, we selected an additive, Gaussian motor noise term
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e215.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, which was added to the distance judgment to form
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e216.jpg" mimetype="image"></inline-graphic>
</inline-formula>
.</p>
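The two decision rules of Q IV, followed by the additive motor-noise stage, can be sketched numerically. The following is our own illustration, not the authors' implementation; the posterior mean/SD, motor-noise SD, and sample count used below are arbitrary assumed values (in mm):

```python
import numpy as np

rng = np.random.default_rng(0)

def map_response(mu_post, sigma_post, sigma_m):
    # For a Gaussian posterior the MAP estimate equals the posterior mean;
    # additive Gaussian motor noise then perturbs the executed judgment.
    return mu_post + rng.normal(0.0, sigma_m)

def sample_average_response(mu_post, sigma_post, sigma_m, k):
    # Draw k independent posterior samples, average them, then add motor
    # noise; the average contributes sigma_post**2 / k of extra variance.
    samples = rng.normal(mu_post, sigma_post, size=k)
    return samples.mean() + rng.normal(0.0, sigma_m)
```

With, say, mu_post = 450 mm, sigma_post = 30 mm, and sigma_m = 5 mm, the MAP responder's judgments vary only with motor noise, while a k = 1 sampler's judgments inherit the full posterior spread.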
</sec>
<sec id="s2d">
<title>Full observer models</title>
<p>We combine the sensation, perception, and decision-making models described above to define a set of coherent model observers that input sensations, combine them with internal knowledge to form beliefs about distance, and form decisions that are output as interception responses in the experimental task.</p>
<p>By varying the models' structure and parameters we encoded the four hypothesis questions in the
<xref ref-type="sec" rid="s1">Introduction</xref>
(subsequently referred to as “Q I, II, III, IV”) to form the candidate observer models (
<xref ref-type="table" rid="pcbi-1002080-t001">Table 1</xref>
):</p>
<list list-type="roman-upper">
<list-item>
<p>I) Does the brain integrate haptic size information for distance perception (i.e. does it use
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e217.jpg" mimetype="image"></inline-graphic>
</inline-formula>
when possible, or
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e218.jpg" mimetype="image"></inline-graphic>
</inline-formula>
exclusively)?</p>
</list-item>
<list-item>
<p>II) Does the brain have accurate knowledge of the haptic cue's noise magnitude and incorporate
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e219.jpg" mimetype="image"></inline-graphic>
</inline-formula>
in proportion to its reliability (i.e. does it use
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e220.jpg" mimetype="image"></inline-graphic>
</inline-formula>
or
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e221.jpg" mimetype="image"></inline-graphic>
</inline-formula>
)?</p>
</list-item>
<list-item>
<p>III) Does the brain have accurate knowledge of the visual image-size cue's noise magnitude and incorporate
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e222.jpg" mimetype="image"></inline-graphic>
</inline-formula>
in proportion to its reliability (i.e. does it use
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e223.jpg" mimetype="image"></inline-graphic>
</inline-formula>
or
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e224.jpg" mimetype="image"></inline-graphic>
</inline-formula>
)?</p>
</list-item>
<list-item>
<p>IV) Does the brain select MAP distance estimates, or average
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e225.jpg" mimetype="image"></inline-graphic>
</inline-formula>
samples from
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e226.jpg" mimetype="image"></inline-graphic>
</inline-formula>
or
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e227.jpg" mimetype="image"></inline-graphic>
</inline-formula>
?</p>
</list-item>
</list>
<p>In total, 12 distinct candidate models spanned the possible combinations of the four questions. (The total is 12 rather than 16 because, for candidate models that do not integrate haptic information [Q I], the question of whether the observer knows the haptic cue's noise magnitude [Q II] is inconsequential, so those models are redundant.)</p>
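The counting argument can be checked mechanically. The sketch below is ours, with shorthand labels standing in for the answers to the four questions:

```python
from itertools import product

# Shorthand answer labels for the four hypothesis questions (following Table 1).
Q1 = ("no-haptic", "haptic")    # Q I : integrate haptic size information?
Q2 = ("wrong", "right")         # Q II: correct knowledge of haptic noise?
Q3 = ("wrong", "right")         # Q III: correct knowledge of image-size noise?
Q4 = ("sample-average", "MAP")  # Q IV: decision rule

models = set()
for q1, q2, q3, q4 in product(Q1, Q2, Q3, Q4):
    if q1 == "no-haptic":
        q2 = "n/a"  # Q II is inconsequential when haptic info is not used
    models.add((q1, q2, q3, q4))

print(len(models))  # 12 distinct candidate models, not 16
```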
</sec>
<sec id="s2e">
<title>Human data methods</title>
<sec id="s2e1">
<title>Ethics statement</title>
<p>All participants gave informed consent in accordance with the University of Minnesota's IRB standards.</p>
</sec>
<sec id="s2e2">
<title>Stimuli and task</title>
<p>Participants sat in a virtual reality workbench capable of presenting monocular visual (
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e228.jpg" mimetype="image"></inline-graphic>
</inline-formula>
) and haptic (
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e229.jpg" mimetype="image"></inline-graphic>
</inline-formula>
) stimuli (
<xref ref-type="fig" rid="pcbi-1002080-g002">Figure 2</xref>
shows a stimulus screenshot). They held a small stylus probe connected to a robot arm that presented forces and recorded their hand movements; the hand/stylus position was graphically depicted in the visual scene as a small 3 mm
<italic>stylus sphere</italic>
(see
<xref ref-type="bibr" rid="pcbi.1002080-Battaglia1">[4]</xref>
for full experimental details). They performed 1280 trials across 4 days, where each day was composed of 4 blocks of 80 trials; the first day of trials was treated as training, and excluded from further analysis, resulting in 960 total experimental trials. There were two types of trials,
<italic>no-haptic</italic>
and
<italic>haptic</italic>
, randomly interleaved in equal proportions (480+480 = 960), which determined the type of exploration phase (described next).</p>
<fig id="pcbi-1002080-g002" position="float">
<object-id pub-id-type="doi">10.1371/journal.pcbi.1002080.g002</object-id>
<label>Figure 2</label>
<caption>
<title>Experimental stimulus screenshot.</title>
<p>The overlaid lines were not visible to the experimental participant, but depict various task elements; they are not drawn from the participant's viewpoint, but rather from a viewpoint elevated above the observer's head so they can be distinguished from each other (from the participant's viewpoint they intersected the constraint line). They are: the constraint line (yellow dotted line), the ball's true movement path (green solid arrow), and ambiguous movement paths in the no-haptic condition (blue dotted arrows). The point at which the green arrow intersects the constraint line is the crossing distance. The points at which the blue arrows intersect the constraint line represent distance misjudgments. The participant's hand position was indicated by a 3 mm diameter blue sphere.</p>
</caption>
<graphic xlink:href="pcbi.1002080.g002"></graphic>
</fig>
<p>Each trial was divided into two phases:
<italic>exploration</italic>
and
<italic>interception</italic>
. During the exploration phase, a ball with a random diameter between 14 and 42 mm appeared at a random position in the virtual scene between 300 and 640 mm distance, at an angle between −8.5 and 8.5 degrees visual angle on the horizontal plane that intersected the eyes, and remained still. On no-haptic trials, participants viewed the ball but were not able to touch it; on haptic trials, they touched the ball with the stylus and received haptic force feedback consistent with the ball's physical diameter
<xref ref-type="bibr" rid="pcbi.1002080-Battaglia1">[4]</xref>
. Once they were satisfied with the exploration they depressed a mouse button to complete the exploration phase and begin the interception phase; the exploration phase was forced to last a minimum of 1 second, and additionally in haptic trials participants had to touch the ball to end the exploration phase.</p>
<p>Both trial types' interception phases proceeded identically. First, the robot arm moved the hand to a position on the right side of the scene and began to impose a continuous constraining force that limited the hand's position to a fixed ray that began at the eye and extended toward the ball's (future) movement path. Simultaneously, the ball was repositioned at a random position on the left of the scene, at a distance between 1000 and 1500 mm and an angle between −17 and −5 degrees on the horizontal plane. This rendered any distance information gained during the exploration phase irrelevant to subsequent distance judgments, but the ball's size was kept the same. Once ready, the participant again depressed the mouse button and the ball began to move toward the constraining line with a random speed between 250 and 375 mm/s. The ball's trajectory crossed the constraining line at a random point between 300 and 640 mm from the participant's eye (termed the
<italic>crossing distance</italic>
) and continued out of the scene; the total travel time was between 1.3 and 4.8 seconds. Participants were instructed to place the stylus' tip at the crossing distance, and we recorded this position at the time the ball crossed the constraining line as the
<italic>judged distance</italic>
, which was used in the subsequent data analysis as an indication of the participant's perceived distance. Participants received haptic feedback regarding their accuracy: if the judged distance was within 32 mm of the crossing distance, the stylus received an impulse consistent with a collision and the visual stylus sphere pulsed green momentarily; otherwise no collision was felt and the stylus sphere pulsed red. At this point the trial ended and a new trial began immediately.</p>
</sec>
<sec id="s2e3">
<title>Participants</title>
<p>Six university students, ages 21 to 30, participated in the study. All had normal or corrected-to-normal vision and normal motor abilities. Five participants were naive to the purpose of the study and one was an author; the author's data were statistically indistinguishable from the others'.</p>
</sec>
</sec>
<sec id="s2f">
<title>Analysis</title>
<p>First, we describe how the model observers predict responses in the experimental interception task and illustrate the responses produced by each model. Second, we describe how each model's parameters were inferred from each participant's response data. Third, we show how we computed the human data likelihood under each model and how we quantitatively compared the models to determine which provides the best account of the human data.</p>
<sec id="s2f1">
<title>Generating model predictions</title>
<p>We can simulate the sensory model by fixing
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e230.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e231.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and sampling predicted
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e232.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e233.jpg" mimetype="image"></inline-graphic>
</inline-formula>
values. Beliefs about
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e234.jpg" mimetype="image"></inline-graphic>
</inline-formula>
are represented by Gaussian posterior distributions in our model (Eqs. 4 and 5), whose means and variances depend on the assumptions encoded by Q I-III.
<xref ref-type="fig" rid="pcbi-1002080-g003">Figure 3</xref>
shows posteriors over
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e235.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(contours) and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e236.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(density functions on bottom) given
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e237.jpg" mimetype="image"></inline-graphic>
</inline-formula>
when haptic information is not integrated (red) and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e238.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, when haptic information is integrated (blue). When prior information is weak (top row), the sensory cues dominate and the posterior variance is high; when no haptic information is incorporated, the posterior is ambiguous: many
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e239.jpg" mimetype="image"></inline-graphic>
</inline-formula>
values are consistent with
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e240.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. When prior information is strong (bottom row), prior bias is introduced (i.e., the black and blue posterior mean lines separate), but on average
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e241.jpg" mimetype="image"></inline-graphic>
</inline-formula>
judgments are more accurate because sensory noise is mitigated by prior knowledge, and the posterior distribution's variance shrinks. Weak priors are approximated by Gaussians with very high variance.</p>
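The qualitative behavior described here follows from conjugate Gaussian updating, in which precisions (inverse variances) add. A minimal sketch of our own, with arbitrary numbers standing in for the cue and prior parameters of Eqs. 4 and 5:

```python
from math import sqrt

def gaussian_posterior(cue_means, cue_sds, prior_mean, prior_sd):
    # Conjugate Gaussian combination: precisions (1/variance) add, and the
    # posterior mean is the precision-weighted average of cues and prior.
    precisions = [1.0 / sd**2 for sd in cue_sds] + [1.0 / prior_sd**2]
    means = list(cue_means) + [prior_mean]
    post_var = 1.0 / sum(precisions)
    post_mean = post_var * sum(p * m for p, m in zip(precisions, means))
    return post_mean, sqrt(post_var)

# A weak (high-variance) prior leaves the cue dominant; a strong prior
# biases the mean toward the prior but shrinks the posterior SD.
weak = gaussian_posterior([6.0], [1.0], prior_mean=3.0, prior_sd=100.0)
strong = gaussian_posterior([6.0], [1.0], prior_mean=3.0, prior_sd=1.0)
```

Here the strong prior pulls the posterior mean from 6.0 toward 3.0 (introducing bias) while reducing the posterior SD, mirroring the top-vs-bottom rows of Figure 3.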
<fig id="pcbi-1002080-g003" position="float">
<object-id pub-id-type="doi">10.1371/journal.pcbi.1002080.g003</object-id>
<label>Figure 3</label>
<caption>
<title>Perceptual model.</title>
<p>Posterior distributions over
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e242.jpg" mimetype="image"></inline-graphic>
</inline-formula>
given sensory input
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e243.jpg" mimetype="image"></inline-graphic>
</inline-formula>
are depicted. The true values of
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e244.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e245.jpg" mimetype="image"></inline-graphic>
</inline-formula>
are 6.0 and 3.0, respectively. The red curves are when the haptic information is not incorporated; the blue curves are when it is incorporated. The curves on the bottom are the joint distributions marginalized over
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e246.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, to yield marginal posteriors over
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e247.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. The dotted vertical lines are the posterior means. The black dot marks the true
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e248.jpg" mimetype="image"></inline-graphic>
</inline-formula>
values; the purple dot marks the
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e249.jpg" mimetype="image"></inline-graphic>
</inline-formula>
prior mean. The top row is an observer who uses weak priors (high
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e250.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e251.jpg" mimetype="image"></inline-graphic>
</inline-formula>
) and the bottom row is an observer who uses accurate priors (lower
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e252.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e253.jpg" mimetype="image"></inline-graphic>
</inline-formula>
). The left column is an observer with accurate knowledge of the haptic noise (
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e254.jpg" mimetype="image"></inline-graphic>
</inline-formula>
) and the right column is an observer with inaccurate knowledge (overestimated) of haptic noise (
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e255.jpg" mimetype="image"></inline-graphic>
</inline-formula>
). Notice that by using haptic information, the mean of the posterior becomes more accurate and the variance decreases. When prior information is used, bias is introduced in that the means become less accurate; however, the posterior variance decreases. Also notice that when the internal knowledge of the haptic cue noise is inaccurate, the observer's posterior shifts toward that of the no-haptic-integration observer.</p>
</caption>
<graphic xlink:href="pcbi.1002080.g003"></graphic>
</fig>
<p>Each model observer's responses can be predicted given input
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e256.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e257.jpg" mimetype="image"></inline-graphic>
</inline-formula>
values. Q IV distinguishes between decision-making strategies.
<xref ref-type="fig" rid="pcbi-1002080-g004">Figure 4</xref>
depicts an empirical distribution of model observers' responses, under MAP estimation as well as sample-averaging estimation (with
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e258.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e259.jpg" mimetype="image"></inline-graphic>
</inline-formula>
).</p>
<fig id="pcbi-1002080-g004" position="float">
<object-id pub-id-type="doi">10.1371/journal.pcbi.1002080.g004</object-id>
<label>Figure 4</label>
<caption>
<title>Decision-making model.</title>
<p>The left column shows model observers' response distributions given input
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e260.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e261.jpg" mimetype="image"></inline-graphic>
</inline-formula>
values 6.0 and 3.0, respectively. The top row is when accurate haptic noise knowledge is used; the bottom row is when inaccurate haptic noise knowledge is used. The solid vertical lines are the true values of
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e262.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. The solid distribution is an observer that uses the MAP estimate to make a distance judgment, the dashed distribution is an observer that draws
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e263.jpg" mimetype="image"></inline-graphic>
</inline-formula>
posterior
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e264.jpg" mimetype="image"></inline-graphic>
</inline-formula>
samples and averages their values, the dotted distribution is an observer that draws
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e265.jpg" mimetype="image"></inline-graphic>
</inline-formula>
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e266.jpg" mimetype="image"></inline-graphic>
</inline-formula>
sample. The right column shows the corresponding SDs of the distributions in the left column. Notice that the sampling observers have less precise response distributions than the MAP-responder, and averaging over fewer samples yields less precise responses.</p>
</caption>
<graphic xlink:href="pcbi.1002080.g004"></graphic>
</fig>
</sec>
<sec id="s2f2">
<title>Posterior over model parameters</title>
<p>To test each model's account of the human data, our analysis inferred each model's parameters,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e267.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, given the participants' experimental responses,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e268.jpg" mimetype="image"></inline-graphic>
</inline-formula>
; the posterior distribution is
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e269.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. (Note,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e270.jpg" mimetype="image"></inline-graphic>
</inline-formula>
varies depending on the model; for instance, the MAP decision-maker does not have a
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e271.jpg" mimetype="image"></inline-graphic>
</inline-formula>
term, the observer that does not use haptic information does not have
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e272.jpg" mimetype="image"></inline-graphic>
</inline-formula>
or
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e273.jpg" mimetype="image"></inline-graphic>
</inline-formula>
terms, etc.) The likelihood functions were straightforward to compute: because we defined our model observers' entire sensation-perception-decision sequence as a probabilistic generative process, we could compute the participants' response likelihoods given the input stimuli,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e274.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e275.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, for each model.</p>
<p>We constructed the model observers' response likelihood functions by first considering the observers' inferences about
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e276.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, which are summarized by the model's posterior parameters,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e277.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e278.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. So,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e279.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e280.jpg" mimetype="image"></inline-graphic>
</inline-formula>
were treated as random variables with distributions in Eqs. 1 and 2, and the likelihood of posterior means given
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e281.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e282.jpg" mimetype="image"></inline-graphic>
</inline-formula>
when haptic size information is used is
<disp-formula>
<graphic xlink:href="pcbi.1002080.e283"></graphic>
</disp-formula>
with
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e284.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e285.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e286.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e287.jpg" mimetype="image"></inline-graphic>
</inline-formula>
from Eq. 4. The likelihood of posterior means when haptic information is not used is
<disp-formula>
<graphic xlink:href="pcbi.1002080.e288"></graphic>
</disp-formula>
where
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e289.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e290.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e291.jpg" mimetype="image"></inline-graphic>
</inline-formula>
are from Eq. 6.</p>
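As a concrete illustration of the Gaussian machinery above: with conditionally independent Gaussian cues and a Gaussian prior in log coordinates, the posterior mean is a precision-weighted average of the cue means and the posterior precision is the sum of the individual precisions. The following sketch is our own generic version, not the paper's exact Eqs. 3–6; the function name and numbers are illustrative only.

```python
import numpy as np

# Generic Gaussian cue combination (our illustration): given cue/prior
# means and SDs, return the posterior mean and SD. Precision = 1/SD^2;
# the posterior precision is the sum of precisions, and the posterior
# mean is the precision-weighted average of the means.
def gaussian_posterior(means, sds):
    means = np.asarray(means, dtype=float)
    precisions = 1.0 / np.square(np.asarray(sds, dtype=float))
    post_var = 1.0 / precisions.sum()
    post_mean = post_var * np.sum(precisions * means)
    return post_mean, np.sqrt(post_var)

# e.g. a visual cue, a haptic cue, and a prior over a log-domain quantity:
mean, sd = gaussian_posterior([2.0, 2.2, 1.8], [0.1, 0.2, 0.5])
```

Note that the most reliable cue (smallest SD) dominates the posterior mean, while adding any cue always reduces the posterior SD.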
<p>Model observers'
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e292.jpg" mimetype="image"></inline-graphic>
</inline-formula>
were based on their posterior distributions: for MAP, only
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e293.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is needed, but for sample-averaging,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e294.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is also involved. And, motor noise
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e295.jpg" mimetype="image"></inline-graphic>
</inline-formula>
was always added.</p>
<p>A MAP observer's
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e296.jpg" mimetype="image"></inline-graphic>
</inline-formula>
likelihood is,
<disp-formula>
<graphic xlink:href="pcbi.1002080.e297"></graphic>
</disp-formula>
And a sample-averaging observer's likelihood is,
<disp-formula>
<graphic xlink:href="pcbi.1002080.e298"></graphic>
</disp-formula>
where
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e299.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is the number of samples that were averaged.</p>
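The difference between the two decision rules can be sketched numerically. In our own notation (not the paper's symbols), a sample-averaging observer draws k samples from its Gaussian posterior, averages them, and adds motor noise, so its response is Gaussian with variance sigma_post**2 / k + sigma_motor**2; the MAP observer corresponds to the k → ∞ limit, where only motor noise remains.

```python
import numpy as np

# Hypothetical sketch of a sample-averaging observer's response on one
# trial: average k posterior samples, then add Gaussian motor noise.
def sample_averaging_response(mu_post, sigma_post, sigma_motor, k, rng):
    samples = rng.normal(mu_post, sigma_post, size=k)  # posterior samples
    return samples.mean() + rng.normal(0.0, sigma_motor)

rng = np.random.default_rng(0)
mu_post, sigma_post, sigma_motor, k = 6.0, 0.2, 0.05, 4
responses = np.array([sample_averaging_response(mu_post, sigma_post,
                                                sigma_motor, k, rng)
                      for _ in range(100_000)])
# Predicted response SD: posterior variance shrinks by 1/k, motor
# noise adds in quadrature.
predicted_sd = np.sqrt(sigma_post**2 / k + sigma_motor**2)
```

With k = 1 this reduces to pure posterior sampling; as k grows, the response variance shrinks toward the motor-noise floor, recovering MAP-like behavior.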
<p>The prior over
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e300.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e301.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, was chosen to be uninformative: we assumed uniform prior distributions over a very large range of possible parameter values.</p>
</sec>
<sec id="s2f3">
<title>Comparing models with humans</title>
<p>For each model, for each participant, we wished to find the most probable parameters given the data we measured. However, because the models have numerous parameters and are nonlinear, optimizing the parameters is difficult. Also, knowing the
<italic>posterior distribution</italic>
of parameter values is preferred to optimizing the parameters, because optimizing is subject to overfitting, while the posterior distribution implicitly captures the quality of the fit. We used Markov Chain Monte Carlo (MCMC)
<xref ref-type="bibr" rid="pcbi.1002080-Gelman1">[16]</xref>
to approximate samples from the posterior, then within each participant compared how well each model explains the data by computing a standard model “goodness” metric called DIC
<xref ref-type="bibr" rid="pcbi.1002080-Gelman1">[16]</xref>
. DIC rewards predictive power and penalizes model complexity; lower DIC scores mean better fits. DIC is similar to related model goodness metrics, like Akaike Information Criterion and Bayesian Information Criterion, but is especially suited to MCMC output. So, for each participant the model with the lowest DIC score provided the best account of the data, in terms of explanatory power
<italic>and</italic>
parsimony.</p>
<p>Battaglia et al.'s
<xref ref-type="bibr" rid="pcbi.1002080-Battaglia1">[4]</xref>
interception experiment collected each participant's
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e302.jpg" mimetype="image"></inline-graphic>
</inline-formula>
measurements given
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e303.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e304.jpg" mimetype="image"></inline-graphic>
</inline-formula>
in two conditions, no-haptic and haptic. In the no-haptic trials, all models used Eq. 5 to draw perceptual inferences. In the haptic trials, models that integrate haptic information (Q I) used Eq. 3 while models that did not integrate haptic information again used Eq. 5. Different trials were treated as independent, and we computed the total experimental likelihood,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e305.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, for each model
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e306.jpg" mimetype="image"></inline-graphic>
</inline-formula>
as the product of each trial's likelihood,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e307.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, where
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e308.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is the trial number.</p>
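Because the trials are independent, the total experimental likelihood is a product of per-trial likelihoods; numerically, one sums log-likelihoods instead, which avoids underflow over hundreds of trials. The sketch below is our own generic version, modelling each trial's response as Gaussian around a model-predicted mean; the function name is ours.

```python
import numpy as np

# Total log-likelihood over independent trials (our illustration):
# log of the product of Gaussian trial likelihoods = sum of logs.
def total_log_likelihood(responses, pred_means, pred_sds):
    z = (responses - pred_means) / pred_sds
    per_trial = -0.5 * z**2 - np.log(pred_sds) - 0.5 * np.log(2.0 * np.pi)
    return float(per_trial.sum())
```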
<p>The likelihood (
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e309.jpg" mimetype="image"></inline-graphic>
</inline-formula>
) and prior (
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e310.jpg" mimetype="image"></inline-graphic>
</inline-formula>
) terms allowed us to draw MCMC-simulated
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e311.jpg" mimetype="image"></inline-graphic>
</inline-formula>
parameter samples from the posterior
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e312.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(Metropolis-Hastings specifically). For each model,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e313.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, for each participant, we drew a set of
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e314.jpg" mimetype="image"></inline-graphic>
</inline-formula>
simulated MCMC parameter samples,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e315.jpg" mimetype="image"></inline-graphic>
</inline-formula>
; we ran 360 parallel chains, each with 15,000 “burn-in” (discarded) samples followed by 6,000 stored, valid samples (
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e316.jpg" mimetype="image"></inline-graphic>
</inline-formula>
million valid samples).</p>
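A random-walk Metropolis-Hastings sampler of the kind described here can be sketched as follows. This is a toy version with a standard-normal target, not the paper's actual posterior, proposal distribution, or chain settings; all names and numbers are ours.

```python
import numpy as np

# Minimal random-walk Metropolis-Hastings (our illustration): propose a
# Gaussian step, accept with probability min(1, p(theta')/p(theta)),
# discard burn-in samples, and keep the rest.
def metropolis_hastings(log_post, theta0, n_burn, n_keep, step, rng):
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    kept = []
    for i in range(n_burn + n_keep):
        prop = theta + rng.normal(0.0, step, size=theta.shape)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject
            theta, lp = prop, lp_prop
        if i >= n_burn:                           # discard burn-in
            kept.append(theta.copy())
    return np.array(kept)

# Toy target: a standard-normal posterior over one parameter.
rng = np.random.default_rng(1)
chain = metropolis_hastings(lambda t: -0.5 * np.sum(t**2),
                            [3.0], n_burn=2000, n_keep=20000,
                            step=1.0, rng=rng)
```

In practice one runs many such chains in parallel, as the paper does, both to parallelize the computation and to diagnose convergence.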
<p>DIC scores are based on the
<italic>deviance</italic>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e317.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, of the MCMC posterior model parameter samples,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e318.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. DIC sums two terms, the expected deviance
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e319.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, and the model complexity
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e320.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, resulting in
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e321.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. Here
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e322.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is a “good” parameter estimate for the model, usually computed as the mean, median, or another central-tendency statistic of the set
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e323.jpg" mimetype="image"></inline-graphic>
</inline-formula>
; we used a robust mean to compute
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e324.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, which eliminated outlier parameter values. Significance metrics for DIC scores, in the traditional frequentist sense, have not been exhaustively studied; however, DIC is a Bayesian analog of the Akaike Information Criterion (AIC), and
<xref ref-type="bibr" rid="pcbi.1002080-Spiegelhalter1">[17]</xref>
suggest that models whose AIC/DIC exceeds the “best” model's by 3–7 have “considerably less support”. We chose to report DIC differences greater than 10 as significant, and greater than 15 as “highly significant”. We computed each model's DIC score separately for each participant, to quantitatively select the best explanation of that participant's pattern of responses.</p>
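The DIC computation described in this paragraph can be sketched directly from MCMC output. The version below is our own minimal implementation, with a toy Gaussian-likelihood check; the paper uses a robust mean for the central parameter estimate, whereas we use a plain mean here for simplicity.

```python
import numpy as np

# DIC from MCMC posterior samples (our sketch): Dbar is the mean deviance
# over samples, D(theta_hat) the deviance at a central estimate,
# pD = Dbar - D(theta_hat) the effective complexity, DIC = Dbar + pD.
def dic(log_lik, samples):
    deviances = np.array([-2.0 * log_lik(th) for th in samples])
    d_bar = deviances.mean()                  # expected deviance
    theta_hat = samples.mean(axis=0)          # a central parameter estimate
    p_d = d_bar + 2.0 * log_lik(theta_hat)    # = Dbar - D(theta_hat)
    return d_bar + p_d, p_d

# Toy check: Gaussian likelihood with known unit SD; the posterior over
# the mean of 100 points has SD 0.1, and pD should come out near 1
# (one effective parameter).
rng = np.random.default_rng(2)
data = rng.normal(0.0, 1.0, size=100)
def log_lik(theta):
    return -0.5 * np.sum((data - theta[0]) ** 2)  # up to a constant
post_samples = rng.normal(data.mean(), 0.1, size=(5000, 1))
score, p_d = dic(log_lik, post_samples)
```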
</sec>
</sec>
</sec>
<sec sec-type="" id="s3">
<title>Results</title>
<sec id="s3a">
<title>Human performance</title>
<p>The central result of our study is quantitative selection of the model that best explains the data, which we determine by comparing the models' DIC scores, to answer the four hypothetical questions posed above.
<xref ref-type="fig" rid="pcbi-1002080-g005">Figure 5A</xref>
shows raw DIC scores, and
<xref ref-type="fig" rid="pcbi-1002080-g005">Figure 5B</xref>
shows the difference between the best model's DIC (indicated by a circle on the x-axis) and the other models' DIC scores. We defined DIC significance as described in the previous subsection: models whose DIC differed by more than 10 were deemed “significantly” different (dashed horizontal line and * in
<xref ref-type="fig" rid="pcbi-1002080-g005">Figure 5B</xref>
) and those differing by more than 15 were deemed “highly significantly” different (solid horizontal line and ** in
<xref ref-type="fig" rid="pcbi-1002080-g005">Figure 5B</xref>
); this is a conservative modification of the criteria mentioned in
<xref ref-type="bibr" rid="pcbi.1002080-Spiegelhalter1">[17]</xref>
.</p>
<fig id="pcbi-1002080-g005" position="float">
<object-id pub-id-type="doi">10.1371/journal.pcbi.1002080.g005</object-id>
<label>Figure 5</label>
<caption>
<title>Models' DICs.</title>
<p>
<bold>A.</bold>
DIC scores for each sample-averaging model for each participant (lower is better); significance (*) is defined as a DIC difference of 10, high significance (**) as a difference of 15. Each cluster of 6 bars shows the DICs of Models 1, 3, 5, 7, 9, and 11; the 6 clusters of bars correspond to the 6 participants.
<bold>B.</bold>
Differences in DIC between the best model (indicated by a circle on the x-axis) and the other models for that participant (higher values mean a DIC score closer to the best model's). Only the odd-numbered models, which correspond to the sample-averaging decision-making procedures, are shown, because their DICs are substantially better than those of their MAP counterparts.</p>
</caption>
<graphic xlink:href="pcbi.1002080.g005"></graphic>
</fig>
<p>We found that all participants incorporate haptic size information to make their distance judgments (Q I). Also, we found that 5 of 6 participants misestimated their haptic size noise and thus incorporated the haptic information less heavily than optimally prescribed, while one participant applied the haptic cue in proportion to its reliability (Q II); the following section addresses the nature of the misestimation. All participants incorporated the visual image-size cue optimally, in accordance with its noise magnitude (Q III). All participants favored a sample-averaging strategy over MAP decision-making (Q IV). With respect to Q IV, the DIC scores were always worse for a MAP model than for its sample-averaging counterpart, by an average DIC difference of 129 (
<xref ref-type="fig" rid="pcbi-1002080-g005">Figure 5</xref>
), so we exclusively focus on the sample-averaging models (odd numbers) for the remaining discussion.
<xref ref-type="fig" rid="pcbi-1002080-g005">Figure 5</xref>
depicts each participant's DIC scores for each sample-averaging model: the left graph shows the absolute DIC values, and the right graph shows the differences between the best model's DIC and the other models' DICs. Participant 6 was an author. Participant 3's DIC differences between Model 7 and Models 5 and 11 were not significant under our conservative criteria; however, Model 7 was still better by DICs of
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e325.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e326.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, respectively, which is considered marginally significant under typical uses of AIC/DIC
<xref ref-type="bibr" rid="pcbi.1002080-Spiegelhalter1">[17]</xref>
.</p>
<p>Participant 5, the only participant whose DIC favored the hypothesis that the haptic noise magnitude was correctly known (Q II), had the worst DIC scores across participants, as well as parameter value estimates substantially different from the other participants' (see the next paragraphs). Upon closer inspection, participant 5's data was qualitatively the noisiest: in Battaglia et al.'s
<xref ref-type="bibr" rid="pcbi.1002080-Battaglia1">[4]</xref>
simple regression analysis of this data, participant 5's responses were determined to be so significantly different from the other participants' that they were excluded as an outlier. We included this participant in the current analysis to determine whether there were patterns the previous analysis had not detected. Though the parameters still yield meaningful values, because of the major differences in raw DIC scores, the DIC-favored model, and the parameter value estimates, and because of the general noisiness of the response data, we strongly suspect this participant either was not focused on performing the task or was randomly selecting answers on a large fraction of the trials; in either case, this participant should be distinguished in further analysis due to these aberrations, so we report participant 5's parameter estimates separately from those of the other “inlier” participants.</p>
<p>
<xref ref-type="fig" rid="pcbi-1002080-g006">Figure 6</xref>
shows Participant 1's model-predicted
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e327.jpg" mimetype="image"></inline-graphic>
</inline-formula>
compared against the actual
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e328.jpg" mimetype="image"></inline-graphic>
</inline-formula>
values, for the best model, 7, as well as several that differ by one assumption (
<xref ref-type="table" rid="pcbi-1002080-t001">Table 1</xref>
). The spread in the dots is due to sensory noise and the random posterior sampling process; how neatly the actual data fall within the ranges predicted by a particular model (black error bars) indicates the model's explanatory quality. Notice the pattern of more varied no-haptic vs. haptic
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e329.jpg" mimetype="image"></inline-graphic>
</inline-formula>
in Models 7, 11, and 5, a direct prediction of the sampling models over MAP. Though MAP decisions incur more bias in the no-haptic condition, they actually exhibit less trial-to-trial variance there than in the haptic condition, because the prior does not vary between trials while the more informative haptic cue does.</p>
<fig id="pcbi-1002080-g006" position="float">
<object-id pub-id-type="doi">10.1371/journal.pcbi.1002080.g006</object-id>
<label>Figure 6</label>
<caption>
<title>Effect of model on predicted vs. actual
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e330.jpg" mimetype="image"></inline-graphic>
</inline-formula>
.</title>
<p>Each plot shows the predicted
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e331.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(x-axis) versus the actual
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e332.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(y-axis) across several different models for Participant 1. The black diagonal line represents perfect correspondence between mean predicted and actual
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e333.jpg" mimetype="image"></inline-graphic>
</inline-formula>
values; the black error bars represent the 95% confidence interval of the model's predicted
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e334.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. The colored dots represent all the actual measured
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e335.jpg" mimetype="image"></inline-graphic>
</inline-formula>
values; the top row represents no-haptic trials (red, labeled
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e336.jpg" mimetype="image"></inline-graphic>
</inline-formula>
), the bottom row represents haptic trials (blue, labeled
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e337.jpg" mimetype="image"></inline-graphic>
</inline-formula>
). Each column is a different model (numbered along top row, see
<xref ref-type="table" rid="pcbi-1002080-t001">Table 1</xref>
), all predicted
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e338.jpg" mimetype="image"></inline-graphic>
</inline-formula>
are based on the inferred, MCMC-expected parameters for that model. Model 7 (
<italic>column 1</italic>
) is the best model for the inlier participants, and the others are variations of Model 7 with one difference: Model 3 (
<italic>column 2</italic>
) does not use haptic cues; Model 10 (
<italic>column 3</italic>
) uses MAP estimation instead of sampling; Model 11 (
<italic>column 4</italic>
) describes an observer that knows the haptic noise accurately; Model 5 (
<italic>column 5</italic>
) describes an observer that uses inaccurate knowledge of the image noise. When comparing Model 7 to the worse-fit models, consider the correspondences between predicted and actual
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e339.jpg" mimetype="image"></inline-graphic>
</inline-formula>
means and variances, i.e., how neatly the actual data fall within the predicted bounds. Also, for Model 3 note that the
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e340.jpg" mimetype="image"></inline-graphic>
</inline-formula>
predictions are not as constrained as Model 7's. And notice that Model 10 cannot jointly predict the higher variance in the
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e341.jpg" mimetype="image"></inline-graphic>
</inline-formula>
actual
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e342.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and the lower variance in the
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e343.jpg" mimetype="image"></inline-graphic>
</inline-formula>
actual
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e344.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. Models 11 and 5 have predictive accuracy nearer that of Model 7, but still have worse fits as summarized by the DIC scores (
<xref ref-type="fig" rid="pcbi-1002080-g005">Figure 5</xref>
). (Slight differences between Models 7 and 3's
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e345.jpg" mimetype="image"></inline-graphic>
</inline-formula>
predictions are due to stochasticity in the MCMC sampling procedure.)</p>
</caption>
<graphic xlink:href="pcbi.1002080.g006"></graphic>
</fig>
<p>A possible concern is that participants learned to use the haptic cue during the course of the experiment, and that the weak DIC scores of Models 1–4 in comparison to Models 5–12 actually reflect the effects of associative learning rather than knowledge the participants brought into the experiment. We evaluated this possibility by performing the same DIC analysis on data from only the first day to test whether Models 5–12 were still favored over their 1–4 counterparts. The results unequivocally confirm the results on the data from the final 3 days above: for every participant, the DIC analysis across the models shows that the no-haptic models (1–4) have worse DIC scores than their haptic model counterparts (5–12). The best no-haptic models' DICs are below the best haptic models' DICs by margins of {
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e346.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e347.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e348.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e349.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e350.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e351.jpg" mimetype="image"></inline-graphic>
</inline-formula>
} for Participants 1 through 6, respectively. In fact, restricting attention to the sampling models, even the no-haptic models (1 and 3) with the best DIC scores still have worse scores than the haptic models (5, 7, 9, and 11) with the worst DIC scores. This firmly supports the conclusion that the haptic cue was used even on the first day of trials.</p>
<p>Though it might seem that, given the 6 to 10 “free” parameters in our general observer model, we could “fit” any data, we actually infer the parameters and use the posterior's expected values rather than the most probable a posteriori parameters. Moreover, DIC acknowledges the possibility of overfitting and counters it by penalizing overfit models through the complexity term, affirming that the chosen model's structure and parameters are accurate and robust explanations of the humans' judgments. Furthermore, because we encoded different hypotheses within the models, we could clearly distinguish the hypotheses best supported by the data. Lastly, despite the possibility that we
<italic>could</italic>
fit a variety of data, the remainder of this section shows that the individual inferred parameter values are consistent with known perceptual parameters measured in other studies.</p>
</sec>
<sec id="s3b">
<title>Inferred model parameters</title>
<p>A secondary result of this work, beyond providing answers to the 4 hypothetical questions, is that the inferred parameter values (
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e352.jpg" mimetype="image"></inline-graphic>
</inline-formula>
) our analysis yielded can be meaningfully interpreted. Though there is no guarantee that the inferred parameters are unique, they offer an indication of what the analysis finds probable. All reported parameters are MCMC expectations, from which we compute means
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e353.jpg" mimetype="image"></inline-graphic>
</inline-formula>
SEs across participants and report the values in log coordinates. First, we present the SDs in terms of Weber fractions for the sensory noise, with discrimination thresholds corresponding to
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e354.jpg" mimetype="image"></inline-graphic>
</inline-formula>
.</p>
<p>The image-size noise SD,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e355.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, and assumed noise SD,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e356.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, were coupled in the best-fit models (7 and 11) for all participants. Their values correspond to Weber fractions of
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e357.jpg" mimetype="image"></inline-graphic>
</inline-formula>
to
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e358.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(mean
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e359.jpg" mimetype="image"></inline-graphic>
</inline-formula>
SE of
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e360.jpg" mimetype="image"></inline-graphic>
</inline-formula>
) for the inlier participants, and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e361.jpg" mimetype="image"></inline-graphic>
</inline-formula>
for participant 5. This is comparable to the Weber fractions of
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e362.jpg" mimetype="image"></inline-graphic>
</inline-formula>
measured in humans by
<xref ref-type="bibr" rid="pcbi.1002080-Mckee1">[18]</xref>
for parallel line separation discrimination, and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e363.jpg" mimetype="image"></inline-graphic>
</inline-formula>
by
<xref ref-type="bibr" rid="pcbi.1002080-Ono1">[19]</xref>
for line length discrimination. Because our task did not involve interval-wise discrimination of pairs of stimuli, but rather absolute perception, it is to be expected that our noise magnitudes are slightly higher.</p>
<p>The haptic noise SDs,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e364.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, and assumed haptic noise SD,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e365.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, were uncoupled in the inlier participants' best-fit model (7), and coupled for participant 5's best fit model. The inlier participants' haptic noise SDs correspond to Weber fractions of between
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e366.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e367.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(mean
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e368.jpg" mimetype="image"></inline-graphic>
</inline-formula>
SE of
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e369.jpg" mimetype="image"></inline-graphic>
</inline-formula>
), and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e370.jpg" mimetype="image"></inline-graphic>
</inline-formula>
for participant 5. A Weber fraction of
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e371.jpg" mimetype="image"></inline-graphic>
</inline-formula>
was measured in humans by
<xref ref-type="bibr" rid="pcbi.1002080-Ernst1">[20]</xref>
for haptic size discrimination of objects between
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e372.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e373.jpg" mimetype="image"></inline-graphic>
</inline-formula>
mm in width using a similar haptic stimulus presentation apparatus, but with two fingers gripping the object rather than one finger probing its size. Because two fingers are likely to provide a more precise size measurement, and because their participants performed interval discriminations of pairs of objects, our somewhat elevated Weber fractions are reasonable values. The inlier participants overestimated their haptic noise SDs, with their assumptions corresponding to Weber fractions of
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e374.jpg" mimetype="image"></inline-graphic>
</inline-formula>
to
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e375.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(mean
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e376.jpg" mimetype="image"></inline-graphic>
</inline-formula>
SE of
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e377.jpg" mimetype="image"></inline-graphic>
</inline-formula>
). The consequences of overestimating haptic noise are that the observers do not achieve the level of disambiguation possible by fully incorporating the haptic cue, and apply prior knowledge about the ball's size and distance relatively more heavily (
<xref ref-type="fig" rid="pcbi-1002080-g003">Figure 3</xref>
).</p>
<p>Our analysis provided information about the observer models' prior knowledge, and found it strikingly similar to the sample statistics of the experimental stimuli's distances and sizes, with slightly higher SDs (remember the stimuli were uniformly distributed in the mm domain). The mean
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e378.jpg" mimetype="image"></inline-graphic>
</inline-formula>
SE estimated prior distance mean and SD parameters,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e379.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e380.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, across all participants were
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e381.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e382.jpg" mimetype="image"></inline-graphic>
</inline-formula>
log-mm, respectively; the experimental distance mean and SD were
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e383.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e384.jpg" mimetype="image"></inline-graphic>
</inline-formula>
log-mm, respectively. The mean estimated prior size mean and SD parameters,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e385.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e386.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, across participants were
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e387.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e388.jpg" mimetype="image"></inline-graphic>
</inline-formula>
log-mm, respectively; the experimental size mean and SD were
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e389.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e390.jpg" mimetype="image"></inline-graphic>
</inline-formula>
log-mm, respectively. This indicates participants learned the range of possible stimuli presented in the experiment and applied that knowledge toward improving their judgments, to the effect of lowering the posterior variance (
<xref ref-type="fig" rid="pcbi-1002080-g003">Figure 3</xref>
). To further investigate the source of participants' prior knowledge, we ran our full analysis on only the first day of participants' trials, to measure what differences in the inferred parameters exist between early and late in the experiment. We found that participants' first-day priors for
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e391.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e392.jpg" mimetype="image"></inline-graphic>
</inline-formula>
were
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e393.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e394.jpg" mimetype="image"></inline-graphic>
</inline-formula>
log-mm, respectively; and, participants' first-day priors for
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e395.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e396.jpg" mimetype="image"></inline-graphic>
</inline-formula>
were
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e397.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e398.jpg" mimetype="image"></inline-graphic>
</inline-formula>
log-mm, respectively. So, the prior means did not shift significantly (in terms of SE interval overlap), but the prior SD values did. It appears that participants rapidly learned the prior means, which are more easily estimable from experience and also may be assumed to some extent (the true prior distance mean is at the center of the virtual workbench, and the balls' sizes were directly observable in haptic condition trials). However, participants appeared to use more diffuse prior
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e399.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e400.jpg" mimetype="image"></inline-graphic>
</inline-formula>
parameters early in the experiment, which is consistent with making weaker prior assumptions about the range of distance/size variation (top row of
<xref ref-type="fig" rid="pcbi-1002080-g003">Figure 3</xref>
).</p>
<p>Our analysis provided estimates of participants' motor noise SD, whose mean
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e401.jpg" mimetype="image"></inline-graphic>
</inline-formula>
SE across participants was
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e402.jpg" mimetype="image"></inline-graphic>
</inline-formula>
log-mm, which amounts to a SD of
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e403.jpg" mimetype="image"></inline-graphic>
</inline-formula>
mm at a reach distance of
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e404.jpg" mimetype="image"></inline-graphic>
</inline-formula>
mm, and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e405.jpg" mimetype="image"></inline-graphic>
</inline-formula>
mm at a reach distance of
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e406.jpg" mimetype="image"></inline-graphic>
</inline-formula>
mm, the extremal distances presented in the experiment. A value of
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e407.jpg" mimetype="image"></inline-graphic>
</inline-formula>
log-mm was reported
<xref ref-type="bibr" rid="pcbi.1002080-vanBeers1">[21]</xref>
under similar reaching conditions.</p>
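<p>The conversion from a noise SD in log-mm to a SD in mm can be sketched as follows. This is an illustrative calculation only: the sigma value and reach distances below are hypothetical stand-ins for the fitted values reported above, assuming the motor noise is additive in natural-log-mm space.</p>

```python
import math

def lognormal_sd_mm(sigma_log: float, distance_mm: float) -> float:
    """Exact SD (in mm) of exp(N(ln(distance_mm), sigma_log^2)):
    additive Gaussian motor noise in log-mm space around a reach distance."""
    s2 = sigma_log ** 2
    return distance_mm * math.exp(s2 / 2.0) * math.sqrt(math.expm1(s2))

sigma_log = 0.02            # hypothetical motor-noise SD in log-mm
for d in (300.0, 700.0):    # hypothetical extremal reach distances (mm)
    exact = lognormal_sd_mm(sigma_log, d)
    approx = d * sigma_log  # small-sigma approximation: SD scales with distance
    print(f"{d:.0f} mm reach -> SD {exact:.2f} mm (approx {approx:.2f} mm)")
```

<p>Note that the SD in mm grows roughly linearly with reach distance, which is why a single log-mm noise parameter predicts larger absolute scatter at farther targets.</p>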
<p>The sample-averaging models generally outperformed the MAP-estimate models (Q IV) with respect to DIC scores. The inlier participants had
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e408.jpg" mimetype="image"></inline-graphic>
</inline-formula>
values between
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e409.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e410.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(mean
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e411.jpg" mimetype="image"></inline-graphic>
</inline-formula>
SE of
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e412.jpg" mimetype="image"></inline-graphic>
</inline-formula>
), and participant 5's
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e413.jpg" mimetype="image"></inline-graphic>
</inline-formula>
estimate was
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e414.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. Of course in our model
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e415.jpg" mimetype="image"></inline-graphic>
</inline-formula>
must be integer-valued; these real-valued estimates are means taken across our MCMC analysis samples. An alternative interpretation of
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e416.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is that it is an exponent applied to the posterior distribution, from which one sample is then drawn after renormalizing. For a Gaussian distribution, because
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e417.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, drawing
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e418.jpg" mimetype="image"></inline-graphic>
</inline-formula>
sample,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e419.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, from
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e420.jpg" mimetype="image"></inline-graphic>
</inline-formula>
yields a Gaussian-distributed estimate:
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e421.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. Likewise, drawing
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e422.jpg" mimetype="image"></inline-graphic>
</inline-formula>
samples,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e423.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, from the unexponentiated
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e424.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and averaging yields a sample mean with the same distribution:
<disp-formula>
<graphic xlink:href="pcbi.1002080.e425"></graphic>
</disp-formula>
</p>
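<p>This equivalence (averaging k samples from a Gaussian posterior versus drawing one sample from that posterior raised to the k-th power, i.e. a Gaussian with variance shrunk by a factor of k) can be checked numerically. The following is a minimal sketch with hypothetical posterior parameters:</p>

```python
import random
import statistics

random.seed(0)
m, sigma, k, n_trials = 0.0, 1.0, 4, 20000

# Response rule 1: average k samples drawn from the posterior N(m, sigma^2).
avg_of_k = [statistics.fmean(random.gauss(m, sigma) for _ in range(k))
            for _ in range(n_trials)]

# Response rule 2: one sample from the exponentiated posterior,
# which for a Gaussian is N(m, sigma^2 / k).
one_from_powk = [random.gauss(m, sigma / k ** 0.5) for _ in range(n_trials)]

# Both response distributions have SD sigma / sqrt(k) = 0.5.
print(statistics.stdev(avg_of_k), statistics.stdev(one_from_powk))
```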
<p>Taken together, the DIC analysis and the plausibility of the inferred parameters indicate that model 7 is both structurally and parametrically accurate. This strongly supports model 7 and its encoded hypotheses as a coherent computational account of the underlying processes responsible for size-aided distance perception.</p>
</sec>
</sec>
<sec sec-type="" id="s4">
<title>Discussion</title>
<p>We conclude that humans can use haptic size cues to disambiguate and improve distance perception, but that the degree to which they incorporate haptic size information is lower than the ideal observer prescribes. We also conclude that the distance responses are best explained as a process of drawing several samples from the posterior distribution over distance given sensations, and averaging them to form a distance estimate. This behavior is broadly consistent with a Bayesian perceptual inference model in which mistaken generative knowledge about haptic cues is used, and beliefs about distance are accessed by drawing samples from an internal posterior distribution.</p>
<p>The brain's use of sensory cues for disambiguating others has been reported in a variety of perceptual domains, and broadly falls under the category “perceptual constancy”. Constancy effects, like the present distance constancy, involve situations in which an observer cannot unambiguously estimate a scene property due to confounding influences from other “nuisance” properties, and so leverages “auxiliary” cues (in this study, haptic size) to rule out inconsistent possibilities. Auxiliary disambiguation effects, like constancy, have other names in the literature, such as “cue promotion”
<xref ref-type="bibr" rid="pcbi.1002080-Maloney1">[22]</xref>
, “simultaneous contrast”
<xref ref-type="bibr" rid="pcbi.1002080-Gerrits1">[23]</xref>
, and “taking-into-account”
<xref ref-type="bibr" rid="pcbi.1002080-Epstein1">[24]</xref>
. Many studies have reported “size constancy”, distance cues disambiguating object size perception
<xref ref-type="bibr" rid="pcbi.1002080-Boring1">[25]</xref>
<xref ref-type="bibr" rid="pcbi.1002080-Battaglia4">[32]</xref>
, so it is not entirely surprising that size cues can conversely disambiguate distance perception.</p>
<p>That humans underestimate non-visual cue reliabilities, and thus integrate such cues less strongly than is optimal, has been reported before by
<xref ref-type="bibr" rid="pcbi.1002080-Battaglia2">[10]</xref>
,
<xref ref-type="bibr" rid="pcbi.1002080-Battaglia4">[32]</xref>
. There are several potential reasons for this phenomenon. One idea that has recently garnered support
<xref ref-type="bibr" rid="pcbi.1002080-Roach1">[33]</xref>
<xref ref-type="bibr" rid="pcbi.1002080-Sato1">[36]</xref>
is that sensory cues are weighted in accordance with their believed causal relationships to the unobserved scene properties: when the brain believes a cue is unrelated to the desired scene property, it down-weights or ignores that cue outright. In the present study, this would mean the brain is unwilling to apply the haptic size cues fully because they might originate from a source independent of the ball; imagine, for instance, the hand touching a ball hidden behind a photograph of a different ball. Such miscorrespondences are uncommon in nature, but phenomena like “prism adaptation” demonstrate that the brain can accommodate and recalibrate in such situations. Another possibility is that non-visual cues to spatial properties are experienced far less frequently in everyday life, and have had fewer opportunities to be calibrated, so they are mistrusted.</p>
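<p>The under-weighting of the haptic cue can be illustrated with standard precision-weighted cue combination. The sketch below is not the paper's fitted model, and all cue SDs and means are hypothetical; a discount factor below 1 on the haptic precision mimics an observer who mistrusts the haptic cue.</p>

```python
def combine_cues(mu_vis, sigma_vis, mu_hap, sigma_hap, haptic_discount=1.0):
    """Precision-weighted combination of a visual and a haptic cue.
    haptic_discount < 1 deflates the assumed haptic precision,
    mimicking under-use of the haptic cue."""
    prec_vis = 1.0 / sigma_vis ** 2
    prec_hap = haptic_discount / sigma_hap ** 2
    w_hap = prec_hap / (prec_vis + prec_hap)
    estimate = (1.0 - w_hap) * mu_vis + w_hap * mu_hap
    return estimate, w_hap

# Ideal observer: haptic cue has half the SD of vision, so w_hap = 0.8.
ideal, w_ideal = combine_cues(0.0, 2.0, 1.0, 1.0)
# Suboptimal observer: same cues, but the assumed haptic precision is halved.
subopt, w_sub = combine_cues(0.0, 2.0, 1.0, 1.0, haptic_discount=0.5)
```

<p>The discounted observer's estimate sits closer to the visual cue than the ideal observer's, which is the qualitative signature reported above.</p>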
<p>Our finding that all our observers' responses are best modeled as sampling the posterior is consistent with recent studies and ideas about the representation and computation of probability in the brain. Using posterior sampling to generate responses in a choice task should manifest as probability matching of the options, a common finding in many behavioral tasks, including a perceptual audio-visual cue-combination task
<xref ref-type="bibr" rid="pcbi.1002080-Wozny1">[13]</xref>
. Sampling has also been used to provide a novel explanation for perceptual switching in multistable displays
<xref ref-type="bibr" rid="pcbi.1002080-Sundareswara1">[11]</xref>
,
<xref ref-type="bibr" rid="pcbi.1002080-Schrater1">[37]</xref>
. Moreover, sampling provides an interpretation of neural activity in population codes and makes difficult probabilistic computations simple to implement neurally (see review by
<xref ref-type="bibr" rid="pcbi.1002080-Fiser1">[38]</xref>
).</p>
<p>Although Bayesian decisions are usually modeled as maximizing the posterior, maximization is not the best decision rule in all instances. MAP's optimality depends on both the task and the veridicality of the decision maker's posterior distribution. MAP assumes the decision maker's goal is to maximize the number of correct responses and that the posterior is based on the correct generative model for the data. When the posterior is not correct, basing responses on sampling provides exploration that can be used to improve the decision maker's policy. This idea has been extensively explored within reinforcement learning, where exploration is frequently implemented using a softmax decision strategy
<xref ref-type="bibr" rid="pcbi.1002080-Sutton1">[39]</xref>
where choices are stochastically sampled from an exponentiated distribution over the values of a set of discrete options. This idea generalizes to continuous decision variables. The value of an estimate is determined by the reward function for the task. In our decision task, participants were “correct” whenever their choices fell within a narrow region relative to their posterior distribution. Approximating the experimental reward function as a delta function, the optimal strategy is to maximize the posterior. However, if the observer needs to improve its estimate of the posterior, then it is important to estimate the error. Sampling from the posterior gives a set of values from which any performance statistic can be computed, making it a reasonable strategy when an observer needs information to learn, i.e., to assess and improve performance.</p>
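<p>This point can be sketched with a toy responder; the internal posterior parameters and sample count below are hypothetical. A sample-averaging responder obtains an error estimate as a by-product, whereas a MAP responder returns only a point:</p>

```python
import random
import statistics

random.seed(1)
post_mean, post_sd, k = 450.0, 12.0, 8   # hypothetical internal posterior (mm)

samples = [random.gauss(post_mean, post_sd) for _ in range(k)]
response = statistics.fmean(samples)        # the overt distance response
error_estimate = statistics.stdev(samples)  # by-product usable for learning

# A MAP responder would emit post_mean on every trial and gain no
# sample-based measure of its own uncertainty.
```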
<p>Though our models posit observers draw
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e426.jpg" mimetype="image"></inline-graphic>
</inline-formula>
samples directly from the posterior and average them, any decision rule that is sensitive to the posterior variance may produce similar predictions; for instance, participants may internally exponentiate the posterior and draw exactly one sample (detailed in
<xref ref-type="sec" rid="s3">Results</xref>
). Under this interpretation, greater exponents sharpen the posterior more; as the exponent approaches infinity, the (re-normalized) posterior approaches a delta function at the MAP estimate. This is a general strategy used in many machine learning domains to transition smoothly between the posterior, the MAP estimate, and “watered-down” versions of the posterior. However, we find this account unappealing because it implies that the observer's underlying perceptual machinery prefers exponentiating the posterior to drawing more than one sample. Also, though our models assume posterior sample-averaging is a source of behavioral response variance (
<xref ref-type="fig" rid="pcbi-1002080-g004">Figure 4</xref>
), another possibility is that observers have uncertainty in the parameter values that characterize their generative knowledge itself, and actually draw samples of generative parameters instead of using deterministic parameter estimates. For instance, when combining haptic cues they may sample from an internal distribution over haptic reliability (
<inline-formula>
<inline-graphic xlink:href="pcbi.1002080.e427.jpg" mimetype="image"></inline-graphic>
</inline-formula>
). This could be a strategy for learning when the brain is uncertain about internal generative model parameters; because the observer receives feedback, and presumably wishes to calibrate the internal perceptual model, varying behavior by using different samples of internal model parameters avoids redundant feedback associated with similar behavioral responses to similar input stimuli.</p>
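<p>The parameter-sampling alternative can be sketched in the same spirit: the observer draws a fresh value of its haptic noise parameter on each trial, so the effective haptic weight, and hence the response, varies across repeats of the same stimulus. The lognormal spread and cue SDs below are hypothetical illustration values, not fitted quantities.</p>

```python
import random

random.seed(2)

def trial_haptic_weight(sigma_vis=2.0, sigma_hap_median=1.0, param_spread=0.3):
    """One trial: sample the observer's own haptic-noise parameter,
    then compute the resulting precision weight on the haptic cue."""
    sigma_hap = sigma_hap_median * random.lognormvariate(0.0, param_spread)
    prec_vis, prec_hap = 1.0 / sigma_vis ** 2, 1.0 / sigma_hap ** 2
    return prec_hap / (prec_vis + prec_hap)

weights = [trial_haptic_weight() for _ in range(10)]
# Identical stimuli now yield trial-to-trial variation in cue weighting.
```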
<p>Using a full probabilistic model of observers' sensation, perception, and decision-making processes provides us with answers to the four key questions we posed in the
<xref ref-type="sec" rid="s2">Model</xref>
section. This study's analysis of data reported by
<xref ref-type="bibr" rid="pcbi.1002080-Battaglia1">[4]</xref>
resulted in a much more comprehensive account of the computations responsible for distance and size perception. By formally characterizing a set of principled computational perception hypotheses, and choosing the best theoretical account of the measured phenomenology using Bayesian model selection tools, we demonstrated the power, robustness, and flexibility of this coherent framework for studying human cognition, and obtained deeper understanding of distance perception.</p>
</sec>
</body>
<back>
<ack>
<p>We thank Frank Jaekel and Al Yonas for helpful feedback on the project. We also thank our reviewers for insightful and thorough feedback.</p>
</ack>
<fn-group>
<fn fn-type="conflict">
<p>The authors have declared that no competing interests exist.</p>
</fn>
<fn fn-type="financial-disclosure">
<p>Our funding was provided by NIH grant R01EY015261 (
<ext-link ext-link-type="uri" xlink:href="http://grants.nih.gov/grants/funding/r01.htm">grants.nih.gov/grants/funding/r01.htm</ext-link>
), UMN Graduate School Fellowship (
<ext-link ext-link-type="uri" xlink:href="http://www.grad.umn.edu/fellowships">www.grad.umn.edu/fellowships</ext-link>
), NSF Graduate Research Fellowship (
<ext-link ext-link-type="uri" xlink:href="http://www.nsfgrfp.org">www.nsfgrfp.org</ext-link>
), UMN Doctoral Dissertation Fellowship (
<ext-link ext-link-type="uri" xlink:href="http://www.grad.umn.edu/fellowships/enrolled_students">www.grad.umn.edu/fellowships/enrolled_students</ext-link>
), and NIH NRSA grant F32EY019228-02 (
<ext-link ext-link-type="uri" xlink:href="http://grants.nih.gov/training/nrsa.htm">grants.nih.gov/training/nrsa.htm</ext-link>
). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.</p>
</fn>
</fn-group>
<ref-list>
<title>References</title>
<ref id="pcbi.1002080-Ittelson1">
<label>1</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ittelson</surname>
<given-names>W</given-names>
</name>
</person-group>
<year>1951</year>
<article-title>Size as a cue to distance: Static localization.</article-title>
<source>Am J Psychol</source>
<volume>64</volume>
<fpage>54</fpage>
<lpage>67</lpage>
<pub-id pub-id-type="pmid">14819380</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1002080-Yonas1">
<label>2</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Yonas</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Pettersen</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Granrud</surname>
<given-names>C</given-names>
</name>
</person-group>
<year>1982</year>
<article-title>Infants' sensitivity to familiar size as information for distance.</article-title>
<source>Child Dev</source>
<volume>53</volume>
<fpage>1285</fpage>
<lpage>1290</lpage>
<pub-id pub-id-type="pmid">7140431</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1002080-Mershon1">
<label>3</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mershon</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Gogel</surname>
<given-names>W</given-names>
</name>
</person-group>
<year>1975</year>
<article-title>Failure of familiar size to determine a metric for visually perceived distance.</article-title>
<source>Percept Psychophys</source>
<volume>17</volume>
<fpage>101</fpage>
<lpage>106</lpage>
</element-citation>
</ref>
<ref id="pcbi.1002080-Battaglia1">
<label>4</label>
<element-citation publication-type="other">
<person-group person-group-type="author">
<name>
<surname>Battaglia</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Schrater</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Kersten</surname>
<given-names>D</given-names>
</name>
</person-group>
<year>2005</year>
<article-title>Auxiliary object knowledge influences visually-guided interception behavior.</article-title>
<fpage>145</fpage>
<lpage>152</lpage>
<comment>In: Proceedings of the 2nd symposium on Applied perception in graphics and visualization. ACM, volume 95</comment>
</element-citation>
</ref>
<ref id="pcbi.1002080-Knill1">
<label>5</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Knill</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Richards</surname>
<given-names>W</given-names>
</name>
</person-group>
<year>1996</year>
<source>Perception as Bayesian inference</source>
<publisher-loc>Cambridge</publisher-loc>
<publisher-name>Cambridge University Press</publisher-name>
</element-citation>
</ref>
<ref id="pcbi.1002080-Kersten1">
<label>6</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kersten</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Mamassian</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Yuille</surname>
<given-names>A</given-names>
</name>
</person-group>
<year>2004</year>
<article-title>Object perception as Bayesian inference.</article-title>
<source>Annu Rev Psychol</source>
<volume>55</volume>
<fpage>271</fpage>
<lpage>304</lpage>
<pub-id pub-id-type="pmid">14744217</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1002080-Knill2">
<label>7</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Knill</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Pouget</surname>
<given-names>A</given-names>
</name>
</person-group>
<year>2004</year>
<article-title>The Bayesian brain: the role of uncertainty in neural coding and computation.</article-title>
<source>Trends Neurosci</source>
<volume>27</volume>
<fpage>712</fpage>
<lpage>719</lpage>
</element-citation>
</ref>
<ref id="pcbi.1002080-Koerding1">
<label>8</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Koerding</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Wolpert</surname>
<given-names>D</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>Bayesian decision theory in sensorimotor control.</article-title>
<source>Trends Cogn Sci</source>
<volume>10</volume>
<fpage>319</fpage>
<lpage>326</lpage>
<pub-id pub-id-type="pmid">16807063</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1002080-Pearl1">
<label>9</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Pearl</surname>
<given-names>J</given-names>
</name>
</person-group>
<year>1988</year>
<source>Probabilistic reasoning in intelligent systems: networks of plausible inference</source>
<publisher-loc>San Mateo, CA</publisher-loc>
<publisher-name>Morgan Kaufmann</publisher-name>
</element-citation>
</ref>
<ref id="pcbi.1002080-Battaglia2">
<label>10</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Battaglia</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Jacobs</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Aslin</surname>
<given-names>R</given-names>
</name>
</person-group>
<year>2003</year>
<article-title>Bayesian integration of visual and auditory signals for spatial localization.</article-title>
<source>J Opt Soc Am A</source>
<volume>20</volume>
<fpage>1391</fpage>
<lpage>1397</lpage>
</element-citation>
</ref>
<ref id="pcbi.1002080-Sundareswara1">
<label>11</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sundareswara</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Schrater</surname>
<given-names>P</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>Perceptual multistability predicted by search model for Bayesian decisions.</article-title>
<source>J Vis</source>
<volume>8</volume>
<fpage>12</fpage>
<lpage>1</lpage>
<pub-id pub-id-type="pmid">18842083</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1002080-Vul1">
<label>12</label>
<element-citation publication-type="other">
<person-group person-group-type="author">
<name>
<surname>Vul</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Goodman</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Griffths</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Tenenbaum</surname>
<given-names>J</given-names>
</name>
</person-group>
<year>2009</year>
<article-title>One and done? Optimal decisions from very few samples.</article-title>
<comment>In: Proceedings of the 31st Annual Meeting of the Cognitive Science Society, Amsterdam, the Netherlands</comment>
</element-citation>
</ref>
<ref id="pcbi.1002080-Wozny1">
<label>13</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wozny</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Beierholm</surname>
<given-names>U</given-names>
</name>
<name>
<surname>Shams</surname>
<given-names>L</given-names>
</name>
</person-group>
<year>2010</year>
<article-title>Probability Matching as a Computational Strategy Used in Perception.</article-title>
<source>PLoS Comput Biol</source>
<volume>6</volume>
<fpage>e1000871</fpage>
<pub-id pub-id-type="pmid">20700493</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1002080-Battaglia3">
<label>14</label>
<element-citation publication-type="other">
<person-group person-group-type="author">
<name>
<surname>Battaglia</surname>
<given-names>P</given-names>
</name>
</person-group>
<year>2010</year>
<article-title>Bayesian perceptual inference in linear Gaussian models.</article-title>
<comment>MIT Technical Report MIT-CSAIL-TR-2010-046</comment>
</element-citation>
</ref>
<ref id="pcbi.1002080-Clark1">
<label>15</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Clark</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Yuille</surname>
<given-names>A</given-names>
</name>
</person-group>
<year>1990</year>
<source>Data fusion for sensory information processing systems</source>
<publisher-loc>New York</publisher-loc>
<publisher-name>Springer</publisher-name>
</element-citation>
</ref>
<ref id="pcbi.1002080-Gelman1">
<label>16</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Gelman</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Carlin</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Stern</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Rubin</surname>
<given-names>D</given-names>
</name>
</person-group>
<year>2004</year>
<source>Bayesian data analysis</source>
<publisher-loc>Boca Raton, FL</publisher-loc>
<publisher-name>Chapman and Hall</publisher-name>
</element-citation>
</ref>
<ref id="pcbi.1002080-Spiegelhalter1">
<label>17</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Spiegelhalter</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Best</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Carlin</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Van der Linde</surname>
<given-names>A</given-names>
</name>
</person-group>
<year>2002</year>
<article-title>Bayesian measures of model complexity and fit.</article-title>
<source>J R Stat Soc Series B Stat Methodol</source>
<volume>64</volume>
<fpage>583</fpage>
<lpage>639</lpage>
</element-citation>
</ref>
<ref id="pcbi.1002080-Mckee1">
<label>18</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mckee</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Welch</surname>
<given-names>L</given-names>
</name>
</person-group>
<year>1992</year>
<article-title>The precision of size constancy.</article-title>
<source>Vis Res</source>
<volume>32</volume>
<fpage>1447</fpage>
<lpage>1460</lpage>
<pub-id pub-id-type="pmid">1455718</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1002080-Ono1">
<label>19</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ono</surname>
<given-names>H</given-names>
</name>
</person-group>
<year>1967</year>
<article-title>Difference threshold for stimulus length under simultaneous and nonsimultaneous viewing conditions.</article-title>
<source>Percept Psychophys</source>
<volume>2</volume>
<fpage>201</fpage>
<lpage>207</lpage>
</element-citation>
</ref>
<ref id="pcbi.1002080-Ernst1">
<label>20</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ernst</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Banks</surname>
<given-names>M</given-names>
</name>
</person-group>
<year>2002</year>
<article-title>Humans integrate visual and haptic information in a statistically optimal fashion.</article-title>
<source>Nature</source>
<volume>415</volume>
<fpage>429</fpage>
<lpage>433</lpage>
<pub-id pub-id-type="pmid">11807554</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1002080-vanBeers1">
<label>21</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>van Beers</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Haggard</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Wolpert</surname>
<given-names>D</given-names>
</name>
</person-group>
<year>2004</year>
<article-title>The role of execution noise in movement variability.</article-title>
<source>J Neurophysiol</source>
<volume>91</volume>
<fpage>1050</fpage>
<pub-id pub-id-type="pmid">14561687</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1002080-Maloney1">
<label>22</label>
<element-citation publication-type="other">
<person-group person-group-type="author">
<name>
<surname>Maloney</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Landy</surname>
<given-names>M</given-names>
</name>
</person-group>
<year>1989</year>
<article-title>Statistical framework for robust fusion of depth information.</article-title>
<fpage>1154</fpage>
<lpage>1163</lpage>
<comment>In: Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series. volume 1199</comment>
</element-citation>
</ref>
<ref id="pcbi.1002080-Gerrits1">
<label>23</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gerrits</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Vendrik</surname>
<given-names>A</given-names>
</name>
</person-group>
<year>1970</year>
<article-title>Simultaneous contrast, filling-in process and information processing in man's visual system.</article-title>
<source>Exp Brain Res</source>
<volume>11</volume>
<fpage>411</fpage>
<lpage>430</lpage>
<pub-id pub-id-type="pmid">5496938</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1002080-Epstein1">
<label>24</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Epstein</surname>
<given-names>W</given-names>
</name>
</person-group>
<year>1973</year>
<article-title>The process of ‘taking-into-account’ in visual perception.</article-title>
<source>Perception</source>
<volume>2</volume>
<fpage>267</fpage>
<lpage>85</lpage>
<pub-id pub-id-type="pmid">4794124</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1002080-Boring1">
<label>25</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Boring</surname>
<given-names>E</given-names>
</name>
</person-group>
<year>1940</year>
<article-title>Size constancy and Emmert's law.</article-title>
<source>Am J Psychol</source>
<volume>53</volume>
<fpage>293</fpage>
<lpage>295</lpage>
</element-citation>
</ref>
<ref id="pcbi.1002080-Kilpatrick1">
<label>26</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kilpatrick</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Ittelson</surname>
<given-names>W</given-names>
</name>
</person-group>
<year>1953</year>
<article-title>The size-distance invariance hypothesis.</article-title>
<source>Psychol Rev</source>
<volume>60</volume>
<fpage>223</fpage>
<lpage>231</lpage>
<pub-id pub-id-type="pmid">13089000</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1002080-Epstein2">
<label>27</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Epstein</surname>
<given-names>W</given-names>
</name>
<name>
<surname>Park</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Casey</surname>
<given-names>A</given-names>
</name>
</person-group>
<year>1961</year>
<article-title>The current status of the size-distance hypotheses.</article-title>
<source>Psychol Bull</source>
<volume>58</volume>
<fpage>491</fpage>
<lpage>514</lpage>
<pub-id pub-id-type="pmid">13890453</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1002080-Gogel1">
<label>28</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gogel</surname>
<given-names>W</given-names>
</name>
<name>
<surname>Wist</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Harker</surname>
<given-names>G</given-names>
</name>
</person-group>
<year>1963</year>
<article-title>A test of the invariance of the ratio of perceived size to perceived distance.</article-title>
<source>Am J Psychol</source>
<volume>76</volume>
<fpage>537</fpage>
<lpage>553</lpage>
<pub-id pub-id-type="pmid">14082652</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1002080-Ono2">
<label>29</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ono</surname>
<given-names>H</given-names>
</name>
</person-group>
<year>1966</year>
<article-title>Distal and proximal size under reduced and non-reduced viewing conditions.</article-title>
<source>Am J Psychol</source>
<volume>79</volume>
<fpage>234</fpage>
<lpage>241</lpage>
<pub-id pub-id-type="pmid">5915906</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1002080-Weintraub1">
<label>30</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Weintraub</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Gardner</surname>
<given-names>G</given-names>
</name>
</person-group>
<year>1970</year>
<article-title>Emmert's laws: size constancy vs. optical geometry.</article-title>
<source>Am J Psychol</source>
<volume>83</volume>
<fpage>40</fpage>
<lpage>54</lpage>
<pub-id pub-id-type="pmid">5449391</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1002080-Brenner1">
<label>31</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Brenner</surname>
<given-names>E</given-names>
</name>
<name>
<surname>van Damme</surname>
<given-names>W</given-names>
</name>
</person-group>
<year>1999</year>
<article-title>Perceived distance, shape and size.</article-title>
<source>Vis Res</source>
<volume>39</volume>
<fpage>975</fpage>
<lpage>986</lpage>
<pub-id pub-id-type="pmid">10341949</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1002080-Battaglia4">
<label>32</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Battaglia</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Di Luca</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Ernst</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Schrater</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Machulla</surname>
<given-names>T</given-names>
</name>
<etal></etal>
</person-group>
<year>2010</year>
<article-title>Within- and Cross-Modal Distance Information Disambiguate Visual Size-Change Perception.</article-title>
<source>PLoS Comput Biol</source>
<volume>6</volume>
<fpage>e1000697</fpage>
<pub-id pub-id-type="pmid">20221263</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1002080-Roach1">
<label>33</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Roach</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Heron</surname>
<given-names>J</given-names>
</name>
<name>
<surname>McGraw</surname>
<given-names>P</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>Resolving multisensory conflict: a strategy for balancing the costs and benefits of audio-visual integration.</article-title>
<source>Proc Biol Sci</source>
<volume>273</volume>
<fpage>2159</fpage>
</element-citation>
</ref>
<ref id="pcbi.1002080-Ernst2">
<label>34</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ernst</surname>
<given-names>M</given-names>
</name>
</person-group>
<year>2007</year>
<article-title>Learning to integrate arbitrary signals from vision and touch.</article-title>
<source>J Vis</source>
<volume>7</volume>
<fpage>7.1</fpage>
<lpage>14</lpage>
<pub-id pub-id-type="pmid">18217847</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1002080-Koerding2">
<label>35</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Koerding</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Beierholm</surname>
<given-names>U</given-names>
</name>
<name>
<surname>Ma</surname>
<given-names>W</given-names>
</name>
<name>
<surname>Quartz</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Tenenbaum</surname>
<given-names>J</given-names>
</name>
<etal></etal>
</person-group>
<year>2007</year>
<article-title>Causal inference in multisensory perception.</article-title>
<source>PLoS One</source>
<volume>2</volume>
<fpage>e943</fpage>
</element-citation>
</ref>
<ref id="pcbi.1002080-Sato1">
<label>36</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sato</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Toyoizumi</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Aihara</surname>
<given-names>K</given-names>
</name>
</person-group>
<year>2007</year>
<article-title>Bayesian inference explains perception of unity and ventriloquism aftereffect: identification of common sources of audiovisual stimuli.</article-title>
<source>Neural Comput</source>
<volume>19</volume>
<fpage>3335</fpage>
<lpage>3355</lpage>
<pub-id pub-id-type="pmid">17970656</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1002080-Schrater1">
<label>37</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Schrater</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Sundareswara</surname>
<given-names>R</given-names>
</name>
</person-group>
<year>2007</year>
<article-title>Theory and dynamics of perceptual bistability.</article-title>
<source>Adv Neural Inf Process Syst</source>
<volume>19</volume>
<fpage>1217</fpage>
</element-citation>
</ref>
<ref id="pcbi.1002080-Fiser1">
<label>38</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fiser</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Berkes</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Orbán</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Lengyel</surname>
<given-names>M</given-names>
</name>
</person-group>
<year>2010</year>
<article-title>Statistically optimal perception and learning: from behavior to neural representations.</article-title>
<source>Trends Cogn Sci</source>
<volume>14</volume>
<fpage>119</fpage>
<lpage>130</lpage>
<pub-id pub-id-type="pmid">20153683</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1002080-Sutton1">
<label>39</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Sutton</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Barto</surname>
<given-names>A</given-names>
</name>
</person-group>
<year>1998</year>
<source>Reinforcement learning: An introduction</source>
<publisher-loc>Cambridge, MA</publisher-loc>
<publisher-name>MIT Press</publisher-name>
</element-citation>
</ref>
</ref-list>
</back>
</pmc>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Pmc/Curation
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 002179 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd -nk 002179 | SxmlIndent | more

To link to this page within the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Pmc
   |étape=   Curation
   |type=    RBID
   |clé=     PMC:3127804
   |texte=   How Haptic Size Sensations Improve Distance Perception
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Curation/RBID.i   -Sk "pubmed:21738457" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024