Exploration server on haptic devices

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information has therefore not been validated.

Probabilistic Computation in Human Perception under Variability in Encoding Precision

Internal identifier: 002209 (Pmc/Curation); previous: 002208; next: 002210


Authors: Shaiyan Keshvari; Ronald van den Berg; Wei Ji Ma

Source:

RBID: PMC:3387023

Abstract

A key function of the brain is to interpret noisy sensory information. To do so optimally, observers must, in many tasks, take into account knowledge of the precision with which stimuli are encoded. In an orientation change detection task, we find that encoding precision does not only depend on an experimentally controlled reliability parameter (shape), but also exhibits additional variability. In spite of variability in precision, human subjects seem to take into account precision near-optimally on a trial-to-trial and item-to-item basis. Our results offer a new conceptualization of the encoding of sensory information and highlight the brain’s remarkable ability to incorporate knowledge of uncertainty during complex perceptual decision-making.


Url:
DOI: 10.1371/journal.pone.0040216
PubMed: 22768258
PubMed Central: 3387023

Links to previous steps (curation, corpus...)


Links to Exploration step

PMC:3387023

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Probabilistic Computation in Human Perception under Variability in Encoding Precision</title>
<author>
<name sortKey="Keshvari, Shaiyan" sort="Keshvari, Shaiyan" uniqKey="Keshvari S" first="Shaiyan" last="Keshvari">Shaiyan Keshvari</name>
<affiliation>
<nlm:aff id="aff1"></nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Van Den Berg, Ronald" sort="Van Den Berg, Ronald" uniqKey="Van Den Berg R" first="Ronald" last="Van Den Berg">Ronald Van Den Berg</name>
<affiliation>
<nlm:aff id="aff1"></nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Ma, Wei Ji" sort="Ma, Wei Ji" uniqKey="Ma W" first="Wei Ji" last="Ma">Wei Ji Ma</name>
<affiliation>
<nlm:aff id="aff1"></nlm:aff>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">22768258</idno>
<idno type="pmc">3387023</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3387023</idno>
<idno type="RBID">PMC:3387023</idno>
<idno type="doi">10.1371/journal.pone.0040216</idno>
<date when="2012">2012</date>
<idno type="wicri:Area/Pmc/Corpus">002209</idno>
<idno type="wicri:Area/Pmc/Curation">002209</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Probabilistic Computation in Human Perception under Variability in Encoding Precision</title>
<author>
<name sortKey="Keshvari, Shaiyan" sort="Keshvari, Shaiyan" uniqKey="Keshvari S" first="Shaiyan" last="Keshvari">Shaiyan Keshvari</name>
<affiliation>
<nlm:aff id="aff1"></nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Van Den Berg, Ronald" sort="Van Den Berg, Ronald" uniqKey="Van Den Berg R" first="Ronald" last="Van Den Berg">Ronald Van Den Berg</name>
<affiliation>
<nlm:aff id="aff1"></nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Ma, Wei Ji" sort="Ma, Wei Ji" uniqKey="Ma W" first="Wei Ji" last="Ma">Wei Ji Ma</name>
<affiliation>
<nlm:aff id="aff1"></nlm:aff>
</affiliation>
</author>
</analytic>
<series>
<title level="j">PLoS ONE</title>
<idno type="eISSN">1932-6203</idno>
<imprint>
<date when="2012">2012</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>A key function of the brain is to interpret noisy sensory information. To do so optimally, observers must, in many tasks, take into account knowledge of the precision with which stimuli are encoded. In an orientation change detection task, we find that encoding precision does not only depend on an experimentally controlled reliability parameter (shape), but also exhibits additional variability. In spite of variability in precision, human subjects seem to take into account precision near-optimally on a trial-to-trial and item-to-item basis. Our results offer a new conceptualization of the encoding of sensory information and highlight the brain’s remarkable ability to incorporate knowledge of uncertainty during complex perceptual decision-making.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Tolhurst, D" uniqKey="Tolhurst D">D Tolhurst</name>
</author>
<author>
<name sortKey="Movshon, J" uniqKey="Movshon J">J Movshon</name>
</author>
<author>
<name sortKey="Dean, A" uniqKey="Dean A">A Dean</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Faisal, A" uniqKey="Faisal A">A Faisal</name>
</author>
<author>
<name sortKey="Selen, Lpj" uniqKey="Selen L">LPJ Selen</name>
</author>
<author>
<name sortKey="Wolpert, Dm" uniqKey="Wolpert D">DM Wolpert</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gershon, Ed" uniqKey="Gershon E">ED Gershon</name>
</author>
<author>
<name sortKey="Wiener, Mc" uniqKey="Wiener M">MC Wiener</name>
</author>
<author>
<name sortKey="Latham, Pe" uniqKey="Latham P">PE Latham</name>
</author>
<author>
<name sortKey="Richmond, Bj" uniqKey="Richmond B">BJ Richmond</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Shadlen, Mn" uniqKey="Shadlen M">MN Shadlen</name>
</author>
<author>
<name sortKey="Britten, Kh" uniqKey="Britten K">KH Britten</name>
</author>
<author>
<name sortKey="Newsome, Wt" uniqKey="Newsome W">WT Newsome</name>
</author>
<author>
<name sortKey="Movshon, Ja" uniqKey="Movshon J">JA Movshon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Green, Dm" uniqKey="Green D">DM Green</name>
</author>
<author>
<name sortKey="Swets, Ja" uniqKey="Swets J">JA Swets</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Matthias, E" uniqKey="Matthias E">E Matthias</name>
</author>
<author>
<name sortKey="Bublak, P" uniqKey="Bublak P">P Bublak</name>
</author>
<author>
<name sortKey="Costa, A" uniqKey="Costa A">A Costa</name>
</author>
<author>
<name sortKey="Mueller, Hj" uniqKey="Mueller H">HJ Mueller</name>
</author>
<author>
<name sortKey="Schneider, Wx" uniqKey="Schneider W">WX Schneider</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brady, Tf" uniqKey="Brady T">TF Brady</name>
</author>
<author>
<name sortKey="Tenenbaum, Jb" uniqKey="Tenenbaum J">JB Tenenbaum</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brady, Tj" uniqKey="Brady T">TJ Brady</name>
</author>
<author>
<name sortKey="Alvarez, Ga" uniqKey="Alvarez G">GA Alvarez</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Posner, Mi" uniqKey="Posner M">MI Posner</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pestilli, F" uniqKey="Pestilli F">F Pestilli</name>
</author>
<author>
<name sortKey="Carrasco, M" uniqKey="Carrasco M">M Carrasco</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Den Berg, R" uniqKey="Van Den Berg R">R Van den Berg</name>
</author>
<author>
<name sortKey="Shin, H" uniqKey="Shin H">H Shin</name>
</author>
<author>
<name sortKey="Chou, W C" uniqKey="Chou W">W-C Chou</name>
</author>
<author>
<name sortKey="George, R" uniqKey="George R">R George</name>
</author>
<author>
<name sortKey="Ma, Wj" uniqKey="Ma W">WJ Ma</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Goris, Rlt" uniqKey="Goris R">RLT Goris</name>
</author>
<author>
<name sortKey="Simoncelli, Ep" uniqKey="Simoncelli E">EP Simoncelli</name>
</author>
<author>
<name sortKey="Movshon, Ja" uniqKey="Movshon J">JA Movshon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Churchland, Ak" uniqKey="Churchland A">AK Churchland</name>
</author>
<author>
<name sortKey="Kiani, R" uniqKey="Kiani R">R Kiani</name>
</author>
<author>
<name sortKey="Chaudhuri, R" uniqKey="Chaudhuri R">R Chaudhuri</name>
</author>
<author>
<name sortKey="Wang, X J" uniqKey="Wang X">X-J Wang</name>
</author>
<author>
<name sortKey="Pouget, A" uniqKey="Pouget A">A Pouget</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Churchland, Mm" uniqKey="Churchland M">MM Churchland</name>
</author>
<author>
<name sortKey="Yu, Bm" uniqKey="Yu B">BM Yu</name>
</author>
<author>
<name sortKey="Cunningham, Jp" uniqKey="Cunningham J">JP Cunningham</name>
</author>
<author>
<name sortKey="Sugrue, Lp" uniqKey="Sugrue L">LP Sugrue</name>
</author>
<author>
<name sortKey="Cohen, Mr" uniqKey="Cohen M">MR Cohen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cohen, Mr" uniqKey="Cohen M">MR Cohen</name>
</author>
<author>
<name sortKey="Maunsell, Jhr" uniqKey="Maunsell J">JHR Maunsell</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ma, Wj" uniqKey="Ma W">WJ Ma</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pouget, A" uniqKey="Pouget A">A Pouget</name>
</author>
<author>
<name sortKey="Dayan, P" uniqKey="Dayan P">P Dayan</name>
</author>
<author>
<name sortKey="Zemel, Rs" uniqKey="Zemel R">RS Zemel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ma, Wj" uniqKey="Ma W">WJ Ma</name>
</author>
<author>
<name sortKey="Beck, Jm" uniqKey="Beck J">JM Beck</name>
</author>
<author>
<name sortKey="Latham, Pe" uniqKey="Latham P">PE Latham</name>
</author>
<author>
<name sortKey="Pouget, A" uniqKey="Pouget A">A Pouget</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ernst, Mo" uniqKey="Ernst M">MO Ernst</name>
</author>
<author>
<name sortKey="Banks, Ms" uniqKey="Banks M">MS Banks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Knill, Dc" uniqKey="Knill D">DC Knill</name>
</author>
<author>
<name sortKey="Pouget, A" uniqKey="Pouget A">A Pouget</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Alais, D" uniqKey="Alais D">D Alais</name>
</author>
<author>
<name sortKey="Burr, D" uniqKey="Burr D">D Burr</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ma, Wj" uniqKey="Ma W">WJ Ma</name>
</author>
<author>
<name sortKey="Navalpakkam, V" uniqKey="Navalpakkam V">V Navalpakkam</name>
</author>
<author>
<name sortKey="Beck, Jm" uniqKey="Beck J">JM Beck</name>
</author>
<author>
<name sortKey="Van Den Berg, R" uniqKey="Van Den Berg R">R Van den Berg</name>
</author>
<author>
<name sortKey="Pouget, A" uniqKey="Pouget A">A Pouget</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Den Berg, R" uniqKey="Van Den Berg R">R Van den Berg</name>
</author>
<author>
<name sortKey="Vogel, M" uniqKey="Vogel M">M Vogel</name>
</author>
<author>
<name sortKey="Josic, K" uniqKey="Josic K">K Josic</name>
</author>
<author>
<name sortKey="Ma, Wj" uniqKey="Ma W">WJ Ma</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="French, Rs" uniqKey="French R">RS French</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pashler, H" uniqKey="Pashler H">H Pashler</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Phillips, Wa" uniqKey="Phillips W">WA Phillips</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Yule, Gu" uniqKey="Yule G">GU Yule</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ma, Wj" uniqKey="Ma W">WJ Ma</name>
</author>
<author>
<name sortKey="Huang, W" uniqKey="Huang W">W Huang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nolte, Lw" uniqKey="Nolte L">LW Nolte</name>
</author>
<author>
<name sortKey="Jaarsma, D" uniqKey="Jaarsma D">D Jaarsma</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Palmer, J" uniqKey="Palmer J">J Palmer</name>
</author>
<author>
<name sortKey="Verghese, P" uniqKey="Verghese P">P Verghese</name>
</author>
<author>
<name sortKey="Pavel, M" uniqKey="Pavel M">M Pavel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Eckstein, Mp" uniqKey="Eckstein M">MP Eckstein</name>
</author>
<author>
<name sortKey="Thomas, Jp" uniqKey="Thomas J">JP Thomas</name>
</author>
<author>
<name sortKey="Palmer, J" uniqKey="Palmer J">J Palmer</name>
</author>
<author>
<name sortKey="Shimozaki, Ss" uniqKey="Shimozaki S">SS Shimozaki</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Baldassi, S" uniqKey="Baldassi S">S Baldassi</name>
</author>
<author>
<name sortKey="Verghese, P" uniqKey="Verghese P">P Verghese</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wilken, P" uniqKey="Wilken P">P Wilken</name>
</author>
<author>
<name sortKey="Ma, Wj" uniqKey="Ma W">WJ Ma</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mackay, Dj" uniqKey="Mackay D">DJ MacKay</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Girshick, Ar" uniqKey="Girshick A">AR Girshick</name>
</author>
<author>
<name sortKey="Landy, Ms" uniqKey="Landy M">MS Landy</name>
</author>
<author>
<name sortKey="Simoncelli, Ep" uniqKey="Simoncelli E">EP Simoncelli</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Natarajan, R" uniqKey="Natarajan R">R Natarajan</name>
</author>
<author>
<name sortKey="Murray, I" uniqKey="Murray I">I Murray</name>
</author>
<author>
<name sortKey="Shams, L" uniqKey="Shams L">L Shams</name>
</author>
<author>
<name sortKey="Zemel, Rs" uniqKey="Zemel R">RS Zemel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Girshick, Ar" uniqKey="Girshick A">AR Girshick</name>
</author>
<author>
<name sortKey="Banks, Ms" uniqKey="Banks M">MS Banks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Seung, H" uniqKey="Seung H">H Seung</name>
</author>
<author>
<name sortKey="Sompolinsky, H" uniqKey="Sompolinsky H">H Sompolinsky</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Saproo, S" uniqKey="Saproo S">S Saproo</name>
</author>
<author>
<name sortKey="Serences, Jt" uniqKey="Serences J">JT Serences</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Desimone, R" uniqKey="Desimone R">R Desimone</name>
</author>
<author>
<name sortKey="Duncan, J" uniqKey="Duncan J">J Duncan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Connor, Ce" uniqKey="Connor C">CE Connor</name>
</author>
<author>
<name sortKey="Gallant, Jl" uniqKey="Gallant J">JL Gallant</name>
</author>
<author>
<name sortKey="Preddie, Dc" uniqKey="Preddie D">DC Preddie</name>
</author>
<author>
<name sortKey="Van Essen, Dc" uniqKey="Van Essen D">DC Van Essen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mcadams, Cj" uniqKey="Mcadams C">CJ McAdams</name>
</author>
<author>
<name sortKey="Maunsell, Jh" uniqKey="Maunsell J">JH Maunsell</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sadaghiani, S" uniqKey="Sadaghiani S">S Sadaghiani</name>
</author>
<author>
<name sortKey="Hesselmann, G" uniqKey="Hesselmann G">G Hesselmann</name>
</author>
<author>
<name sortKey="Friston, Kj" uniqKey="Friston K">KJ Friston</name>
</author>
<author>
<name sortKey="Kleinschmidt, A" uniqKey="Kleinschmidt A">A Kleinschmidt</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sapir, A" uniqKey="Sapir A">A Sapir</name>
</author>
<author>
<name sortKey="D Vossa, G" uniqKey="D Vossa G">G d’Avossa</name>
</author>
<author>
<name sortKey="Mcavoy, M" uniqKey="Mcavoy M">M McAvoy</name>
</author>
<author>
<name sortKey="Shulman, Gl" uniqKey="Shulman G">GL Shulman</name>
</author>
<author>
<name sortKey="Corbetta, M" uniqKey="Corbetta M">M Corbetta</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Reddy, L" uniqKey="Reddy L">L Reddy</name>
</author>
<author>
<name sortKey="Quian Quiroga, R" uniqKey="Quian Quiroga R">R Quian Quiroga</name>
</author>
<author>
<name sortKey="Wilken, P" uniqKey="Wilken P">P Wilken</name>
</author>
<author>
<name sortKey="Koch, C" uniqKey="Koch C">C Koch</name>
</author>
<author>
<name sortKey="Fried, I" uniqKey="Fried I">I Fried</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Eng, Hy" uniqKey="Eng H">HY Eng</name>
</author>
<author>
<name sortKey="Chen, D" uniqKey="Chen D">D Chen</name>
</author>
<author>
<name sortKey="Jiang, Y" uniqKey="Jiang Y">Y Jiang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cowan, N" uniqKey="Cowan N">N Cowan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Luck, Sj" uniqKey="Luck S">SJ Luck</name>
</author>
<author>
<name sortKey="Vogel, Ek" uniqKey="Vogel E">EK Vogel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Whiteley, L" uniqKey="Whiteley L">L Whiteley</name>
</author>
<author>
<name sortKey="Sahani, M" uniqKey="Sahani M">M Sahani</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">PLoS One</journal-id>
<journal-id journal-id-type="iso-abbrev">PLoS ONE</journal-id>
<journal-id journal-id-type="publisher-id">plos</journal-id>
<journal-id journal-id-type="pmc">plosone</journal-id>
<journal-title-group>
<journal-title>PLoS ONE</journal-title>
</journal-title-group>
<issn pub-type="epub">1932-6203</issn>
<publisher>
<publisher-name>Public Library of Science</publisher-name>
<publisher-loc>San Francisco, USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">22768258</article-id>
<article-id pub-id-type="pmc">3387023</article-id>
<article-id pub-id-type="publisher-id">PONE-D-11-21878</article-id>
<article-id pub-id-type="doi">10.1371/journal.pone.0040216</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Research Article</subject>
</subj-group>
<subj-group subj-group-type="Discipline-v2">
<subject>Biology</subject>
<subj-group>
<subject>Anatomy and Physiology</subject>
<subj-group>
<subject>Neurological System</subject>
<subj-group>
<subject>Sensory Physiology</subject>
</subj-group>
</subj-group>
</subj-group>
<subj-group>
<subject>Computational Biology</subject>
<subj-group>
<subject>Computational Neuroscience</subject>
<subj-group>
<subject>Sensory Systems</subject>
</subj-group>
</subj-group>
</subj-group>
<subj-group>
<subject>Neuroscience</subject>
<subj-group>
<subject>Sensory Perception</subject>
<subj-group>
<subject>Psychophysics</subject>
</subj-group>
</subj-group>
<subj-group>
<subject>Computational Neuroscience</subject>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v2">
<subject>Social and Behavioral Sciences</subject>
<subj-group>
<subject>Psychology</subject>
<subj-group>
<subject>Psychophysics</subject>
</subj-group>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Probabilistic Computation in Human Perception under Variability in Encoding Precision</article-title>
<alt-title alt-title-type="running-head">Inference under Variable Precision</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Keshvari</surname>
<given-names>Shaiyan</given-names>
</name>
<xref ref-type="aff" rid="aff1"></xref>
<xref ref-type="author-notes" rid="fn1">
<sup>¤</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>van den Berg</surname>
<given-names>Ronald</given-names>
</name>
<xref ref-type="aff" rid="aff1"></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Ma</surname>
<given-names>Wei Ji</given-names>
</name>
<xref ref-type="aff" rid="aff1"></xref>
<xref ref-type="corresp" rid="cor1">
<sup>*</sup>
</xref>
</contrib>
</contrib-group>
<aff id="aff1">
<addr-line>Department of Neuroscience, Baylor College of Medicine, Houston, Texas, United States of America</addr-line>
</aff>
<contrib-group>
<contrib contrib-type="editor">
<name>
<surname>Ernst</surname>
<given-names>Marc O.</given-names>
</name>
<role>Editor</role>
<xref ref-type="aff" rid="edit1"></xref>
</contrib>
</contrib-group>
<aff id="edit1">Bielefeld University, Germany</aff>
<author-notes>
<corresp id="cor1">* E-mail:
<email>wjma@bcm.edu</email>
</corresp>
<fn fn-type="con">
<p>Conceived and designed the experiments: SK RvdB WJM. Performed the experiments: SK. Analyzed the data: SK RvdB. Wrote the paper: SK RvdB WJM. Developed the theory: SK RvdB WJM.</p>
</fn>
<fn id="fn1" fn-type="current-aff">
<label>¤</label>
<p>Current address: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America</p>
</fn>
</author-notes>
<pub-date pub-type="collection">
<year>2012</year>
</pub-date>
<pub-date pub-type="epub">
<day>29</day>
<month>6</month>
<year>2012</year>
</pub-date>
<volume>7</volume>
<issue>6</issue>
<elocation-id>e40216</elocation-id>
<history>
<date date-type="received">
<day>2</day>
<month>11</month>
<year>2011</year>
</date>
<date date-type="accepted">
<day>6</day>
<month>6</month>
<year>2012</year>
</date>
</history>
<permissions>
<copyright-statement>Keshvari et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.</copyright-statement>
<copyright-year>2012</copyright-year>
</permissions>
<abstract>
<p>A key function of the brain is to interpret noisy sensory information. To do so optimally, observers must, in many tasks, take into account knowledge of the precision with which stimuli are encoded. In an orientation change detection task, we find that encoding precision does not only depend on an experimentally controlled reliability parameter (shape), but also exhibits additional variability. In spite of variability in precision, human subjects seem to take into account precision near-optimally on a trial-to-trial and item-to-item basis. Our results offer a new conceptualization of the encoding of sensory information and highlight the brain’s remarkable ability to incorporate knowledge of uncertainty during complex perceptual decision-making.</p>
</abstract>
<counts>
<page-count count="9"></page-count>
</counts>
</article-meta>
</front>
<body>
<sec id="s1">
<title>Introduction</title>
<p>The sensory information used by the brain to infer the state of the world is noisy: when the same stimulus is presented repeatedly, the neural activity it elicits varies considerably from trial to trial
<xref ref-type="bibr" rid="pone.0040216-Tolhurst1">[1]</xref>
,
<xref ref-type="bibr" rid="pone.0040216-Faisal1">[2]</xref>
,
<xref ref-type="bibr" rid="pone.0040216-Gershon1">[3]</xref>
,
<xref ref-type="bibr" rid="pone.0040216-Shadlen1">[4]</xref>
. As a consequence, an observer’s measurement of a task-relevant stimulus feature varies as well. The quality of the sensory information can be numerically expressed as precision. For instance, when the measurement follows a Gaussian distribution, precision could be defined as the inverse of the variance of this Gaussian.</p>
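As a minimal formal statement of the Gaussian example just given (a sketch for orientation, not a quotation from the paper):

\[
x \mid \theta \sim \mathcal{N}(\theta, \sigma^2), \qquad J \equiv \frac{1}{\sigma^2},
\]

so that a larger precision $J$ corresponds to a tighter distribution of the measurement $x$ around the true feature value $\theta$.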
<p>Models of perception routinely assume that the precision with which a task-relevant stimulus feature is encoded is constant as long as the stimulus is held constant
<xref ref-type="bibr" rid="pone.0040216-Green1">[5]</xref>
. It is questionable, however, whether this assumption is justified, considering that factors such as fluctuations in alertness
<xref ref-type="bibr" rid="pone.0040216-Matthias1">[6]</xref>
, configural effects
<xref ref-type="bibr" rid="pone.0040216-Brady1">[7]</xref>
,
<xref ref-type="bibr" rid="pone.0040216-Brady2">[8]</xref>
, and covert shifts of attention
<xref ref-type="bibr" rid="pone.0040216-Posner1">[9]</xref>
,
<xref ref-type="bibr" rid="pone.0040216-Pestilli1">[10]</xref>
could make precision variable. If all factors were known and quantifiable, encoding precision could be specified exactly for each stimulus on each trial. However, as long as we are not able to model each possible contributing factor, it may be best to model precision as a random variable
<xref ref-type="bibr" rid="pone.0040216-VandenBerg1">[11]</xref>
. For example, the inverse variance of a Gaussian noise distribution could be drawn from a gamma distribution.</p>
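As an illustration of this idea, here is a minimal simulation sketch (not the authors' code; the gamma parameters are arbitrary placeholders) in which the inverse variance of Gaussian measurement noise is itself drawn from a gamma distribution:

import numpy as np

rng = np.random.default_rng(0)

def noisy_measurement(theta, mean_precision=10.0, gamma_scale=2.0):
    """One measurement of a feature value theta under variable precision.

    The precision J (inverse variance) is drawn from a gamma distribution,
    then the measurement is Gaussian with variance 1/J (doubly stochastic).
    """
    J = rng.gamma(shape=mean_precision / gamma_scale, scale=gamma_scale)
    return rng.normal(loc=theta, scale=np.sqrt(1.0 / J)), J

# Across many trials the measurements form a mixture of Gaussians of different
# widths, i.e. a heavier-tailed distribution than any single Gaussian.
xs = np.array([noisy_measurement(0.0)[0] for _ in range(10000)])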
<p>If encoding precision is a random variable, then the measurement of a task-relevant stimulus feature follows a doubly stochastic process. This idea translates to the level of neural coding, where a population pattern of activity could be Poisson-like with a mean amplitude (gain) that itself follows some other distribution. Recent physiological studies have reported evidence for doubly stochastic processes in cortex
<xref ref-type="bibr" rid="pone.0040216-Goris1">[12]</xref>
,
<xref ref-type="bibr" rid="pone.0040216-Churchland1">[13]</xref>
,
<xref ref-type="bibr" rid="pone.0040216-Churchland2">[14]</xref>
,
<xref ref-type="bibr" rid="pone.0040216-Cohen1">[15]</xref>
.</p>
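At the neural level, the same doubly stochastic idea can be sketched as a Poisson-like population whose gain fluctuates from trial to trial (an illustrative toy with made-up tuning curves and parameters, not a model from the paper):

import numpy as np

rng = np.random.default_rng(1)

def population_response(theta, n_neurons=16, kappa=2.0, mean_gain=20.0, gain_scale=5.0):
    """Spike counts from a population with von Mises tuning and gamma-distributed gain."""
    preferred = np.linspace(-np.pi, np.pi, n_neurons, endpoint=False)
    gain = rng.gamma(shape=mean_gain / gain_scale, scale=gain_scale)   # trial-to-trial gain
    rates = gain * np.exp(kappa * (np.cos(theta - preferred) - 1.0))   # mean firing rates
    return rng.poisson(rates)                                          # Poisson counts given this gain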
<p>In the optimal-observer models of many tasks, precision does not only appear as part of the encoding model (a description of how measurements are generated), but also in the observer’s decision rule (a description of how measurements are transformed into a decision). In other words, in some tasks, in order to be optimal, an observer must take into account precision even if precision varies unpredictably across stimuli and trials. To distinguish this type of computation from computation in which the observer can be optimal using only a point estimate of each stimulus feature, we use the term “probabilistic computation”
<xref ref-type="bibr" rid="pone.0040216-Ma1">[16]</xref>
. At the neural level, probabilistic computation suggests that populations of neurons encode and compute with probability distributions over stimulus features
<xref ref-type="bibr" rid="pone.0040216-Ma1">[16]</xref>
,
<xref ref-type="bibr" rid="pone.0040216-Pouget1">[17]</xref>
,
<xref ref-type="bibr" rid="pone.0040216-Ma2">[18]</xref>
, instead of only point estimates.</p>
<p>Psychophysical evidence for probabilistic computation has been found in cue combination tasks
<xref ref-type="bibr" rid="pone.0040216-Ernst1">[19]</xref>
,
<xref ref-type="bibr" rid="pone.0040216-Knill1">[20]</xref>
,
<xref ref-type="bibr" rid="pone.0040216-Alais1">[21]</xref>
as well as more complex categorization tasks
<xref ref-type="bibr" rid="pone.0040216-Ma3">[22]</xref>
,
<xref ref-type="bibr" rid="pone.0040216-VandenBerg2">[23]</xref>
. In these experiments, the encoding precision of the task-relevant feature was manipulated by varying a reliability parameter, for example the size of a blurred disc if its location is task-relevant, or contrast of a bar if its orientation is task-relevant. Since we propose here that factors other than this reliability parameter also contribute to variability in precision, the question arises whether observers optimally take into account this additional variability.</p>
<p>Here we use a visual change detection task
<xref ref-type="bibr" rid="pone.0040216-French1">[24]</xref>
,
<xref ref-type="bibr" rid="pone.0040216-Pashler1">[25]</xref>
,
<xref ref-type="bibr" rid="pone.0040216-Phillips1">[26]</xref>
to study whether precision is variable for a given value of the reliability parameter and whether observers take any variability in precision (whether or not due to the reliability parameter) into account optimally. Observers reported whether a change in the orientation of a stimulus occurred between two displays that each contained four stimuli (items). The reliability of the orientation information was controlled by shape and was randomly chosen for each stimulus. We pitted an optimal-observer model in which precision is completely determined by shape (“equal precision”) against one in which there is additional variability (“variable precision”). Both models assume that precision is known and optimally taken into account by the observer on an item-by-item and trial-by-trial basis. We compare these two models to several suboptimal models, where suboptimality can be caused by two factors. First, the observer might make a wrong assumption about precision. For example, if precision varies across stimuli at different locations, the observer might assume a single value of precision for all stimuli instead of using the individual values. Second, the observer might use a suboptimal decision rule instead of the optimal rule to integrate information from different locations. Considering all combinations of model elements – equal or variable precision, various observer assumptions about precision, and two possible integration rules – we arrive at a total of 14 models. We find that the empirical data for each individual subject are best described by the model in which precision is variable, the observer knows precision on an item-by-item and trial-by-trial basis, and uses the optimal integration rule.</p>
</sec>
<sec id="s2">
<title>Results</title>
<sec id="s2a">
<title>Experiment</title>
<p>Subjects were presented with two consecutive displays, each presented for 100 ms and separated by a 1-second blank screen. Each display contained a set of four randomly oriented ellipses that were identical between both displays except that with 50% probability, exactly one ellipse changed orientation between the first and the second screen (
<xref ref-type="fig" rid="pone-0040216-g001">Fig. 1A</xref>
). The magnitude of a change, if present, was drawn from a uniform distribution. On each trial, we first randomly chose the number of high-reliability stimuli (0 to 4, with equal probability); then, we randomly chose which of the stimuli had high reliability. Reliability was controlled by shape: high-reliability ellipses were more elongated than low-reliability ones, but had the same area. Subjects indicated whether or not a change occurred.</p>
<fig id="pone-0040216-g001" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0040216.g001</object-id>
<label>Figure 1</label>
<caption>
<title>Change detection under varying reliability.</title>
<p>A, Schematic of the trial procedure. Stimulus reliability was controlled by ellipse elongation. Set size was always 4. B, Hit and false-alarm rates as a function of the number of high-reliability stimuli (long ellipses),
<italic>N</italic>
<sub>H</sub>
. Hit rates are split out by whether the changing ellipse had high or low reliability. The Z-shape formed by the yellow, green, and blue lines is an instance of Simpson’s paradox (see
<xref ref-type="sec" rid="s2">Results</xref>
). C, Proportion of “change” reports in change trials as a function of the magnitude of change, for different values of
<italic>N</italic>
<sub>H</sub>
. Error bars represent ±1 s.e.m.</p>
</caption>
<graphic xlink:href="pone.0040216.g001"></graphic>
</fig>
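A minimal generator for this trial structure (a sketch under the stated design; the orientation range and the change-magnitude range are illustrative, since the text only says the magnitude was drawn from a uniform distribution):

import numpy as np

rng = np.random.default_rng(2)

def generate_trial(n_items=4, p_change=0.5):
    """Return first- and second-display orientations, reliability flags, and the change flag."""
    n_high = rng.integers(0, n_items + 1)                        # 0..4 high-reliability items, equal probability
    high = np.zeros(n_items, dtype=bool)
    high[rng.choice(n_items, size=n_high, replace=False)] = True
    theta = rng.uniform(-np.pi, np.pi, n_items)                  # first-display orientations
    phi = theta.copy()
    change = rng.random() < p_change                             # 50% of trials contain a change
    if change:
        i = rng.integers(n_items)                                # exactly one item changes
        phi[i] += rng.uniform(-np.pi, np.pi)                     # change magnitude from a uniform distribution
    return theta, phi, high, change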
<p>As expected, subjects became better at detecting a change as the number of high-reliability stimuli, denoted
<italic>N</italic>
<sub>H</sub>
, increased (
<xref ref-type="fig" rid="pone-0040216-g001">Fig. 1B</xref>
). While we did not find a significant effect of
<italic>N</italic>
<sub>H</sub>
on the false-alarm rate (one-way repeated-measures ANOVA,
<italic>F</italic>
(2.3,18.6) = 2.9,
<italic>p</italic>
 = 0.08; degrees of freedom were corrected using Greenhouse-Geisser estimates of sphericity), the effect of
<italic>N</italic>
<sub>H</sub>
on the hit rate was significant (
<italic>F</italic>
(1.7,13.9) = 25.1,
<italic>p</italic>
<0.001). This shows that our reliability manipulation was effective. Mean accuracy exceeded chance at every value of
<italic>N</italic>
<sub>H</sub>
(
<italic>t</italic>
(8)>5.5,
<italic>p</italic>
<10
<sup>−3</sup>
).</p>
<p>When we separate hit trials by the reliability of the changing stimulus, we see a distinctive Z-shaped pattern (
<xref ref-type="fig" rid="pone-0040216-g001">Fig. 1B</xref>
). The hit rate conditioned on the change being in a low-reliability stimulus decreases monotonically with
<italic>N</italic>
<sub>H</sub>
(
<italic>F</italic>
(3,24) = 9.7,
<italic>p</italic>
<0.001). We did not find an effect of
<italic>N</italic>
<sub>H</sub>
on the hit rate conditioned on the change being in a high-reliability stimulus (
<italic>F</italic>
(1.4,11.6) = 0.20,
<italic>p</italic>
 = 0.75). It might be counterintuitive that the low-reliability hit rate decreases and the high-reliability hit rate is flat, yet the unconditioned hit rate increases. This effect is an instance of Simpson’s paradox
<xref ref-type="bibr" rid="pone.0040216-Yule1">[27]</xref>
. The apparent contradiction is resolved by realizing that the relative contributions of the conditional rates change with
<italic>N</italic>
<sub>H</sub>
: the higher
<italic>N</italic>
<sub>H</sub>
, the larger the proportion of trials that fall in the high-reliability-change category. The Z-shaped pattern in our data confirms a prediction from an optimal model of a change discrimination task
<xref ref-type="bibr" rid="pone.0040216-Ma4">[28]</xref>
(elaborated below).</p>
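A hypothetical numerical illustration of the paradox (numbers invented for clarity, not taken from the data): suppose that at N_H = 1 the hit rate is 0.85 when the change is in a high-reliability item (25% of change trials) and 0.55 when it is in a low-reliability item (75% of change trials), giving an unconditioned hit rate of 0.25·0.85 + 0.75·0.55 ≈ 0.63. At N_H = 3, suppose the conditional rates are 0.85 (now 75% of change trials) and 0.45 (25% of change trials), giving 0.75·0.85 + 0.25·0.45 ≈ 0.75. The low-reliability rate fell and the high-reliability rate stayed flat, yet the unconditioned rate rose, purely because the mixture weights shifted toward the high-reliability category.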
<p>Next, we binned change trials by magnitude of change (8 bins) (
<xref ref-type="fig" rid="pone-0040216-g001">Fig. 1C</xref>
). A two-way repeated-measures ANOVA reveals significant main effects of magnitude of change (
<italic>F</italic>
(7,56) = 109.0,
<italic>p</italic>
<0.001) and of
<italic>N</italic>
<sub>H</sub>
(
<italic>F</italic>
(1.9,15.2) = 24.4,
<italic>p</italic>
<0.001) on the proportion of “change” reports, and a significant interaction (
<italic>F</italic>
(28,224) = 5.4,
<italic>p</italic>
<0.001). This indicates that larger changes are easier to detect.</p>
</sec>
<sec id="s2b">
<title>Models</title>
<p>We model the observer’s decision process as consisting of an encoding stage and a decision stage (
<xref ref-type="fig" rid="pone-0040216-g002">Fig. 2A</xref>
). In the encoding stage, precision is either completely determined by stimulus reliability (“equal precision” or EP), or a random variable itself (“variable precision” or VP). Precision is technically defined as Fisher information (see
<xref ref-type="sec" rid="s4">Methods</xref>
) and denoted
<italic>J</italic>
. For a given value of precision,
<italic>J</italic>
, the measurement
<italic>x</italic>
of an orientation
<italic>θ</italic>
follows a probability distribution
<italic>p</italic>
(
<italic>x</italic>
|
<italic>θ</italic>
;
<italic>J</italic>
). For this distribution, we assume a circular Gaussian (Von Mises) distribution, characterized by a concentration parameter
<italic>κ</italic>
that corresponds one-to-one with precision (see
<xref ref-type="supplementary-material" rid="pone.0040216.s003">Text S1</xref>
and
<xref ref-type="supplementary-material" rid="pone.0040216.s001">Fig. S1</xref>
). When precision is variable (VP), the measurement of a stimulus over many trials is described by a doubly stochastic process, formalized as the following integral:</p>
<fig id="pone-0040216-g002" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0040216.g002</object-id>
<label>Figure 2</label>
<caption>
<title>A, Flow diagram of the decision process.</title>
<p>Models differ along three dimensions: whether precision is equal or variable, the observer’s assumption about precision, and the observer’s integration rule. B, Examples of probability density functions over encoding precision for a high-reliability and a low-reliability stimulus (long and short ellipse, respectively) in the variable-precision model. Dashed lines indicate the means. C, The generative model shows statistical dependencies between variables.
<italic>C</italic>
: change occurrence (0 or 1); Δ: magnitude of change;
<bold>Δ</bold>
: vector of change magnitudes at all locations;
<bold>θ</bold>
and
<bold>ϕ</bold>
: vectors of stimulus orientations in the first and second displays, respectively;
<bold>x</bold>
and
<bold>y</bold>
: vectors of measurements of the stimulus orientations. The spatial, temporal, and structural complexities of the task can be recognized in the vector nature of the orientation variables, the two “branches”, and the number of layers, respectively.</p>
</caption>
<graphic xlink:href="pone.0040216.g002"></graphic>
</fig>
<p>
<disp-formula>
<graphic xlink:href="pone.0040216.e001"></graphic>
<label>(1)</label>
</disp-formula>
where
<italic>p</italic>
(
<italic>x</italic>
|
<italic>θ</italic>
;
<italic>J</italic>
) is again the Von Mises distribution and the variability in
<italic>J</italic>
itself,
<italic>p</italic>
(
<italic>J</italic>
), is modeled as a gamma distribution (
<xref ref-type="fig" rid="pone-0040216-g002">Fig. 2B</xref>
). The distribution in Eq. (1) is a mixture of an infinite number of Von Mises distributions, each with its own precision; it is a circular analog of the Student t-distribution.</p>
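Since the displayed equation is an image reference in this record, a rendering of Eq. (1) consistent with the surrounding description is:

\[
p(x \mid \theta) = \int_0^{\infty} p(x \mid \theta; J)\, p(J)\, \mathrm{d}J,
\]

with $p(x \mid \theta; J)$ the Von Mises distribution whose concentration $\kappa$ corresponds one-to-one with $J$, and $p(J)$ a gamma distribution.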
<p>In the decision stage, the Bayes-optimal observer computes on each trial the probability that a change occurred and responds “change” if this probability is greater than 0.5. This is equivalent to responding “change” when:
<disp-formula>
<graphic xlink:href="pone.0040216.e002"></graphic>
<label>(2)</label>
</disp-formula>
</p>
<p>where
<italic>p</italic>
<sub>change</sub>
is the observer’s prior belief that a change occurred,
<italic>N</italic>
is the number of stimuli, and
<italic>d
<sub>i</sub>
</italic>
is the local decision variable (i.e., the posterior probability ratio of change occurrence at the
<italic>i</italic>
<sup>th</sup>
location, denoted
<italic>d
<sub>i</sub>
</italic>
; see
<xref ref-type="supplementary-material" rid="pone.0040216.s003">Text S1</xref>
for derivation).
<disp-formula>
<graphic xlink:href="pone.0040216.e003"></graphic>
<label>(3)</label>
</disp-formula>
</p>
<p>where
<italic>x
<sub>i</sub>
</italic>
and
<italic>y
<sub>i</sub>
</italic>
are the measurements of the
<italic>i</italic>
<sup>th</sup>
stimulus in the first and second displays, respectively,
<italic>κ
<sub>x,i</sub>
</italic>
and
<italic>κ
<sub>y,i</sub>
</italic>
are the corresponding concentration parameters of the noise, and
<italic>I</italic>
<sub>0</sub>
is the modified Bessel function of the first kind of order 0. Eq. (3) represents “weighting” by encoding precision (through
<italic>κ
<sub>x,i</sub>
</italic>
and
<italic>κ
<sub>y,i</sub>
</italic>
) on a trial-by-trial and item-by-item basis, in a way analogous to but more complex than cue combination. It is crucial that the optimal observer knows precision,
<italic>J</italic>
, and therefore
<italic>κ</italic>
, for each display and each item on each trial. Thus, even though Eq. (1) describes a doubly stochastic process over many trials, the optimal observer on a single trial knows the exact conditioned distribution
<italic>p</italic>
(
<italic>x</italic>
|
<italic>θ</italic>
,
<italic>J</italic>
).</p>
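The displayed decision equations are likewise image references; a reconstruction consistent with the textual description (the published forms may differ in notation) is: respond "change" when

\[
\frac{1}{N} \sum_{i=1}^{N} d_i > \frac{1 - p_{\text{change}}}{p_{\text{change}}} \qquad \text{(cf. Eq. 2)},
\]

with the local decision variable

\[
d_i = \frac{I_0(\kappa_{x,i})\, I_0(\kappa_{y,i})}{I_0\!\left(\sqrt{\kappa_{x,i}^2 + \kappa_{y,i}^2 + 2\,\kappa_{x,i}\kappa_{y,i}\cos(x_i - y_i)}\right)} \qquad \text{(cf. Eq. 3)}.
\]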
<p>In the decision stage, the models we consider differ along two dimensions that can be understood in the context of Eqs. (3) and (2), respectively. The first dimension concerns the assumption that the observer makes about encoding precision:</p>
<list list-type="order">
<list-item>
<p>no assumption: complete knowledge of an item’s precision on each trial, i.e. the optimal model;</p>
</list-item>
<list-item>
<p>the assumption that precision is completely determined by shape, ignoring any other variability (suboptimal);</p>
</list-item>
<list-item>
<p>the assumption that precision is equal to the average precision across the display (which will vary across trials), reflecting a “gist” representation of precision (suboptimal);</p>
</list-item>
<list-item>
<p>the assumption that precision is equal throughout the experiment, thus ignoring both variations in shape and other variability (suboptimal).</p>
</list-item>
</list>
<p>If encoding precision is equal (EP), assumptions 1 and 2 are equivalent, because there is no additional variability to ignore. Assumptions 2 to 4 are formalized as variants of Eq. (3) in which the trial-to-trial and item-to-item concentration parameters are replaced by values that are solely determined by stimulus reliability, by the average value in the display, or by a single value throughout the experiment, respectively.</p>
<p>The second dimension along which the models differ is the integration rule that the observer applies to the local decision variables,
<italic>d
<sub>i</sub>
</italic>
. Specifically, besides the optimal rule, Eq. (2), we consider the suboptimal “Max” rule, according to which the observer responds based on the largest local decision variable. The Max decision rule is
<inline-formula>
<inline-graphic xlink:href="pone.0040216.e004.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, with
<italic>k</italic>
a constant criterion. The Max rule has been used widely in signal detection theory models of visual search and is considered a reasonable description of human search behavior
<xref ref-type="bibr" rid="pone.0040216-Nolte1">[29]</xref>
,
<xref ref-type="bibr" rid="pone.0040216-Palmer1">[30]</xref>
,
<xref ref-type="bibr" rid="pone.0040216-Eckstein1">[31]</xref>
,
<xref ref-type="bibr" rid="pone.0040216-Baldassi1">[32]</xref>
(but see
<xref ref-type="bibr" rid="pone.0040216-Ma3">[22]</xref>
). The Max model together with the assumption of single precision (Assumption 4) is equivalent to the (also suboptimal) maximum-absolute-differences model we introduced for change detection in earlier work
<xref ref-type="bibr" rid="pone.0040216-Wilken1">[33]</xref>
(see
<xref ref-type="supplementary-material" rid="pone.0040216.s003">Text S1</xref>
). In total, this produces (4+3)⋅2 = 14 models, listed in
<xref ref-type="table" rid="pone-0040216-t001">Table 1</xref>
. The number of free parameters ranges from 3 to 5.</p>
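To make the two integration rules concrete, a minimal sketch in code (built on the Eq. (2)-(3) reconstruction above, so it should be read as an assumption-laden illustration rather than the authors' implementation; the criterion k of the Max rule is a free parameter):

import numpy as np

def local_decision_variables(x, y, kappa_x, kappa_y):
    """Evidence for a change at each location, given measurements and concentration parameters."""
    kc = np.sqrt(kappa_x**2 + kappa_y**2 + 2 * kappa_x * kappa_y * np.cos(x - y))
    return np.i0(kappa_x) * np.i0(kappa_y) / np.i0(kc)

def respond_optimal(d, p_change=0.5):
    """Optimal rule: compare the average local decision variable to the prior odds of no change."""
    return np.mean(d) > (1 - p_change) / p_change

def respond_max(d, k):
    """Suboptimal Max rule: respond 'change' if the largest local decision variable exceeds k."""
    return np.max(d) > k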
<table-wrap id="pone-0040216-t001" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0040216.t001</object-id>
<label>Table 1</label>
<caption>
<title>List of models considered.</title>
</caption>
<alternatives>
<graphic id="pone-0040216-t001-1" xlink:href="pone.0040216.t001"></graphic>
<table frame="hsides" rules="groups">
<colgroup span="1">
<col align="left" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
</colgroup>
<thead>
<tr>
<td align="left" rowspan="1" colspan="1">Model</td>
<td align="left" rowspan="1" colspan="1">Precision</td>
<td align="left" rowspan="1" colspan="1">Local decision variable (
<italic>d
<sub>i</sub>
</italic>
)</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Decision rule</td>
<td align="left" rowspan="1" colspan="1">#Pars</td>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">VVO</td>
<td align="left" rowspan="1" colspan="1">variable</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0040216.e005.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">with
<italic>κ
<sub>i</sub>
</italic>
the actual value at the
<italic>i</italic>
<sup>th</sup>
location</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0040216.e006.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<sub>4</sub>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">VEO</td>
<td align="left" rowspan="1" colspan="1">variable</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0040216.e007.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">with
<italic>κ
<sub>i</sub>
</italic>
either
<italic>κ</italic>
<sub>low</sub>
or
<italic>κ</italic>
<sub>high</sub>
</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0040216.e008.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<sub>4</sub>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">VAO</td>
<td align="left" rowspan="1" colspan="1">variable</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0040216.e009.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">where
<inline-formula>
<inline-graphic xlink:href="pone.0040216.e010.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is an average over locations</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0040216.e011.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<sub>4</sub>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">VSO</td>
<td align="left" rowspan="1" colspan="1">variable</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0040216.e012.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0040216.e013.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<sub>5</sub>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">VVM</td>
<td align="left" rowspan="1" colspan="1">variable</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0040216.e014.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">with
<italic>κ
<sub>i</sub>
</italic>
the actual value at the
<italic>i</italic>
<sup>th</sup>
location</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0040216.e015.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<sub>4</sub>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">VEM</td>
<td align="left" rowspan="1" colspan="1">variable</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0040216.e016.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">with
<italic>κ
<sub>i</sub>
</italic>
either
<italic>κ</italic>
<sub>low</sub>
or
<italic>κ</italic>
<sub>high</sub>
</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0040216.e017.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<sub>4</sub>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">VAM</td>
<td align="left" rowspan="1" colspan="1">variable</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0040216.e018.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">where
<inline-formula>
<inline-graphic xlink:href="pone.0040216.e019.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is an average over locations</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0040216.e020.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<sub>4</sub>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">VSM</td>
<td align="left" rowspan="1" colspan="1">variable</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0040216.e021.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0040216.e022.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<sub>4</sub>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">EEO</td>
<td align="left" rowspan="1" colspan="1">equal</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0040216.e023.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">with
<italic>κ
<sub>i</sub>
</italic>
either
<italic>κ</italic>
<sub>low</sub>
or
<italic>κ</italic>
<sub>high</sub>
</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0040216.e024.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<sub>3</sub>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">EAO</td>
<td align="left" rowspan="1" colspan="1">equal</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0040216.e025.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">where
<inline-formula>
<inline-graphic xlink:href="pone.0040216.e026.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is an average over locations</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0040216.e027.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<sub>3</sub>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">ESO</td>
<td align="left" rowspan="1" colspan="1">equal</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0040216.e028.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0040216.e029.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<sub>4</sub>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">EEM</td>
<td align="left" rowspan="1" colspan="1">equal</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0040216.e030.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">with
<italic>κ
<sub>i</sub>
</italic>
either
<italic>κ</italic>
<sub>low</sub>
or
<italic>κ</italic>
<sub>high</sub>
</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0040216.e031.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<sub>3</sub>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">EAM</td>
<td align="left" rowspan="1" colspan="1">equal</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0040216.e032.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">where
<inline-formula>
<inline-graphic xlink:href="pone.0040216.e033.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is an average over locations</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0040216.e034.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<sub>3</sub>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">ESM</td>
<td align="left" rowspan="1" colspan="1">equal</td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0040216.e035.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0040216.e036.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">
<sub>3</sub>
</td>
</tr>
</tbody>
</table>
</alternatives>
<table-wrap-foot>
<fn id="nt101">
<label></label>
<p>The first letter stands for variable (V) or equal (E) encoding precision. The second letter indicates the observer’s assumption about encoding precision (V: variable; E: equal; A: sample average over locations; S: single value). The third letter stands for the optimal (O) or Max (M) integration rule. The equivalences (⇔) in the VSM and ESM models are explained in the
<xref ref-type="supplementary-material" rid="pone.0040216.s003">Text S1</xref>
; the notation |⋅| denotes circular distance.</p>
</fn>
</table-wrap-foot>
</table-wrap>
</sec>
<sec id="s2c">
<title>Model Comparison</title>
<p>We compared the models in two ways. First, we fitted each model’s parameters using maximum-likelihood estimation and computed
<italic>R</italic>
<sup>2</sup>
for the fits to the data in
<xref ref-type="fig" rid="pone-0040216-g001">Fig. 1B-C</xref>
(
<xref ref-type="fig" rid="pone-0040216-g003">Fig. 3</xref>
). The winning model was the one in which encoding precision is variable, observers optimally weight observations by their encoding precision, and they use the optimal rule for integrating information across locations (the VVO model from
<xref ref-type="table" rid="pone-0040216-t001">Table 1</xref>
). This model had the highest goodness-of-fit for hit and false-alarm rates (
<italic>R</italic>
<sup>2</sup>
 = 0.97), as well as for psychometric curves (
<italic>R</italic>
<sup>2</sup>
 = 0.89). Maximum-likelihood estimates of model parameters are given in
<xref ref-type="supplementary-material" rid="pone.0040216.s002">Table S1</xref>
.</p>
<fig id="pone-0040216-g003" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0040216.g003</object-id>
<label>Figure 3</label>
<caption>
<title>Fits of all 14 models to the data in
<xref ref-type="fig" rid="pone-0040216-g001">Fig. 1B-C</xref>
(axis labels and scales as there).</title>
<p>VP  =  variable precision; EP  =  equal precision; AP  =  average precision; SP  =  single precision. Error bars and shaded areas represent ±1 s.e.m. in the data and the model, respectively. The number in each plot is the
<italic>R</italic>
<sup>2</sup>
of the fit (for the left plot in each pair, computed over false-alarm rates and unconditioned hit rates). Frame color indicates model goodness of fit relative to the winning model, as obtained from Bayesian model comparison (
<xref ref-type="fig" rid="pone-0040216-g004">Fig. 4</xref>
).</p>
</caption>
<graphic xlink:href="pone.0040216.g003"></graphic>
</fig>
<p>Second, to distinguish the models in a more powerful way, we performed Bayesian model comparison
<xref ref-type="bibr" rid="pone.0040216-MacKay1">[34]</xref>
. This method computes the average likelihood over all parameter combinations, thereby automatically correcting for the number of free parameters (see Online Methods). The VVO model is the clear winner for each of the 9 subjects individually. Bayesian model comparison revealed that the log likelihood of the VVO model exceeds that of the next best model (VVM, which uses the Max rule) by the decisive difference of 15.4±17.3 (mean and s.e.m.) log likelihood points (
<xref ref-type="fig" rid="pone-0040216-g004">Fig. 4</xref>
).</p>
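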
<fig id="pone-0040216-g004" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0040216.g004</object-id>
<label>Figure 4</label>
<caption>
<title>Log likelihood of each model relative to the VVO model.</title>
<p>Negative values indicate that the model is less likely than the VVO model. Error bars represent s.e.m. Abbreviations and color scheme are as in
<xref ref-type="table" rid="pone-0040216-t001">Table 1</xref>
.</p>
</caption>
<graphic xlink:href="pone.0040216.g004"></graphic>
</fig>
<p>The VVO model exceeds the EEO model – the best equal-precision model – by 36.3±6.3 log likelihood points, suggesting variability in encoding precision. To confirm that this advantage is not due to unmodeled noise at the decision level (the last two steps in
<xref ref-type="fig" rid="pone-0040216-g002">Fig. 2A</xref>
), we tested two EEO model variants that included such noise. In the first variant (“local decision noise”), we added zero-mean Gaussian noise with standard deviation
<italic>σ</italic>
<sub>local</sub>
to the log of the local decision variable,
<italic>d
<sub>i</sub>
</italic>
. In the second variant (“global decision noise”), we added the same type of noise (with standard deviation
<italic>σ</italic>
<sub>global</sub>
) to the log of the left-hand side of Eq. (2). The best-fitting values were
<italic>σ</italic>
<sub>local</sub>
 = 0.34±0.04 and
<italic>σ</italic>
<sub>global</sub>
 = 0.30±0.08. These values are small given that log decision variables generally ranged from about −4 to 20. Furthermore, we computed the model likelihoods of these two variants, and compared them to that of the winning model, VVO. The EEO models with local and global decision noise had log likelihoods of −37.1±7.0 and −38.2±7.0 relative to VVO, respectively. Moreover, the VVO model described the data better than both noisy models in all nine subjects individually. Thus, decision noise cannot account for the difference between the VVO and EEO models.</p>
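<p>As a concrete illustration of these two variants, the Python sketch below injects zero-mean Gaussian noise into precomputed log decision variables and applies the “change” criterion <italic>d</italic> > 1. The function combine stands in for the integration step of Eq. (2) and is a placeholder here; in each variant only one of the two noise standard deviations was nonzero.</p>
<preformat>
# Sketch of the "local" and "global" decision-noise variants (see lead-in for assumptions).
import numpy as np

def noisy_change_report(log_d_local, combine, sigma_local=0.0, sigma_global=0.0, rng=None):
    """log_d_local : log local decision variables log d_i, one per item
       combine     : placeholder for the integration step of Eq. (2), mapping the
                     (possibly perturbed) log d_i to the log global decision variable"""
    rng = rng or np.random.default_rng()
    # local decision noise: perturb each log d_i before integration
    perturbed = np.asarray(log_d_local) + rng.normal(0.0, sigma_local, size=np.shape(log_d_local))
    # global decision noise: perturb the log of the combined decision variable
    log_d_global = combine(perturbed) + rng.normal(0.0, sigma_global)
    return log_d_global > 0.0   # equivalent to d > 1: report "change"
</preformat>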
</sec>
<sec id="s2d">
<title>Simpson’s Paradox</title>
<p>As
<xref ref-type="fig" rid="pone-0040216-g003">Fig. 3A</xref>
shows, the VVO model accounts for the characteristic Z-shape in the hit rates. The intuition behind the Z-shape in the context of the VVO model – and in fact any model that weights observations by their encoding precision – is as follows. The unconditioned hit rate increases with the number of high-reliability stimuli,
<italic>N</italic>
<sub>H</sub>
, because more information is available in the measurements, and the observer utilizes this information. The hit rate conditioned on the changing item having low reliability decreases with increasing
<italic>N</italic>
<sub>H</sub>
because a higher value of
<italic>N</italic>
<sub>H</sub>
means that more non-changing items have high reliability. Since in the VVO model, more precise measurements influence the decision more strongly, the overall evidence for “no change” becomes stronger and subjects become less likely to report “change”. Our result confirms a prediction from an earlier Bayesian model of change discrimination
<xref ref-type="bibr" rid="pone.0040216-Ma4">[28]</xref>
and provides additional evidence for probabilistic computation by humans in change detection.</p>
</sec>
</sec>
<sec id="s3">
<title>Discussion</title>
<p>We have found that in detecting a change among multiple stimuli: a) the encoding precision of a stimulus is variable even for a given value of stimulus reliability; b) observers near-optimally take into account both variations in stimulus reliability and the additional variability. These results raise several issues.</p>
<p>First, we modeled the distribution of encoding precision as a gamma distribution, with precision being independent across locations and trials. While this choice was convenient and led to good fits, alternatives to the gamma and independence assumptions must be considered.</p>
<p>Second, what causes variability in encoding precision? Several possible factors were mentioned in the introduction. In addition, the precision of memorized items could decay in variable ways, or precision could simply depend on the task-relevant feature value
<xref ref-type="bibr" rid="pone.0040216-Girshick1">[35]</xref>
. The relative contributions of these factors remain to be determined.</p>
<p>Third, variability in precision may have implications for encoding models in other tasks. It could potentially account for subject responses that are usually modeled as lapses, since those correspond to a precision of zero. Moreover, in cue combination, it has been suggested that sensory noise is best described by a mixture of a Gaussian and a uniform distribution
<xref ref-type="bibr" rid="pone.0040216-Natarajan1">[36]</xref>
or of two Gaussian distributions
<xref ref-type="bibr" rid="pone.0040216-Girshick2">[37]</xref>
. These mixture models can be regarded as approximations to a full-fledged doubly stochastic process as in Eq. (1), since the mixture components correspond to two different values of precision.</p>
<p>Fourth, how variability in precision can be recognized in neural activity depends on the neural coding scheme one subscribes to. In the framework of Poisson-like probabilistic population codes, variability in encoding precision might correspond to variability in population gain
<xref ref-type="bibr" rid="pone.0040216-Ma2">[18]</xref>
,
<xref ref-type="bibr" rid="pone.0040216-Seung1">[38]</xref>
. There is initial evidence that gain does vary
<xref ref-type="bibr" rid="pone.0040216-Goris1">[12]</xref>
,
<xref ref-type="bibr" rid="pone.0040216-Churchland1">[13]</xref>
,
<xref ref-type="bibr" rid="pone.0040216-Churchland2">[14]</xref>
, and this variability might in part be due to attentional factors
<xref ref-type="bibr" rid="pone.0040216-Saproo1">[39]</xref>
,
<xref ref-type="bibr" rid="pone.0040216-Desimone1">[40]</xref>
,
<xref ref-type="bibr" rid="pone.0040216-Connor1">[41]</xref>
,
<xref ref-type="bibr" rid="pone.0040216-McAdams1">[42]</xref>
. Neuroimaging studies have found that trial-to-trial fluctuations in perceptual performance correlate with fluctuations in stimulus-independent, ongoing neural activity in dorsal anterior cingulate cortex, dorsolateral prefrontal cortex, and dorsal parietal areas
<xref ref-type="bibr" rid="pone.0040216-Sadaghiani1">[43]</xref>
,
<xref ref-type="bibr" rid="pone.0040216-Sapir1">[44]</xref>
. This activity might in part reflect the attentional state of the observer, in which case its fluctuations might partially account for variability in precision.</p>
<p>Fifth, how can a neural population “know” encoding precision for use in decision-making? Again in probabilistic population coding, a neural population encodes on each trial a full likelihood function over the stimulus, whose inverse width represents the precision/certainty associated with that stimulus on that trial
<xref ref-type="bibr" rid="pone.0040216-Ma2">[18]</xref>
. Thus, encoding precision is implicitly known on a trial-by-trial basis and can be used in downstream computation. A next step would be to use probabilistic population codes to design a neural network that takes Poisson-like representations of the individual stimuli in both displays as input and has an output layer that encodes the probability that a change occurred (potentially in the medial temporal lobe
<xref ref-type="bibr" rid="pone.0040216-Reddy1">[45]</xref>
).</p>
<p>Our work illustrates a new role for change detection in psychology. Traditionally, change detection has only been used to probe capacity limitations in short-term memory
<xref ref-type="bibr" rid="pone.0040216-Pashler1">[25]</xref>
,
<xref ref-type="bibr" rid="pone.0040216-Eng1">[46]</xref>
,
<xref ref-type="bibr" rid="pone.0040216-Cowan1">[47]</xref>
,
<xref ref-type="bibr" rid="pone.0040216-Luck1">[48]</xref>
. Viewing change detection as inference on noisy sensory measurements is relatively new
<xref ref-type="bibr" rid="pone.0040216-Wilken1">[33]</xref>
. Here, we have demonstrated the use of change detection in studying whether the brain computes with probability distributions. Behavioral evidence for probabilistic computation had so far been largely limited to tasks with relatively simple statistical structures, such as cue combination. Change detection is a case study of complex inference, because of the presence of multiple relevant stimuli (spatial complexity), because stimulus information must be integrated into an abstract categorical judgment (structural complexity), and because perception interacts with visual short-term memory (temporal complexity).</p>
<p>A final caveat. It is tempting to equate optimality with the notion that the brain computes with probabilities on an individual-trial basis (probabilistic computation). These are, however, orthogonal notions
<xref ref-type="bibr" rid="pone.0040216-Ma1">[16]</xref>
,
<xref ref-type="bibr" rid="pone.0040216-Whiteley1">[49]</xref>
. In some tasks, such as judging whether an oriented stimulus is tilted to the left or to the right, optimality can be attained using only point estimates and does not require trial-by-trial representations of probability. Conversely, an observer might take into account precision – and perhaps represent probability – on a trial-by-trial and item-by-item basis, but do so in a suboptimal way. Here, we have provided evidence for both optimality and probabilistic computation in change detection. To test for probabilistic computation, we varied reliability unpredictably without giving trial-to-trial feedback, and compared models in which the observer does or does not take into account precision on a trial-by-trial and item-by-item basis. To test for optimality, we compared the optimal decision rule against a plausible suboptimal one, the Max rule. Thus, we were to some extent able to disentangle Bayesian optimality from probabilistic computation. We speculate that as task complexity increases, optimality will break down at some point, but probabilistic computation will continue to be performed – in other words, humans are suboptimal, probabilistic observers.</p>
</sec>
<sec sec-type="methods" id="s4">
<title>Methods</title>
<sec id="s4a">
<title>Stimuli</title>
<p>Stimuli were displayed on a 21″ LCD monitor at a viewing distance of 60 cm. Each stimulus display contained four oriented ellipses. Two types of ellipses were used: “long” and “short” ones. “Long” ellipses had minor and major axes of 0.37 and 1.02 degrees of visual angle (deg), respectively. “Short” ellipses had the same area, but their elongations were determined separately for each subject (see Procedure). On each trial, ellipse centers were chosen by placing one at a random location on an imaginary circle of radius 7 deg around the screen center, placing the next one 90° counterclockwise from the first along the circle, etc., until all four ellipses had been placed. This spacing was sufficiently large to avoid crowding effects. Each ellipse position was jittered by a random amount between −0.3 and 0.3 deg in
<italic>x</italic>
- and
<italic>y</italic>
-directions (independently). Stimulus and background luminances were 95.7 and 33.1 cd/m
<sup>2</sup>
, respectively.</p>
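<p>A minimal Python sketch of this placement rule is given below; positions are in degrees of visual angle relative to the screen center, and the uniform distribution of the jitter is an assumption consistent with the description above.</p>
<preformat>
# Sketch of the ellipse placement rule (see lead-in for assumptions).
import numpy as np

def ellipse_centers(rng=None, radius=7.0, jitter=0.3, n_items=4):
    """Centers of the four ellipses, in degrees of visual angle from the screen center."""
    rng = rng or np.random.default_rng()
    start = rng.uniform(0.0, 2.0 * np.pi)                 # first item at a random point on the circle
    angles = start + np.arange(n_items) * (np.pi / 2.0)   # successive items 90 deg counterclockwise
    centers = radius * np.column_stack([np.cos(angles), np.sin(angles)])
    centers += rng.uniform(-jitter, jitter, size=centers.shape)   # independent x- and y-jitter
    return centers
</preformat>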
</sec>
<sec id="s4b">
<title>Subjects</title>
<p>Nine subjects participated (6 naïve, 3 authors; 1 female). All were between 22 and 32 years old and had normal or corrected-to-normal vision. The study was approved by the Institutional Review Board for Human Subject Research for Baylor College of Medicine; all subjects gave written informed consent.</p>
</sec>
<sec id="s4c">
<title>Procedure</title>
<p>There were three types of trial blocks: testing blocks, practice blocks, and threshold blocks. In each testing block, a trial began with a screen that was blank except for a central fixation cross, shown for 1000 ms. The first stimulus display was presented for 100 ms, followed by a delay period of 1000 ms, followed by a second stimulus display for 100 ms. On each trial, the number of long ellipses was chosen randomly with equal probability from 0 to 4. The locations of the long ellipses were chosen randomly given the constraint of their total number; all other ellipses were short. The orientation of each ellipse was drawn independently from a uniform distribution over all possible orientations. The second stimulus display was identical to the first, except that there was a 50% chance that one of the ellipses had changed its orientation by an angle drawn from a uniform distribution over all possible orientations. Following the second display, the observer pressed a key to indicate whether there was a change between the first and second displays. A response caused the next trial to begin. No trial-by-trial feedback was given. A practice block was identical to a testing block, except that all stimuli on a given trial had the same reliability, which was varied randomly across trials. Stimulus presentation time was initially 333 ms and decreased by 33 ms every 32 trials, allowing the observer to ease into the task. Feedback was given on each trial. The practice session consisted of 256 trials. A threshold block was identical to a practice block but used only the shortest stimulus presentation time (100 ms), and was 400 trials in length.</p>
<p>At the beginning of each session, subjects were informed in lay terms about the distributions from which the stimuli were drawn (e.g., “The change is equally likely to be of any magnitude.”). Each observer completed three sessions on separate days. The first session began with a practice block for naïve subjects only. All subjects then did one threshold block of 400 trials. We fitted a cumulative normal distribution to accuracy as a function of ellipse elongation and extrapolated performance to the maximal elongation. If the extrapolated performance was equal to or greater than 75%, we defined the elongation of a “short” ellipse as the 65%-correct point of the fitted curve. If it was lower than 75%, the observer repeated the threshold block; if extrapolated performance on the repeated block was again lower than 75%, the observer was excluded from the study. The three sessions contained 400, 800, and 800 testing trials, respectively. There were two timed breaks spread evenly across the 400-trial session and four in the 800-trial sessions. During each break, a screen showing the percentage correct in the current block was displayed; cumulative performance was shown at the end of each session.</p>
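<p>The threshold analysis can be sketched as follows, in Python. The 50% guessing floor and unit asymptote of the psychometric function are assumptions made for illustration; the description above specifies only that a cumulative normal was fitted to accuracy as a function of elongation, extrapolated to the maximal elongation, and read off at the 65%-correct point.</p>
<preformat>
# Sketch of the threshold analysis (psychometric-function parameterization is assumed).
import numpy as np
from scipy.stats import norm
from scipy.optimize import curve_fit

def cum_normal_accuracy(elongation, mu, sigma):
    # assumed form: accuracy rises from chance (0.5) toward 1 with elongation
    return 0.5 + 0.5 * norm.cdf(elongation, loc=mu, scale=sigma)

def short_ellipse_elongation(elongations, accuracy, max_elongation):
    (mu, sigma), _ = curve_fit(cum_normal_accuracy, elongations, accuracy,
                               p0=[np.median(elongations), 1.0],
                               bounds=([-np.inf, 1e-6], [np.inf, np.inf]))
    if cum_normal_accuracy(max_elongation, mu, sigma) < 0.75:
        return None   # extrapolated performance too low: repeat the threshold block
    # elongation at which the fitted curve crosses 65% correct
    return norm.ppf((0.65 - 0.5) / 0.5, loc=mu, scale=sigma)
</preformat>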
</sec>
<sec id="s4d">
<title>Encoding Model</title>
<p>For convenience, all orientations were remapped from [−π/2,π/2) to [−π,π). For a true stimulus orientation
<italic>θ</italic>
, we assumed the measurement
<italic>x</italic>
to follow a Von Mises distribution,
<inline-formula>
<inline-graphic xlink:href="pone.0040216.e037.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, where
<italic>κ</italic>
is the concentration parameter.
<italic>κ</italic>
is determined by the amount of resource allocated to the stimulus,
<italic>J</italic>
. The relationship between
<italic>J</italic>
and
<italic>κ</italic>
is
<inline-formula>
<inline-graphic xlink:href="pone.0040216.e038.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, where
<italic>I</italic>
<sub>1</sub>
is the modified Bessel function of the first kind of order 1 (see
<xref ref-type="supplementary-material" rid="pone.0040216.s003">Text S1</xref>
). In the EP model,
<italic>J</italic>
is determined by ellipse elongation only. In the VP model,
<italic>J</italic>
is drawn from a gamma distribution with mean
<inline-formula>
<inline-graphic xlink:href="pone.0040216.e039.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and scale parameter
<italic>τ</italic>
, where
<inline-formula>
<inline-graphic xlink:href="pone.0040216.e040.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is determined by ellipse elongation (it is accordingly denoted
<inline-formula>
<inline-graphic xlink:href="pone.0040216.e041.jpg" mimetype="image"></inline-graphic>
</inline-formula>
or
<inline-formula>
<inline-graphic xlink:href="pone.0040216.e042.jpg" mimetype="image"></inline-graphic>
</inline-formula>
).</p>
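<p>To make the encoding step concrete, the following minimal Python sketch draws a single measurement under the VP model. It assumes the precision-concentration mapping <italic>J</italic> = <italic>κI</italic><sub>1</sub>(<italic>κ</italic>)/<italic>I</italic><sub>0</sub>(<italic>κ</italic>), the Fisher information of a Von Mises distribution (the exact relationship used in the fits is given in Text S1); the bracketed root-finding used to invert that mapping is an illustrative choice rather than the exact implementation used for the fits reported above.</p>
<preformat>
# Minimal sketch of one draw from the VP encoding model (see lead-in for assumptions).
import numpy as np
from scipy.special import i0e, i1e           # exponentially scaled Bessel functions
from scipy.optimize import brentq

def J_of_kappa(kappa):
    """Assumed precision-concentration mapping J = kappa * I1(kappa) / I0(kappa)."""
    return kappa * i1e(kappa) / i0e(kappa)    # the exponential scaling cancels in the ratio

def kappa_of_J(J):
    """Numerically invert J_of_kappa, which increases monotonically with kappa."""
    return brentq(lambda k: J_of_kappa(k) - J, 1e-9, max(1e4, 2.0 * J))

def draw_measurement(theta, J_bar, tau, rng):
    """One noisy measurement of a true orientation theta (remapped to [-pi, pi)).

    J_bar : mean precision for the item's reliability (low or high)
    tau   : scale parameter of the gamma distribution over precision
    """
    J = rng.gamma(shape=J_bar / tau, scale=tau)   # item- and trial-specific precision
    kappa = kappa_of_J(J)                         # concentration of the Von Mises noise
    return rng.vonmises(theta, kappa)

rng = np.random.default_rng(0)
x = draw_measurement(theta=0.3, J_bar=20.0, tau=10.0, rng=rng)
</preformat>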
</sec>
<sec id="s4e">
<title>Model Predictions</title>
<p>We are interested in computing the probability, predicted by a model, of reporting “change” for a set of stimuli and corresponding reliabilities, given a set of parameter values. This probability is equal to the probability that
<italic>d</italic>
>1 for measurements (
<bold>x</bold>
,
<bold>y</bold>
) drawn using the generative model with the given parameters. This probability only depends on the magnitude of change, Δ, the number of high-reliability stimuli,
<italic>N</italic>
<sub>H</sub>
, and whether a change, if any, occurred in a low-reliability or a high-reliability stimulus. We binned Δ every 3 degrees between 0 and 90 degrees, resulting in 31 values;
<italic>N</italic>
<sub>H</sub>
takes 5 possible values, resulting in 31⋅5⋅2 = 310 trial types. For each trial type, we approximated the distributions of
<bold>x</bold>
and
<bold>y</bold>
using a Monte Carlo simulation with 1,000 samples. For each sample, the model’s decision rule was applied, and the proportion of “change” responses among all samples was determined. This returned an estimate of the model’s probability of reporting “change” on a given trial, for the given parameter values. The entire procedure was repeated for all parameter combinations.</p>
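<p>The Monte Carlo step itself amounts to a short loop. The Python sketch below shows it; simulate_measurements and decision_variable are hypothetical placeholders for the model-specific generative step and decision rule (both detailed in Text S1).</p>
<preformat>
# Sketch of the Monte Carlo estimate of p("change") for one trial type.
import numpy as np

def p_report_change(simulate_measurements, decision_variable, n_samples=1000, rng=None):
    """Monte Carlo estimate of the probability of reporting "change" for one
    trial type (a given change magnitude, number of high-reliability items,
    and reliability of the changed item)."""
    rng = rng or np.random.default_rng()
    reports = 0
    for _ in range(n_samples):
        x, y = simulate_measurements(rng)          # measurements of first and second display
        reports += decision_variable(x, y) > 1.0   # "change" is reported iff d > 1
    return reports / n_samples
</preformat>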
</sec>
<sec id="s4f">
<title>Model Fitting</title>
<p>For a given model, we denote the vector of model parameters by
<bold>t</bold>
. The likelihood of
<bold>t</bold>
is the probability of the human subject’s empirical responses given
<bold>t</bold>
:
<disp-formula>
<graphic xlink:href="pone.0040216.e043"></graphic>
</disp-formula>
</p>
<p>where
<italic>N</italic>
<sub>trials</sub>
is the total number of trials,
<inline-formula>
<inline-graphic xlink:href="pone.0040216.e044.jpg" mimetype="image"></inline-graphic>
</inline-formula>
the subject’s response on the
<italic>k</italic>
<sup>th</sup>
trial, and stimuli
<italic>
<sub>k</sub>
</italic>
is shorthand for the stimulus orientations and their reliabilities in both displays. The maximum-likelihood estimate of the parameters is the value of
<bold>t</bold>
that maximizes
<italic>L</italic>
(
<bold>t</bold>
).</p>
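<p>Given the per-trial probabilities of reporting “change” obtained from the Monte Carlo step above, the log of <italic>L</italic>(<bold>t</bold>) is a sum of Bernoulli log probabilities, as the Python sketch below illustrates; the clipping constant is a numerical guard added for illustration and is not part of the model.</p>
<preformat>
# Sketch of the log likelihood of one parameter vector t.
import numpy as np

def log_likelihood(responses, p_change_pred, eps=1e-6):
    """responses     : 0/1 subject responses ("change" = 1), one per trial
       p_change_pred : model's predicted probability of reporting "change" on
                       each trial under t (e.g., from the Monte Carlo sketch above)
       eps           : numerical guard against log(0), added for illustration"""
    p = np.clip(np.asarray(p_change_pred, dtype=float), eps, 1.0 - eps)
    r = np.asarray(responses, dtype=float)
    return np.sum(r * np.log(p) + (1.0 - r) * np.log(1.0 - p))
</preformat>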
</sec>
<sec id="s4g">
<title>Bayesian Model Comparison</title>
<p>Each model
<italic>m</italic>
produces a prediction about the response on each trial,
<italic>p</italic>
(
<inline-formula>
<inline-graphic xlink:href="pone.0040216.e045.jpg" mimetype="image"></inline-graphic>
</inline-formula>
|stimuli
<italic>
<sub>k</sub>
</italic>
,
<bold>t</bold>
,
<italic>m</italic>
). Bayesian model comparison
<xref ref-type="bibr" rid="pone.0040216-MacKay1">[34]</xref>
consists of calculating for each model the probability of finding a subject’s actual responses under this distribution, averaged over free parameters:</p>
<p>
<disp-formula>
<graphic xlink:href="pone.0040216.e046"></graphic>
</disp-formula>
</p>
<p>It is convenient to compute the logarithm of
<italic>L</italic>
(
<italic>m</italic>
) and write it as:
<disp-formula>
<graphic xlink:href="pone.0040216.e047"></graphic>
<label>(4)</label>
</disp-formula>
</p>
<p>where
<inline-formula>
<inline-graphic xlink:href="pone.0040216.e048.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0040216.e049.jpg" mimetype="image"></inline-graphic>
</inline-formula>
This form prevents numerical problems, since the exponential in the integrand of Eq. (4) is now of order 1 near the maximum-likelihood value of
<bold>t</bold>
. For the parameter prior, we assume a uniform distribution across some range, whose size we denote
<italic>R
<sub>j</sub>
</italic>
for the
<italic>j</italic>
<sup>th</sup>
parameter. Ranges were as follows: [1,100] for
<italic>J</italic>
<sub>low</sub>
,
<italic>J</italic>
<sub>high</sub>
,
<italic>J</italic>
<sub>assumed</sub>
,
<inline-formula>
<inline-graphic xlink:href="pone.0040216.e050.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, and
<inline-formula>
<inline-graphic xlink:href="pone.0040216.e051.jpg" mimetype="image"></inline-graphic>
</inline-formula>
;
<xref ref-type="bibr" rid="pone.0040216-Tolhurst1">[1]</xref>
,
<xref ref-type="bibr" rid="pone.0040216-Palmer1">[30]</xref>
for
<italic>τ</italic>
; [−2.2, 51.8] for the Max model criterion
<italic>k</italic>
; [0.3, 0.7] for
<italic>p</italic>
<sub>change</sub>
. Eq. (4) becomes
<inline-formula>
<inline-graphic xlink:href="pone.0040216.e052.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, where dim
<bold>t</bold>
is the number of parameters. We approximated the integral through a Riemann sum. We tested the parameter-fitting and model-comparison code on synthetic data generated from each of the 14 models; parameters were recovered correctly and the generating model always won, showing that the models are distinguishable using this method.</p>
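<p>A minimal Python sketch of this computation is given below. It assumes that the log likelihood has already been evaluated on a regular grid over the parameter space; the uniform prior contributes a factor of 1/∏<italic>R<sub>j</sub></italic>, the Riemann sum supplies the grid-cell volume, and subtracting the maximum log likelihood implements the numerical safeguard of Eq. (4).</p>
<preformat>
# Sketch of the marginal-likelihood computation on a regular parameter grid
# with a uniform prior (see lead-in for assumptions).
import numpy as np

def log_marginal_likelihood(loglik_grid, ranges, grid_steps):
    """loglik_grid : log likelihoods LL(t) evaluated on a regular grid over the parameters
       ranges      : prior range sizes R_j, one per parameter
       grid_steps  : grid spacings, one per parameter"""
    ll = np.asarray(loglik_grid, dtype=float)
    ll_max = ll.max()                           # maximum log likelihood, as in Eq. (4)
    cell_volume = np.prod(grid_steps)           # Riemann-sum volume element
    prior_density = 1.0 / np.prod(ranges)       # uniform prior over the stated ranges
    integral = np.sum(np.exp(ll - ll_max)) * cell_volume * prior_density
    return ll_max + np.log(integral)
</preformat>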
</sec>
</sec>
<sec sec-type="supplementary-material" id="s5">
<title>Supporting Information</title>
<supplementary-material content-type="local-data" id="pone.0040216.s001">
<label>Figure S1</label>
<caption>
<p>
<bold>Encoding precision as a function of the concentration parameter of the Von Mises distribution.</bold>
The dashed line is the identity line.</p>
<p>(TIFF)</p>
</caption>
<media xlink:href="pone.0040216.s001.tiff" mimetype="image" mime-subtype="tiff">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="pone.0040216.s002">
<label>Table S1</label>
<caption>
<p>
<bold>Parameter values for all models.</bold>
Mean and s.e.m. are over subjects.</p>
<p>(DOCX)</p>
</caption>
<media xlink:href="pone.0040216.s002.docx" mimetype="application" mime-subtype="msword">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="pone.0040216.s003">
<label>Text S1</label>
<caption>
<p>
<bold>Supporting information.</bold>
Contains: Relationship between precision and concentration parameter; Equal-precision and variable-precision models; Optimally inferring change occurrence; The Max model</p>
<p>(DOCX)</p>
</caption>
<media xlink:href="pone.0040216.s003.docx" mimetype="application" mime-subtype="msword">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
</sec>
</body>
<back>
<fn-group>
<fn fn-type="conflict">
<p>
<bold>Competing Interests: </bold>
The authors have declared that no competing interests exist.</p>
</fn>
<fn fn-type="financial-disclosure">
<p>
<bold>Funding: </bold>
WJM is supported by award number R01EY020958 from the National Eye Institute (
<ext-link ext-link-type="uri" xlink:href="http://www.nei.nih.gov/">http://www.nei.nih.gov/</ext-link>
). RvdB is supported by the Netherlands Organisation for Scientific Research (
<ext-link ext-link-type="uri" xlink:href="http://www.nwo.nl">http://www.nwo.nl</ext-link>
). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.</p>
</fn>
</fn-group>
<ref-list>
<title>References</title>
<ref id="pone.0040216-Tolhurst1">
<label>1</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tolhurst</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Movshon</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Dean</surname>
<given-names>A</given-names>
</name>
</person-group>
<year>1982</year>
<article-title>The statistical reliability of signals in single neurons in cat and monkey visual cortex.</article-title>
<source>Vision Research</source>
<volume>23</volume>
<fpage>775</fpage>
<lpage>785</lpage>
<pub-id pub-id-type="pmid">6623937</pub-id>
</element-citation>
</ref>
<ref id="pone.0040216-Faisal1">
<label>2</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Faisal</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Selen</surname>
<given-names>LPJ</given-names>
</name>
<name>
<surname>Wolpert</surname>
<given-names>DM</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>Noise in the nervous system.</article-title>
<source>Nat Rev Neurosci</source>
<volume>9</volume>
<fpage>292</fpage>
<lpage>303</lpage>
<pub-id pub-id-type="pmid">18319728</pub-id>
</element-citation>
</ref>
<ref id="pone.0040216-Gershon1">
<label>3</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gershon</surname>
<given-names>ED</given-names>
</name>
<name>
<surname>Wiener</surname>
<given-names>MC</given-names>
</name>
<name>
<surname>Latham</surname>
<given-names>PE</given-names>
</name>
<name>
<surname>Richmond</surname>
<given-names>BJ</given-names>
</name>
</person-group>
<year>1998</year>
<article-title>Coding strategies in monkey V1 and inferior temporal cortices.</article-title>
<source>Journal of Neurophysiology</source>
<volume>79</volume>
<fpage>1135</fpage>
<lpage>1144</lpage>
<pub-id pub-id-type="pmid">9497396</pub-id>
</element-citation>
</ref>
<ref id="pone.0040216-Shadlen1">
<label>4</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Shadlen</surname>
<given-names>MN</given-names>
</name>
<name>
<surname>Britten</surname>
<given-names>KH</given-names>
</name>
<name>
<surname>Newsome</surname>
<given-names>WT</given-names>
</name>
<name>
<surname>Movshon</surname>
<given-names>JA</given-names>
</name>
</person-group>
<year>1996</year>
<article-title>A computational analysis of the relationship between neuronal and behavioral responses to visual motion.</article-title>
<source>J Neurosci</source>
<volume>16</volume>
<fpage>1486</fpage>
<lpage>1510</lpage>
<pub-id pub-id-type="pmid">8778300</pub-id>
</element-citation>
</ref>
<ref id="pone.0040216-Green1">
<label>5</label>
<element-citation publication-type="other">
<person-group person-group-type="author">
<name>
<surname>Green</surname>
<given-names>DM</given-names>
</name>
<name>
<surname>Swets</surname>
<given-names>JA</given-names>
</name>
</person-group>
<year>1966</year>
<article-title>Signal detection theory and psychophysics.</article-title>
<source>Los Altos, CA: John Wiley & Sons</source>
</element-citation>
</ref>
<ref id="pone.0040216-Matthias1">
<label>6</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Matthias</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Bublak</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Costa</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Mueller</surname>
<given-names>HJ</given-names>
</name>
<name>
<surname>Schneider</surname>
<given-names>WX</given-names>
</name>
<etal></etal>
</person-group>
<year>2009</year>
<article-title>Attentional and sensory effects of lowered levels of intrinsic alertness.</article-title>
<source>Neuropsychologia</source>
<volume>47</volume>
<fpage>3255</fpage>
<lpage>3264</lpage>
<pub-id pub-id-type="pmid">19682470</pub-id>
</element-citation>
</ref>
<ref id="pone.0040216-Brady1">
<label>7</label>
<element-citation publication-type="other">
<person-group person-group-type="author">
<name>
<surname>Brady</surname>
<given-names>TF</given-names>
</name>
<name>
<surname>Tenenbaum</surname>
<given-names>JB</given-names>
</name>
</person-group>
<year>2010</year>
<article-title>Encoding higher-order structure in visual working memory: A probabilistic model.</article-title>
<person-group person-group-type="editor">
<name>
<surname>Ohlsson</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Catrambone</surname>
<given-names>R</given-names>
</name>
</person-group>
<fpage>411</fpage>
<lpage>416</lpage>
<comment>Proceedings of the 32nd Annual Conference of the Cognitive Science Society.</comment>
<comment>Austin, TX: Cognitive Science.</comment>
</element-citation>
</ref>
<ref id="pone.0040216-Brady2">
<label>8</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Brady</surname>
<given-names>TJ</given-names>
</name>
<name>
<surname>Alvarez</surname>
<given-names>GA</given-names>
</name>
</person-group>
<year>2011</year>
<article-title>Hierarchical encoding in visual working memory: ensemble statistics bias memory for individual items.</article-title>
<source>Psych Science</source>
<volume>22</volume>
<fpage>384</fpage>
<lpage>392</lpage>
</element-citation>
</ref>
<ref id="pone.0040216-Posner1">
<label>9</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Posner</surname>
<given-names>MI</given-names>
</name>
</person-group>
<year>1980</year>
<article-title>Orienting of attention.</article-title>
<source>Q J Exp Psychol</source>
<volume>32</volume>
<fpage>3</fpage>
<lpage>25</lpage>
<pub-id pub-id-type="pmid">7367577</pub-id>
</element-citation>
</ref>
<ref id="pone.0040216-Pestilli1">
<label>10</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pestilli</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Carrasco</surname>
<given-names>M</given-names>
</name>
</person-group>
<year>2005</year>
<article-title>Attention enhances contrast sensitivity at cued and impairs it at uncued locations.</article-title>
<source>Vision Research</source>
<volume>45</volume>
<fpage>1867</fpage>
<lpage>1875</lpage>
<pub-id pub-id-type="pmid">15797776</pub-id>
</element-citation>
</ref>
<ref id="pone.0040216-VandenBerg1">
<label>11</label>
<element-citation publication-type="other">
<person-group person-group-type="author">
<name>
<surname>Van den Berg</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Shin</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Chou</surname>
<given-names>W-C</given-names>
</name>
<name>
<surname>George</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Ma</surname>
<given-names>WJ</given-names>
</name>
</person-group>
<year>2012</year>
<article-title>Variability in encoding precision accounts for visual short-term memory limitations.</article-title>
<source>Proc Natl Acad Sci U S A 109: published online May 11</source>
</element-citation>
</ref>
<ref id="pone.0040216-Goris1">
<label>12</label>
<element-citation publication-type="other">
<person-group person-group-type="author">
<name>
<surname>Goris</surname>
<given-names>RLT</given-names>
</name>
<name>
<surname>Simoncelli</surname>
<given-names>EP</given-names>
</name>
<name>
<surname>Movshon</surname>
<given-names>JA</given-names>
</name>
</person-group>
<year>2012</year>
<article-title>Using a doubly-stochastic model to analyze neuronal activity in the visual cortex. Cosyne Abstracts.</article-title>
<source>Salt Lake City</source>
</element-citation>
</ref>
<ref id="pone.0040216-Churchland1">
<label>13</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Churchland</surname>
<given-names>AK</given-names>
</name>
<name>
<surname>Kiani</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Chaudhuri</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>X-J</given-names>
</name>
<name>
<surname>Pouget</surname>
<given-names>A</given-names>
</name>
<etal></etal>
</person-group>
<year>2011</year>
<article-title>Variance as a signature of neural computations during decision-making.</article-title>
<source>Neuron</source>
<volume>69</volume>
<fpage>818</fpage>
<lpage>831</lpage>
<pub-id pub-id-type="pmid">21338889</pub-id>
</element-citation>
</ref>
<ref id="pone.0040216-Churchland2">
<label>14</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Churchland</surname>
<given-names>MM</given-names>
</name>
<name>
<surname>Yu</surname>
<given-names>BM</given-names>
</name>
<name>
<surname>Cunningham</surname>
<given-names>JP</given-names>
</name>
<name>
<surname>Sugrue</surname>
<given-names>LP</given-names>
</name>
<name>
<surname>Cohen</surname>
<given-names>MR</given-names>
</name>
<etal></etal>
</person-group>
<year>2010</year>
<article-title>Stimulus onset quenches neural variability: a widespread cortical phenomenon.</article-title>
<source>Nat Neurosci</source>
<volume>13</volume>
<fpage>369</fpage>
<lpage>378</lpage>
<pub-id pub-id-type="pmid">20173745</pub-id>
</element-citation>
</ref>
<ref id="pone.0040216-Cohen1">
<label>15</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cohen</surname>
<given-names>MR</given-names>
</name>
<name>
<surname>Maunsell</surname>
<given-names>JHR</given-names>
</name>
</person-group>
<year>2010</year>
<article-title>A neuronal population measure of attention predicts behavioral performance on individual trials.</article-title>
<source>J Neurosci</source>
<volume>30</volume>
<fpage>15241</fpage>
<lpage>15253</lpage>
<pub-id pub-id-type="pmid">21068329</pub-id>
</element-citation>
</ref>
<ref id="pone.0040216-Ma1">
<label>16</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ma</surname>
<given-names>WJ</given-names>
</name>
</person-group>
<year>2010</year>
<article-title>Signal detection theory, uncertainty, and Poisson-like population codes.</article-title>
<source>Vision Research</source>
<volume>50</volume>
<fpage>2308</fpage>
<lpage>2319</lpage>
<pub-id pub-id-type="pmid">20828581</pub-id>
</element-citation>
</ref>
<ref id="pone.0040216-Pouget1">
<label>17</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pouget</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Dayan</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Zemel</surname>
<given-names>RS</given-names>
</name>
</person-group>
<year>2003</year>
<article-title>Inference and computation with population codes.</article-title>
<source>Annual Review of Neuroscience</source>
<volume>26</volume>
<fpage>381</fpage>
<lpage>410</lpage>
</element-citation>
</ref>
<ref id="pone.0040216-Ma2">
<label>18</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ma</surname>
<given-names>WJ</given-names>
</name>
<name>
<surname>Beck</surname>
<given-names>JM</given-names>
</name>
<name>
<surname>Latham</surname>
<given-names>PE</given-names>
</name>
<name>
<surname>Pouget</surname>
<given-names>A</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>Bayesian inference with probabilistic population codes.</article-title>
<source>Nat Neurosci</source>
<volume>9</volume>
<fpage>1432</fpage>
<lpage>1438</lpage>
<pub-id pub-id-type="pmid">17057707</pub-id>
</element-citation>
</ref>
<ref id="pone.0040216-Ernst1">
<label>19</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ernst</surname>
<given-names>MO</given-names>
</name>
<name>
<surname>Banks</surname>
<given-names>MS</given-names>
</name>
</person-group>
<year>2002</year>
<article-title>Humans integrate visual and haptic information in a statistically optimal fashion.</article-title>
<source>Nature</source>
<volume>415</volume>
<fpage>429</fpage>
<lpage>433</lpage>
<pub-id pub-id-type="pmid">11807554</pub-id>
</element-citation>
</ref>
<ref id="pone.0040216-Knill1">
<label>20</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Knill</surname>
<given-names>DC</given-names>
</name>
<name>
<surname>Pouget</surname>
<given-names>A</given-names>
</name>
</person-group>
<year>2004</year>
<article-title>The Bayesian brain: the role of uncertainty in neural coding and computation.</article-title>
<source>Trends Neurosci</source>
<volume>27</volume>
<fpage>712</fpage>
<lpage>719</lpage>
<pub-id pub-id-type="pmid">15541511</pub-id>
</element-citation>
</ref>
<ref id="pone.0040216-Alais1">
<label>21</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Alais</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Burr</surname>
<given-names>D</given-names>
</name>
</person-group>
<year>2004</year>
<article-title>The ventriloquist effect results from near-optimal bimodal integration.</article-title>
<source>Curr Biol</source>
<volume>14</volume>
<fpage>257</fpage>
<lpage>262</lpage>
<pub-id pub-id-type="pmid">14761661</pub-id>
</element-citation>
</ref>
<ref id="pone.0040216-Ma3">
<label>22</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ma</surname>
<given-names>WJ</given-names>
</name>
<name>
<surname>Navalpakkam</surname>
<given-names>V</given-names>
</name>
<name>
<surname>Beck</surname>
<given-names>JM</given-names>
</name>
<name>
<surname>Van den Berg</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Pouget</surname>
<given-names>A</given-names>
</name>
</person-group>
<year>2011</year>
<article-title>Behavior and neural basis of near-optimal visual search.</article-title>
<source>Nat Neurosci</source>
<volume>14</volume>
<fpage>783</fpage>
<lpage>790</lpage>
<pub-id pub-id-type="pmid">21552276</pub-id>
</element-citation>
</ref>
<ref id="pone.0040216-VandenBerg2">
<label>23</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Van den Berg</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Vogel</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Josic</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Ma</surname>
<given-names>WJ</given-names>
</name>
</person-group>
<year>2011</year>
<article-title>Optimal inference of sameness.</article-title>
<source>Proc Natl Acad Sci U S A</source>
<volume>109</volume>
<fpage>3178</fpage>
<lpage>3183</lpage>
<pub-id pub-id-type="pmid">22315400</pub-id>
</element-citation>
</ref>
<ref id="pone.0040216-French1">
<label>24</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>French</surname>
<given-names>RS</given-names>
</name>
</person-group>
<year>1953</year>
<article-title>The discrimination of dot patterns as a function of number and average separation of dots.</article-title>
<source>J Exp Psychol</source>
<volume>46</volume>
<fpage>1</fpage>
<lpage>9</lpage>
<pub-id pub-id-type="pmid">13069660</pub-id>
</element-citation>
</ref>
<ref id="pone.0040216-Pashler1">
<label>25</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pashler</surname>
<given-names>H</given-names>
</name>
</person-group>
<year>1988</year>
<article-title>Familiarity and visual change detection.</article-title>
<source>Percept Psychophys</source>
<volume>44</volume>
<fpage>369</fpage>
<lpage>378</lpage>
<pub-id pub-id-type="pmid">3226885</pub-id>
</element-citation>
</ref>
<ref id="pone.0040216-Phillips1">
<label>26</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Phillips</surname>
<given-names>WA</given-names>
</name>
</person-group>
<year>1974</year>
<article-title>On the distinction between sensory storage and short-term visual memory.</article-title>
<source>Percept Psychophys</source>
<volume>16</volume>
<fpage>283</fpage>
<lpage>290</lpage>
</element-citation>
</ref>
<ref id="pone.0040216-Yule1">
<label>27</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Yule</surname>
<given-names>GU</given-names>
</name>
</person-group>
<year>1903</year>
<article-title>Notes on the theory of association of attributes in statistics.</article-title>
<source>Biometrika</source>
<volume>2</volume>
<fpage>121</fpage>
<lpage>134</lpage>
</element-citation>
</ref>
<ref id="pone.0040216-Ma4">
<label>28</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ma</surname>
<given-names>WJ</given-names>
</name>
<name>
<surname>Huang</surname>
<given-names>W</given-names>
</name>
</person-group>
<year>2009</year>
<article-title>No capacity limit in attentional tracking: Evidence for probabilistic inference under a resource constraint.</article-title>
<source>J Vision 9</source>
<volume>3</volume>
<fpage>1</fpage>
<lpage>30</lpage>
</element-citation>
</ref>
<ref id="pone.0040216-Nolte1">
<label>29</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nolte</surname>
<given-names>LW</given-names>
</name>
<name>
<surname>Jaarsma</surname>
<given-names>D</given-names>
</name>
</person-group>
<year>1966</year>
<article-title>More on the detection of one of
<italic>M</italic>
orthogonal signals.</article-title>
<source>J Acoust Soc Am</source>
<volume>41</volume>
<fpage>497</fpage>
<lpage>505</lpage>
</element-citation>
</ref>
<ref id="pone.0040216-Palmer1">
<label>30</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Palmer</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Verghese</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Pavel</surname>
<given-names>M</given-names>
</name>
</person-group>
<year>2000</year>
<article-title>The psychophysics of visual search.</article-title>
<source>Vision Research</source>
<volume>40</volume>
<fpage>1227</fpage>
<lpage>1268</lpage>
<pub-id pub-id-type="pmid">10788638</pub-id>
</element-citation>
</ref>
<ref id="pone.0040216-Eckstein1">
<label>31</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Eckstein</surname>
<given-names>MP</given-names>
</name>
<name>
<surname>Thomas</surname>
<given-names>JP</given-names>
</name>
<name>
<surname>Palmer</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Shimozaki</surname>
<given-names>SS</given-names>
</name>
</person-group>
<year>2000</year>
<article-title>A signal detection model predicts the effects of set size on visual search accuracy for feature, conjunction, triple conjunction, and disjunction displays.</article-title>
<source>Percept Psychophys</source>
<volume>62</volume>
<fpage>425</fpage>
<lpage>451</lpage>
<pub-id pub-id-type="pmid">10909235</pub-id>
</element-citation>
</ref>
<ref id="pone.0040216-Baldassi1">
<label>32</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Baldassi</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Verghese</surname>
<given-names>P</given-names>
</name>
</person-group>
<year>2002</year>
<article-title>Comparing integration rules in visual search.</article-title>
<source>J Vision</source>
<volume>2</volume>
<fpage>559</fpage>
<lpage>570</lpage>
</element-citation>
</ref>
<ref id="pone.0040216-Wilken1">
<label>33</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wilken</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Ma</surname>
<given-names>WJ</given-names>
</name>
</person-group>
<year>2004</year>
<article-title>A detection theory account of change detection.</article-title>
<source>J Vision</source>
<volume>4</volume>
<fpage>1120</fpage>
<lpage>1135</lpage>
</element-citation>
</ref>
<ref id="pone.0040216-MacKay1">
<label>34</label>
<element-citation publication-type="other">
<person-group person-group-type="author">
<name>
<surname>MacKay</surname>
<given-names>DJ</given-names>
</name>
</person-group>
<year>2003</year>
<article-title>Information theory, inference, and learning algorithms.</article-title>
<source>Cambridge, UK: Cambridge University Press</source>
</element-citation>
</ref>
<ref id="pone.0040216-Girshick1">
<label>35</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Girshick</surname>
<given-names>AR</given-names>
</name>
<name>
<surname>Landy</surname>
<given-names>MS</given-names>
</name>
<name>
<surname>Simoncelli</surname>
<given-names>EP</given-names>
</name>
</person-group>
<year>2011</year>
<article-title>Cardinal rules: visual orientation perception reflects knowledge of environmental statistics.</article-title>
<source>Nat Neurosci</source>
<volume>14</volume>
<fpage>926</fpage>
<lpage>932</lpage>
<pub-id pub-id-type="pmid">21642976</pub-id>
</element-citation>
</ref>
<ref id="pone.0040216-Natarajan1">
<label>36</label>
<element-citation publication-type="other">
<person-group person-group-type="author">
<name>
<surname>Natarajan</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Murray</surname>
<given-names>I</given-names>
</name>
<name>
<surname>Shams</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Zemel</surname>
<given-names>RS</given-names>
</name>
</person-group>
<year>2009</year>
<article-title>Characterizing response behavior in multisensory perception with conflicting cues. Adv Neural Information Processing Systems 21.</article-title>
<source>Cambridge, MA: MIT Press</source>
</element-citation>
</ref>
<ref id="pone.0040216-Girshick2">
<label>37</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Girshick</surname>
<given-names>AR</given-names>
</name>
<name>
<surname>Banks</surname>
<given-names>MS</given-names>
</name>
</person-group>
<year>2009</year>
<article-title>Probabilistic combination of slant information: weighted averaging and robustness as optimal percepts.</article-title>
<source>J Vision 9</source>
<volume>8</volume>
<fpage>1</fpage>
<lpage>20</lpage>
</element-citation>
</ref>
<ref id="pone.0040216-Seung1">
<label>38</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Seung</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Sompolinsky</surname>
<given-names>H</given-names>
</name>
</person-group>
<year>1993</year>
<article-title>Simple model for reading neuronal population codes.</article-title>
<source>Proceedings of National Academy of Sciences USA</source>
<volume>90</volume>
<fpage>10749</fpage>
<lpage>10753</lpage>
</element-citation>
</ref>
<ref id="pone.0040216-Saproo1">
<label>39</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Saproo</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Serences</surname>
<given-names>JT</given-names>
</name>
</person-group>
<year>2010</year>
<article-title>Spatial attention improves the quality of population codes in human visual cortex.</article-title>
<source>J Neurophys</source>
<volume>104</volume>
<fpage>885</fpage>
<lpage>895</lpage>
</element-citation>
</ref>
<ref id="pone.0040216-Desimone1">
<label>40</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Desimone</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Duncan</surname>
<given-names>J</given-names>
</name>
</person-group>
<year>1995</year>
<article-title>Neural mechanisms of selective visual attention.</article-title>
<source>Annual Review of Neuroscience</source>
<volume>18</volume>
<fpage>193</fpage>
<lpage>222</lpage>
</element-citation>
</ref>
<ref id="pone.0040216-Connor1">
<label>41</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Connor</surname>
<given-names>CE</given-names>
</name>
<name>
<surname>Gallant</surname>
<given-names>JL</given-names>
</name>
<name>
<surname>Preddie</surname>
<given-names>DC</given-names>
</name>
<name>
<surname>Van Essen</surname>
<given-names>DC</given-names>
</name>
</person-group>
<year>1996</year>
<article-title>Responses in area V4 depend on the spatial relationship between stimulus and attention.</article-title>
<source>J Neurophysiol</source>
<volume>75</volume>
<fpage>1306</fpage>
<lpage>1308</lpage>
<pub-id pub-id-type="pmid">8867139</pub-id>
</element-citation>
</ref>
<ref id="pone.0040216-McAdams1">
<label>42</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>McAdams</surname>
<given-names>CJ</given-names>
</name>
<name>
<surname>Maunsell</surname>
<given-names>JH</given-names>
</name>
</person-group>
<year>1999</year>
<article-title>Effects of attention on the reliability of individual neurons in monkey visual cortex.</article-title>
<source>Neuron</source>
<volume>23</volume>
<fpage>765</fpage>
<lpage>773</lpage>
<pub-id pub-id-type="pmid">10482242</pub-id>
</element-citation>
</ref>
<ref id="pone.0040216-Sadaghiani1">
<label>43</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sadaghiani</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Hesselmann</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Friston</surname>
<given-names>KJ</given-names>
</name>
<name>
<surname>Kleinschmidt</surname>
<given-names>A</given-names>
</name>
</person-group>
<year>2010</year>
<article-title>The relation of ongoing brain activity, evoked neural responses, and cognition.</article-title>
<source>Front Syst Neurosci</source>
<volume>4</volume>
<fpage>20</fpage>
<pub-id pub-id-type="pmid">20631840</pub-id>
</element-citation>
</ref>
<ref id="pone.0040216-Sapir1">
<label>44</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sapir</surname>
<given-names>A</given-names>
</name>
<name>
<surname>d’Avossa</surname>
<given-names>G</given-names>
</name>
<name>
<surname>McAvoy</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Shulman</surname>
<given-names>GL</given-names>
</name>
<name>
<surname>Corbetta</surname>
<given-names>M</given-names>
</name>
</person-group>
<year>2005</year>
<article-title>Brain signals for spatial attention predict performance in a motion discrimination task.</article-title>
<source>Proc Natl Acad Sci U S A</source>
<volume>102</volume>
<fpage>17810</fpage>
<lpage>17815</lpage>
<pub-id pub-id-type="pmid">16306268</pub-id>
</element-citation>
</ref>
<ref id="pone.0040216-Reddy1">
<label>45</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Reddy</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Quian Quiroga</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Wilken</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Koch</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Fried</surname>
<given-names>I</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>A single-neuron correlate of change detection and change blindness in the human medial temporal lobe.</article-title>
<source>Curr Biol</source>
<volume>2006</volume>
<fpage>20</fpage>
</element-citation>
</ref>
<ref id="pone.0040216-Eng1">
<label>46</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Eng</surname>
<given-names>HY</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Jiang</surname>
<given-names>Y</given-names>
</name>
</person-group>
<year>2005</year>
<article-title>Visual working memory for simple and complex visual stimuli.</article-title>
<source>Psychon B Rev</source>
<volume>12</volume>
<fpage>1127</fpage>
<lpage>1133</lpage>
</element-citation>
</ref>
<ref id="pone.0040216-Cowan1">
<label>47</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cowan</surname>
<given-names>N</given-names>
</name>
</person-group>
<year>2001</year>
<article-title>The magical number 4 in short-term memory: a reconsideration of mental storage capacity.</article-title>
<source>Behav Brain Sci</source>
<volume>24</volume>
<fpage>87</fpage>
<lpage>114</lpage>
<pub-id pub-id-type="pmid">11515286</pub-id>
</element-citation>
</ref>
<ref id="pone.0040216-Luck1">
<label>48</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Luck</surname>
<given-names>SJ</given-names>
</name>
<name>
<surname>Vogel</surname>
<given-names>EK</given-names>
</name>
</person-group>
<year>1997</year>
<article-title>The capacity of visual working memory for features and conjunctions.</article-title>
<source>Nature</source>
<volume>390</volume>
<fpage>279</fpage>
<lpage>281</lpage>
<pub-id pub-id-type="pmid">9384378</pub-id>
</element-citation>
</ref>
<ref id="pone.0040216-Whiteley1">
<label>49</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Whiteley</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Sahani</surname>
<given-names>M</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>Implicit knowledge of visual uncertainty guides decisions with asymmetric outcomes.</article-title>
<source>J Vision</source>
<volume>8</volume>
<fpage>1</fpage>
<lpage>15</lpage>
</element-citation>
</ref>
</ref-list>
</back>
</pmc>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Pmc/Curation
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 002209 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd -nk 002209 | SxmlIndent | more

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Pmc
   |étape=   Curation
   |type=    RBID
   |clé=     PMC:3387023
   |texte=   Probabilistic Computation in Human Perception under Variability in Encoding Precision
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Curation/RBID.i   -Sk "pubmed:22768258" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024