Exploration server on haptic devices

Warning: this site is under development!
Warning: this site was generated by computational means from raw corpora.
The information has therefore not been validated.

Optimality of Human Contour Integration

Internal identifier: 002158 (Pmc/Curation); previous: 002157; next: 002159

Optimality of Human Contour Integration

Authors: Udo A. Ernst [Germany]; Sunita Mandon [Germany]; Nadja Schinkel-Bielefeld [Germany]; Simon D. Neitzel [Germany]; Andreas K. Kreiter [Germany]; Klaus R. Pawelzik [Germany]

Source:

RBID: PMC:3360074

Abstract

For processing and segmenting visual scenes, the brain is required to combine a multitude of features and sensory channels. It is neither known if these complex tasks involve optimal integration of information, nor according to which objectives computations might be performed. Here, we investigate if optimal inference can explain contour integration in human subjects. We performed experiments where observers detected contours of curvilinearly aligned edge configurations embedded into randomly oriented distractors. The key feature of our framework is to use a generative process for creating the contours, for which it is possible to derive a class of ideal detection models. This allowed us to compare human detection for contours with different statistical properties to the corresponding ideal detection models for the same stimuli. We then subjected the detection models to realistic constraints and required them to reproduce human decisions for every stimulus as well as possible. By independently varying the four model parameters, we identify a single detection model which quantitatively captures all correlations of human decision behaviour for more than 2000 stimuli from 42 contour ensembles with greatly varying statistical properties. This model reveals specific interactions between edges closely matching independent findings from physiology and psychophysics. These interactions imply a statistics of contours for which edge stimuli are indeed optimally integrated by the visual system, with the objective of inferring the presence of contours in cluttered scenes. The recurrent algorithm of our model makes testable predictions about the temporal dynamics of neuronal populations engaged in contour integration, and it suggests a strong directionality of the underlying functional anatomy.


Url:
DOI: 10.1371/journal.pcbi.1002520
PubMed: 22654653
PubMed Central: 3360074


The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Optimality of Human Contour Integration</title>
<author>
<name sortKey="Ernst, Udo A" sort="Ernst, Udo A" uniqKey="Ernst U" first="Udo A." last="Ernst">Udo A. Ernst</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<addr-line>Department of Neurophysics, Institute for Theoretical Physics, University of Bremen, Bremen, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Department of Neurophysics, Institute for Theoretical Physics, University of Bremen, Bremen</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Mandon, Sunita" sort="Mandon, Sunita" uniqKey="Mandon S" first="Sunita" last="Mandon">Sunita Mandon</name>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<addr-line>Department of Theoretical Neurobiology, Institute for Brain Research, University of Bremen, Bremen, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Department of Theoretical Neurobiology, Institute for Brain Research, University of Bremen, Bremen</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Schinkel Ielefeld, Nadja" sort="Schinkel Ielefeld, Nadja" uniqKey="Schinkel Ielefeld N" first="Nadja" last="Schinkel Ielefeld">Nadja Schinkel Ielefeld</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<addr-line>Department of Neurophysics, Institute for Theoretical Physics, University of Bremen, Bremen, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Department of Neurophysics, Institute for Theoretical Physics, University of Bremen, Bremen</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Neitzel, Simon D" sort="Neitzel, Simon D" uniqKey="Neitzel S" first="Simon D." last="Neitzel">Simon D. Neitzel</name>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<addr-line>Department of Theoretical Neurobiology, Institute for Brain Research, University of Bremen, Bremen, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Department of Theoretical Neurobiology, Institute for Brain Research, University of Bremen, Bremen</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Kreiter, Andreas K" sort="Kreiter, Andreas K" uniqKey="Kreiter A" first="Andreas K." last="Kreiter">Andreas K. Kreiter</name>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<addr-line>Department of Theoretical Neurobiology, Institute for Brain Research, University of Bremen, Bremen, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Department of Theoretical Neurobiology, Institute for Brain Research, University of Bremen, Bremen</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Pawelzik, Klaus R" sort="Pawelzik, Klaus R" uniqKey="Pawelzik K" first="Klaus R." last="Pawelzik">Klaus R. Pawelzik</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<addr-line>Department of Neurophysics, Institute for Theoretical Physics, University of Bremen, Bremen, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Department of Neurophysics, Institute for Theoretical Physics, University of Bremen, Bremen</wicri:regionArea>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">22654653</idno>
<idno type="pmc">3360074</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3360074</idno>
<idno type="RBID">PMC:3360074</idno>
<idno type="doi">10.1371/journal.pcbi.1002520</idno>
<date when="2012">2012</date>
<idno type="wicri:Area/Pmc/Corpus">002158</idno>
<idno type="wicri:Area/Pmc/Curation">002158</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Optimality of Human Contour Integration</title>
<author>
<name sortKey="Ernst, Udo A" sort="Ernst, Udo A" uniqKey="Ernst U" first="Udo A." last="Ernst">Udo A. Ernst</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<addr-line>Department of Neurophysics, Institute for Theoretical Physics, University of Bremen, Bremen, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Department of Neurophysics, Institute for Theoretical Physics, University of Bremen, Bremen</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Mandon, Sunita" sort="Mandon, Sunita" uniqKey="Mandon S" first="Sunita" last="Mandon">Sunita Mandon</name>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<addr-line>Department of Theoretical Neurobiology, Institute for Brain Research, University of Bremen, Bremen, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Department of Theoretical Neurobiology, Institute for Brain Research, University of Bremen, Bremen</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Schinkel Ielefeld, Nadja" sort="Schinkel Ielefeld, Nadja" uniqKey="Schinkel Ielefeld N" first="Nadja" last="Schinkel Ielefeld">Nadja Schinkel Ielefeld</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<addr-line>Department of Neurophysics, Institute for Theoretical Physics, University of Bremen, Bremen, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Department of Neurophysics, Institute for Theoretical Physics, University of Bremen, Bremen</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Neitzel, Simon D" sort="Neitzel, Simon D" uniqKey="Neitzel S" first="Simon D." last="Neitzel">Simon D. Neitzel</name>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<addr-line>Department of Theoretical Neurobiology, Institute for Brain Research, University of Bremen, Bremen, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Department of Theoretical Neurobiology, Institute for Brain Research, University of Bremen, Bremen</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Kreiter, Andreas K" sort="Kreiter, Andreas K" uniqKey="Kreiter A" first="Andreas K." last="Kreiter">Andreas K. Kreiter</name>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<addr-line>Department of Theoretical Neurobiology, Institute for Brain Research, University of Bremen, Bremen, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Department of Theoretical Neurobiology, Institute for Brain Research, University of Bremen, Bremen</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Pawelzik, Klaus R" sort="Pawelzik, Klaus R" uniqKey="Pawelzik K" first="Klaus R." last="Pawelzik">Klaus R. Pawelzik</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<addr-line>Department of Neurophysics, Institute for Theoretical Physics, University of Bremen, Bremen, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Department of Neurophysics, Institute for Theoretical Physics, University of Bremen, Bremen</wicri:regionArea>
</affiliation>
</author>
</analytic>
<series>
<title level="j">PLoS Computational Biology</title>
<idno type="ISSN">1553-734X</idno>
<idno type="eISSN">1553-7358</idno>
<imprint>
<date when="2012">2012</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>For processing and segmenting visual scenes, the brain is required to combine a multitude of features and sensory channels. It is neither known if these complex tasks involve optimal integration of information, nor according to which objectives computations might be performed. Here, we investigate if optimal inference can explain contour integration in human subjects. We performed experiments where observers detected contours of curvilinearly aligned edge configurations embedded into randomly oriented distractors. The key feature of our framework is to use a generative process for creating the contours, for which it is possible to derive a class of ideal detection models. This allowed us to compare human detection for contours with different statistical properties to the corresponding ideal detection models for the same stimuli. We then subjected the detection models to realistic constraints and required them to reproduce human decisions for every stimulus as well as possible. By independently varying the four model parameters, we identify a single detection model which quantitatively captures all correlations of human decision behaviour for more than 2000 stimuli from 42 contour ensembles with greatly varying statistical properties. This model reveals specific interactions between edges closely matching independent findings from physiology and psychophysics. These interactions imply a statistics of contours for which edge stimuli are indeed optimally integrated by the visual system, with the objective of inferring the presence of contours in cluttered scenes. The recurrent algorithm of our model makes testable predictions about the temporal dynamics of neuronal populations engaged in contour integration, and it suggests a strong directionality of the underlying functional anatomy.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Hess, R" uniqKey="Hess R">R Hess</name>
</author>
<author>
<name sortKey="Field, D" uniqKey="Field D">D Field</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kovacs, I" uniqKey="Kovacs I">I Kovacs</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Graham, N" uniqKey="Graham N">N Graham</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Field, Dj" uniqKey="Field D">DJ Field</name>
</author>
<author>
<name sortKey="Hayes, A" uniqKey="Hayes A">A Hayes</name>
</author>
<author>
<name sortKey="Hess, Rf" uniqKey="Hess R">RF Hess</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Strother, L" uniqKey="Strother L">L Strother</name>
</author>
<author>
<name sortKey="Kubovy, M" uniqKey="Kubovy M">M Kubovy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="May, K" uniqKey="May K">K May</name>
</author>
<author>
<name sortKey="Hess, R" uniqKey="Hess R">R Hess</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dakin, S" uniqKey="Dakin S">S Dakin</name>
</author>
<author>
<name sortKey="Hess, R" uniqKey="Hess R">R Hess</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wertheimer, M" uniqKey="Wertheimer M">M Wertheimer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Koffka, K" uniqKey="Koffka K">K Koffka</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Von Helmholtz, H" uniqKey="Von Helmholtz H">H von Helmholtz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ernst, Mo" uniqKey="Ernst M">MO Ernst</name>
</author>
<author>
<name sortKey="Banks, Ms" uniqKey="Banks M">MS Banks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kording, Kp" uniqKey="Kording K">KP Körding</name>
</author>
<author>
<name sortKey="Wolpert, Dm" uniqKey="Wolpert D">DM Wolpert</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Williams, L" uniqKey="Williams L">L Williams</name>
</author>
<author>
<name sortKey="Thornber, K" uniqKey="Thornber K">K Thornber</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mumford, D" uniqKey="Mumford D">D Mumford</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Geisler, W" uniqKey="Geisler W">W Geisler</name>
</author>
<author>
<name sortKey="Perry, J" uniqKey="Perry J">J Perry</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Geisler, W" uniqKey="Geisler W">W Geisler</name>
</author>
<author>
<name sortKey="Perry, J" uniqKey="Perry J">J Perry</name>
</author>
<author>
<name sortKey="Super, B" uniqKey="Super B">B Super</name>
</author>
<author>
<name sortKey="Gallogly, D" uniqKey="Gallogly D">D Gallogly</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Grigorescu, C" uniqKey="Grigorescu C">C Grigorescu</name>
</author>
<author>
<name sortKey="Petkov, N" uniqKey="Petkov N">N Petkov</name>
</author>
<author>
<name sortKey="Westenberg, Ma" uniqKey="Westenberg M">MA Westenberg</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Martin, D" uniqKey="Martin D">D Martin</name>
</author>
<author>
<name sortKey="Fowlkes, C" uniqKey="Fowlkes C">C Fowlkes</name>
</author>
<author>
<name sortKey="Tal, D" uniqKey="Tal D">D Tal</name>
</author>
<author>
<name sortKey="Malik, J" uniqKey="Malik J">J Malik</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Parent, P" uniqKey="Parent P">P Parent</name>
</author>
<author>
<name sortKey="Zucker, S" uniqKey="Zucker S">S Zucker</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nugent, Ak" uniqKey="Nugent A">AK Nugent</name>
</author>
<author>
<name sortKey="Keswani, Rn" uniqKey="Keswani R">RN Keswani</name>
</author>
<author>
<name sortKey="Woods, Rl" uniqKey="Woods R">RL Woods</name>
</author>
<author>
<name sortKey="Peli, E" uniqKey="Peli E">E Peli</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lovell, P" uniqKey="Lovell P">P Lovell</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hess, Rf" uniqKey="Hess R">RF Hess</name>
</author>
<author>
<name sortKey="Dakin, Sc" uniqKey="Dakin S">SC Dakin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kapadia, Mk" uniqKey="Kapadia M">MK Kapadia</name>
</author>
<author>
<name sortKey="Westheimer, G" uniqKey="Westheimer G">G Westheimer</name>
</author>
<author>
<name sortKey="Gilbert, Cd" uniqKey="Gilbert C">CD Gilbert</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Foley, Jm" uniqKey="Foley J">JM Foley</name>
</author>
<author>
<name sortKey="Varadharajan, S" uniqKey="Varadharajan S">S Varadharajan</name>
</author>
<author>
<name sortKey="Koh, Cc" uniqKey="Koh C">CC Koh</name>
</author>
<author>
<name sortKey="Farias, Mcq" uniqKey="Farias M">MCQ Farias</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cowey, A" uniqKey="Cowey A">A Cowey</name>
</author>
<author>
<name sortKey="Rolls, E" uniqKey="Rolls E">E Rolls</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schinkel, N" uniqKey="Schinkel N">N Schinkel</name>
</author>
<author>
<name sortKey="Pawelzik, Kr" uniqKey="Pawelzik K">KR Pawelzik</name>
</author>
<author>
<name sortKey="Ernst, Ua" uniqKey="Ernst U">UA Ernst</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schinkel Bielefeld, N" uniqKey="Schinkel Bielefeld N">N Schinkel-Bielefeld</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hansen, A" uniqKey="Hansen A">A Hansen</name>
</author>
<author>
<name sortKey="Neumann, H" uniqKey="Neumann H">H Neumann</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Li, Z" uniqKey="Li Z">Z Li</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mandon, S" uniqKey="Mandon S">S Mandon</name>
</author>
<author>
<name sortKey="Kreiter, Ak" uniqKey="Kreiter A">AK Kreiter</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mathes, B" uniqKey="Mathes B">B Mathes</name>
</author>
<author>
<name sortKey="Fahle, M" uniqKey="Fahle M">M Fahle</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Beaudot, Wha" uniqKey="Beaudot W">WHA Beaudot</name>
</author>
<author>
<name sortKey="Mullen, Kt" uniqKey="Mullen K">KT Mullen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Beaudot, Wha" uniqKey="Beaudot W">WHA Beaudot</name>
</author>
<author>
<name sortKey="Mullen, Kt" uniqKey="Mullen K">KT Mullen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Persike, M" uniqKey="Persike M">M Persike</name>
</author>
<author>
<name sortKey="Olzak, L" uniqKey="Olzak L">L Olzak</name>
</author>
<author>
<name sortKey="Meinhardt, G" uniqKey="Meinhardt G">G Meinhardt</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Persike, M" uniqKey="Persike M">M Persike</name>
</author>
<author>
<name sortKey="Meinhardt, G" uniqKey="Meinhardt G">G Meinhardt</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Braun, J" uniqKey="Braun J">J Braun</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bosking, Wh" uniqKey="Bosking W">WH Bosking</name>
</author>
<author>
<name sortKey="Zhang, Y" uniqKey="Zhang Y">Y Zhang</name>
</author>
<author>
<name sortKey="Schofield, B" uniqKey="Schofield B">B Schofield</name>
</author>
<author>
<name sortKey="Fitzpatrick, D" uniqKey="Fitzpatrick D">D Fitzpatrick</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chisum, Hj" uniqKey="Chisum H">HJ Chisum</name>
</author>
<author>
<name sortKey="Mooser, F" uniqKey="Mooser F">F Mooser</name>
</author>
<author>
<name sortKey="Fitzpatrick, D" uniqKey="Fitzpatrick D">D Fitzpatrick</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stettler, D" uniqKey="Stettler D">D Stettler</name>
</author>
<author>
<name sortKey="Das, A" uniqKey="Das A">A Das</name>
</author>
<author>
<name sortKey="Bennett, J" uniqKey="Bennett J">J Bennett</name>
</author>
<author>
<name sortKey="Gilbert, C" uniqKey="Gilbert C">C Gilbert</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Shmuel, A" uniqKey="Shmuel A">A Shmuel</name>
</author>
<author>
<name sortKey="Korman, M" uniqKey="Korman M">M Korman</name>
</author>
<author>
<name sortKey="Sterkin, A" uniqKey="Sterkin A">A Sterkin</name>
</author>
<author>
<name sortKey="Harel, M" uniqKey="Harel M">M Harel</name>
</author>
<author>
<name sortKey="Ullman, S" uniqKey="Ullman S">S Ullman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sillito, A" uniqKey="Sillito A">A Sillito</name>
</author>
<author>
<name sortKey="Grieve, K" uniqKey="Grieve K">K Grieve</name>
</author>
<author>
<name sortKey="Jones, H" uniqKey="Jones H">H Jones</name>
</author>
<author>
<name sortKey="Cudeiro, J" uniqKey="Cudeiro J">J Cudeiro</name>
</author>
<author>
<name sortKey="Davis, J" uniqKey="Davis J">J Davis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Levitt, Jb" uniqKey="Levitt J">JB Levitt</name>
</author>
<author>
<name sortKey="Lund, Js" uniqKey="Lund J">JS Lund</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kapadia, M" uniqKey="Kapadia M">M Kapadia</name>
</author>
<author>
<name sortKey="Ito, M" uniqKey="Ito M">M Ito</name>
</author>
<author>
<name sortKey="Gilbert, C" uniqKey="Gilbert C">C Gilbert</name>
</author>
<author>
<name sortKey="Westheimer, G" uniqKey="Westheimer G">G Westheimer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Li, W" uniqKey="Li W">W Li</name>
</author>
<author>
<name sortKey="Piech, V" uniqKey="Piech V">V Piëch</name>
</author>
<author>
<name sortKey="Gilbert, Cd" uniqKey="Gilbert C">CD Gilbert</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Li, W" uniqKey="Li W">W Li</name>
</author>
<author>
<name sortKey="Piech, V" uniqKey="Piech V">V Piëch</name>
</author>
<author>
<name sortKey="Gilbert, Cd" uniqKey="Gilbert C">CD Gilbert</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Maertens, M" uniqKey="Maertens M">M Maertens</name>
</author>
<author>
<name sortKey="Pollmann, S" uniqKey="Pollmann S">S Pollmann</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Polat, U" uniqKey="Polat U">U Polat</name>
</author>
<author>
<name sortKey="Mizobe, K" uniqKey="Mizobe K">K Mizobe</name>
</author>
<author>
<name sortKey="Pettet, Mw" uniqKey="Pettet M">MW Pettet</name>
</author>
<author>
<name sortKey="Kasamatsu, T" uniqKey="Kasamatsu T">T Kasamatsu</name>
</author>
<author>
<name sortKey="Norcia, Am" uniqKey="Norcia A">AM Norcia</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Walker, Ga" uniqKey="Walker G">GA Walker</name>
</author>
<author>
<name sortKey="Ohzawa, I" uniqKey="Ohzawa I">I Ohzawa</name>
</author>
<author>
<name sortKey="Freeman, Rd" uniqKey="Freeman R">RD Freeman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Abeles, M" uniqKey="Abeles M">M Abeles</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mcmanus, J" uniqKey="Mcmanus J">J McManus</name>
</author>
<author>
<name sortKey="Li, W" uniqKey="Li W">W Li</name>
</author>
<author>
<name sortKey="Gilbert, C" uniqKey="Gilbert C">C Gilbert</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Grossberg, S" uniqKey="Grossberg S">S Grossberg</name>
</author>
<author>
<name sortKey="Mingolla, E" uniqKey="Mingolla E">E Mingolla</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Carandini, M" uniqKey="Carandini M">M Carandini</name>
</author>
<author>
<name sortKey="Heeger, D" uniqKey="Heeger D">D Heeger</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Murphy, B" uniqKey="Murphy B">B Murphy</name>
</author>
<author>
<name sortKey="Miller, K" uniqKey="Miller K">K Miller</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Salinas, E" uniqKey="Salinas E">E Salinas</name>
</author>
<author>
<name sortKey="Abbott, L" uniqKey="Abbott L">L Abbott</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">PLoS Comput Biol</journal-id>
<journal-id journal-id-type="iso-abbrev">PLoS Comput. Biol</journal-id>
<journal-id journal-id-type="publisher-id">plos</journal-id>
<journal-id journal-id-type="pmc">ploscomp</journal-id>
<journal-title-group>
<journal-title>PLoS Computational Biology</journal-title>
</journal-title-group>
<issn pub-type="ppub">1553-734X</issn>
<issn pub-type="epub">1553-7358</issn>
<publisher>
<publisher-name>Public Library of Science</publisher-name>
<publisher-loc>San Francisco, USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">22654653</article-id>
<article-id pub-id-type="pmc">3360074</article-id>
<article-id pub-id-type="publisher-id">PCOMPBIOL-D-11-01957</article-id>
<article-id pub-id-type="doi">10.1371/journal.pcbi.1002520</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Research Article</subject>
</subj-group>
<subj-group subj-group-type="Discipline-v2">
<subject>Biology</subject>
<subj-group>
<subject>Neuroscience</subject>
<subj-group>
<subject>Computational Neuroscience</subject>
<subj-group>
<subject>Coding Mechanisms</subject>
</subj-group>
</subj-group>
<subj-group>
<subject>Sensory Perception</subject>
<subj-group>
<subject>Psychophysics</subject>
</subj-group>
</subj-group>
<subj-group>
<subject>Neural Networks</subject>
<subject>Sensory Systems</subject>
</subj-group>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Optimality of Human Contour Integration</article-title>
<alt-title alt-title-type="running-head">Optimality of Human Contour Integration</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Ernst</surname>
<given-names>Udo A.</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="corresp" rid="cor1">
<sup>*</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Mandon</surname>
<given-names>Sunita</given-names>
</name>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Schinkel–Bielefeld</surname>
<given-names>Nadja</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="author-notes" rid="fn1">
<sup>¤</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Neitzel</surname>
<given-names>Simon D.</given-names>
</name>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Kreiter</surname>
<given-names>Andreas K.</given-names>
</name>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Pawelzik</surname>
<given-names>Klaus R.</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
</contrib>
</contrib-group>
<aff id="aff1">
<label>1</label>
<addr-line>Department of Neurophysics, Institute for Theoretical Physics, University of Bremen, Bremen, Germany</addr-line>
</aff>
<aff id="aff2">
<label>2</label>
<addr-line>Department of Theoretical Neurobiology, Institute for Brain Research, University of Bremen, Bremen, Germany</addr-line>
</aff>
<contrib-group>
<contrib contrib-type="editor">
<name>
<surname>Sporns</surname>
<given-names>Olaf</given-names>
</name>
<role>Editor</role>
<xref ref-type="aff" rid="edit1"></xref>
</contrib>
</contrib-group>
<aff id="edit1">Indiana University, United States of America</aff>
<author-notes>
<corresp id="cor1">* E-mail:
<email>udo@neuro.uni-bremen.de</email>
</corresp>
<fn id="fn1" fn-type="current-aff">
<p>
<bold>¤:</bold>
Current address: Fraunhofer Institute for Integrated Circuits IIS, Erlangen, Germany</p>
</fn>
<fn fn-type="con">
<p>Conceived and designed the experiments: UAE SM SDN AKK KRP. Performed the experiments: SM SDN. Analyzed the data: UAE. Wrote the paper: UAE SM NSB KRP.</p>
</fn>
</author-notes>
<pub-date pub-type="collection">
<month>5</month>
<year>2012</year>
</pub-date>
<pmc-comment> Fake ppub added to accomodate plos workflow change from 03/2008 and 03/2009 </pmc-comment>
<pub-date pub-type="ppub">
<month>5</month>
<year>2012</year>
</pub-date>
<pub-date pub-type="epub">
<day>24</day>
<month>5</month>
<year>2012</year>
</pub-date>
<volume>8</volume>
<issue>5</issue>
<elocation-id>e1002520</elocation-id>
<history>
<date date-type="received">
<day>23</day>
<month>12</month>
<year>2011</year>
</date>
<date date-type="accepted">
<day>31</day>
<month>3</month>
<year>2012</year>
</date>
</history>
<permissions>
<copyright-statement>Ernst et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.</copyright-statement>
<copyright-year>2012</copyright-year>
</permissions>
<abstract>
<p>For processing and segmenting visual scenes, the brain is required to combine a multitude of features and sensory channels. It is neither known if these complex tasks involve optimal integration of information, nor according to which objectives computations might be performed. Here, we investigate if optimal inference can explain contour integration in human subjects. We performed experiments where observers detected contours of curvilinearly aligned edge configurations embedded into randomly oriented distractors. The key feature of our framework is to use a generative process for creating the contours, for which it is possible to derive a class of ideal detection models. This allowed us to compare human detection for contours with different statistical properties to the corresponding ideal detection models for the same stimuli. We then subjected the detection models to realistic constraints and required them to reproduce human decisions for every stimulus as well as possible. By independently varying the four model parameters, we identify a single detection model which quantitatively captures all correlations of human decision behaviour for more than 2000 stimuli from 42 contour ensembles with greatly varying statistical properties. This model reveals specific interactions between edges closely matching independent findings from physiology and psychophysics. These interactions imply a statistics of contours for which edge stimuli are indeed optimally integrated by the visual system, with the objective of inferring the presence of contours in cluttered scenes. The recurrent algorithm of our model makes testable predictions about the temporal dynamics of neuronal populations engaged in contour integration, and it suggests a strong directionality of the underlying functional anatomy.</p>
</abstract>
<abstract abstract-type="summary">
<title>Author Summary</title>
<p>Since Helmholtz put forward his concept that the brain performs inference on its sensory input for building an internal representation of the outside world, it is a puzzle for neuroscientific research whether visual perception can indeed be understood from first principles. An important part of vision is the integration of colinearly aligned edge elements into contours, which is required for the detection of object boundaries. We show that this visual function can fully be explained in a probabilistic model with a well–defined statistical objective. For this purpose, we developed a novel method to adapt models to correlations in human behaviour, and applied this technique to tightly link psychophysical experiments and numerical simulations of contour integration. The results not only demonstrate that complex neuronal computations can be elegantly described in terms of constrained probabilistic inference, but also reveal yet unknown neural mechanisms underlying early visual information processing.</p>
</abstract>
<counts>
<page-count count="17"></page-count>
</counts>
</article-meta>
</front>
<body>
<sec id="s1">
<title>Introduction</title>
<p>Human analysis and perception of complex natural scenes under greatly varying environmental conditions are robust and rapid. This remarkable ability of our brain relies on various interacting processes which can be assumed to build representations of visual objects from the information contained in localized image patches. A very elementary process in this context is contour integration, where sets of colinearly aligned line segments or edge elements are merged into coherent percepts of contours.</p>
<p>Contour integration is useful for identifying boundaries of potential objects in a visual scene, and therefore important for performing image segmentation and object recognition. Humans and primates are remarkably efficient in integrating contours even if the edges of a contour are not perfectly aligned or if parts of the contour are occluded by other image components. Thus uncovering theoretical principles and neural mechanisms underlying contour integration is an important step towards understanding visual information processing in the brain
<xref ref-type="bibr" rid="pcbi.1002520-Hess1">[1]</xref>
<xref ref-type="bibr" rid="pcbi.1002520-Graham1">[3]</xref>
.</p>
<p>Psychophysical studies have investigated the impact of various stimulus parameters on contour integration. For example, they quantified how contour integration performance depends on contour curvature
<xref ref-type="bibr" rid="pcbi.1002520-Field1">[4]</xref>
, on the distance between consecutive contour elements
<xref ref-type="bibr" rid="pcbi.1002520-Strother1">[5]</xref>
,
<xref ref-type="bibr" rid="pcbi.1002520-May1">[6]</xref>
, on the deviation from a perfect alignment of the oriented elements to the contour path
<xref ref-type="bibr" rid="pcbi.1002520-Field1">[4]</xref>
, or on the spatial frequency of the elements
<xref ref-type="bibr" rid="pcbi.1002520-Dakin1">[7]</xref>
.</p>
<p>The first attempt to put such observations into a coherent framework was made by a group of psychologists
<xref ref-type="bibr" rid="pcbi.1002520-Wertheimer1">[8]</xref>
,
<xref ref-type="bibr" rid="pcbi.1002520-Koffka1">[9]</xref>
. They formulated the Gestalt laws for describing the principles according to which the visual system groups local image features into coherent percepts. The corresponding principle for contour integration is termed the ‘law of good continuation’, stating that line segments which are aligned colinearly or curvilinearly are bound together. This idea was later formalized by introducing the ‘association field’ (AF)
<xref ref-type="bibr" rid="pcbi.1002520-Field1">[4]</xref>
, which specifies how strongly the visual system associates two line segments with a particular configuration of positions and orientations as belonging to one contour.</p>
<p>Ideally, a theory of contour integration should predict perceptual behaviour for arbitrary configurations of oriented image patches. Here, we explore if an approach based on ‘generative models’ can quantitatively predict human contour detection. Generative models derive from the classical perspective that considers perception as inference
<xref ref-type="bibr" rid="pcbi.1002520-vonHelmholtz1">[10]</xref>
. They are statistical models specifying how a stimulus might be generated from the presence or absence of particular elementary causes or objects in a scene. Knowing the generative process enables an observer (i.e., the brain) to perform inference on such a stimulus.</p>
<p>In the context of visual perception, this perspective has recently been shown to be useful for understanding and modeling multisensory cue integration
<xref ref-type="bibr" rid="pcbi.1002520-Ernst1">[11]</xref>
,
<xref ref-type="bibr" rid="pcbi.1002520-Krding1">[12]</xref>
. In these investigations, the objective of perception was specified by a particular task which essentially requires computations on only two sensory cues. In comparison, contour integration is far more sophisticated, performed on many sensory variables in parallel, and according to an objective which is not yet known in a quantitative, mathematical sense. A promising conceptual idea for closing this gap is given by the observation that the association field can be reinterpreted as a conditional link probability between two oriented line segments
<xref ref-type="bibr" rid="pcbi.1002520-Williams1">[13]</xref>
,
<xref ref-type="bibr" rid="pcbi.1002520-Mumford1">[14]</xref>
. This interpretation can be used to define a contour generation process that relies on similar conditional probabilities. Formulated as a generative model for contours, it yields a specific statistics of stimuli comprising oriented line segments. By inversion of this generative process, contour integration is now reduced to an optimal inference problem, namely the computation of the probabilities for an element to belong to a contour. A thorough formalization of this idea was performed by Williams and Thornber
<xref ref-type="bibr" rid="pcbi.1002520-Williams1">[13]</xref>
who used it to explain certain visual illusions.</p>
<p>The present work pursues an integrative approach linking theory, modeling and psychophysical experiments. It aims at explaining human contour integration and decision behaviour as optimal inference in a mathematically exact and quantitative manner (see
<xref ref-type="fig" rid="pcbi-1002520-g001">
<bold>Fig. 1</bold>
</xref>
). By extending the theoretical framework of Williams and Thornber
<xref ref-type="bibr" rid="pcbi.1002520-Williams1">[13]</xref>
, we define a class of generative models for contour integration from which we construct mathematically well–defined ensembles of test stimuli for psychophysical contour detection experiments. Using behavioral data collected from five human observers, we subsequently identify the parameters of the generative model which most closely explains human decisions for each stimulus. We find that these parameters match the findings from previous empirical work. An extensive statistical analysis reveals that the best–matching model reproduces practically all systematic behavior among our subjects. From the particular structure and dynamics of our model, we derive predictions about putative neural mechanisms realizing probabilistic contour integration in the brain. Finally, we discuss these findings in comparison with physiological and anatomical evidence from visual cortex.</p>
<fig id="pcbi-1002520-g001" position="float">
<object-id pub-id-type="doi">10.1371/journal.pcbi.1002520.g001</object-id>
<label>Figure 1</label>
<caption>
<title>Framework for combining theory, modeling and psychophysics to study contour integration.</title>
<p>Upper row, contour creation: A contour is created either on the left or right hemifield of a computer screen by a Markov random process using a suitably defined association field (AF, in brackets) for specifying the transition probabilities. Adding randomly oriented, similarly spaced background elements effectively hides the contour and completes a stimulus. Lower left column, contour integration: The ideal algorithm for contour integration uses knowledge about the generating process (i.e. the same AF as used in generating the contours, in brackets), to perform inference on a stimulus. For each edge, it computes the probability of being the first (or last) element of a contour created by the generating Markov process. The likely position of a contour is finally determined by maximum-likelihood estimation on the sum of these probabilities for each hemifield. Lower middle and right column, comparison to humans and probabilistic models: In our paradigm, the ideal contour observer serves as a benchmark for human contour detection, which is probed using the same stimuli under time constraints. At the same time, the inference algorithm of the ideal observer suggests a class of probabilistic contour integration models in which we search for the optimal model which best explains human behavior and performance. Note that ‘optimal’ does not mean that the contour integration model strives for an optimal contour detection performance: it should also make the same errors as human observers, as in this illustrative example, where a shorter ‘chance’ contour in the background is judged more salient by the human subject.</p>
</caption>
<graphic xlink:href="pcbi.1002520.g001"></graphic>
</fig>
</sec>
<sec id="s2">
<title>Results</title>
<sec id="s2a">
<title>A generative model of contour creation and integration</title>
<p>The statistics of contours in natural images is highly complex, and there is no complete description that could be taken as a starting point for a modelling study. The best information available is from studies with human observers who were instructed to redraw contours in a set of natural images
<xref ref-type="bibr" rid="pcbi.1002520-Geisler1">[15]</xref>
<xref ref-type="bibr" rid="pcbi.1002520-Martin1">[18]</xref>
. However, this statistics was only extracted for pairwise edge configurations, and is only available as a tabulation and not in a closed–form expression. We instead chose to employ the probabilistic framework by Williams and Thornber
<xref ref-type="bibr" rid="pcbi.1002520-Williams1">[13]</xref>
and defined contours as being generated by a Markov random process (details in
<xref ref-type="sec" rid="s4">Methods</xref>
section): To create a contour of length
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e001.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, one places its first edge
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e002.jpg" mimetype="image"></inline-graphic>
</inline-formula>
with random angle
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e003.jpg" mimetype="image"></inline-graphic>
</inline-formula>
at a random position (
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e004.jpg" mimetype="image"></inline-graphic>
</inline-formula>
). The second edge
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e005.jpg" mimetype="image"></inline-graphic>
</inline-formula>
of the contour is then placed by randomly drawing its position and angle from a conditional probability density
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e006.jpg" mimetype="image"></inline-graphic>
</inline-formula>
with (as yet unspecified) parameters
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e007.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. This process is iterated until the final,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e008.jpg" mimetype="image"></inline-graphic>
</inline-formula>
-th edge has been placed. Note that we actually define the parameter
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e009.jpg" mimetype="image"></inline-graphic>
</inline-formula>
as a direction extending over the full circle
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e010.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, rather than representing an orientation only. This definition is necessary for constraining contour creation to proceed along a chosen, general direction. It prevents the creation process from turning around by 180 degrees when placing successive edge elements. For a more elaborate justification and discussion of this property, we refer to
<xref ref-type="bibr" rid="pcbi.1002520-Williams1">[13]</xref>
. Note that we would also like to understand the term ‘edge’ in a more general sense as any realization of an image patch, which is
<italic>localized</italic>
at a position
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e011.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and has an
<italic>orientation</italic>
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e012.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. This definition encompasses line segments and luminance borders, as well as the Gabor patches which we used for rendering the stimulus configurations generated by our probabilistic model.</p>
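In code, the generative process just described reduces to a short sampling loop. The sketch below is illustrative only: sample_first_edge and sample_next_edge are hypothetical helpers standing in for the uniform placement of the first edge and for the conditional density used to place each subsequent edge; neither name nor any parameter value is taken from the paper.

import numpy as np

def generate_contour(L, sample_first_edge, sample_next_edge, rng=None):
    """Sample one contour of L edges from the Markov process described above.

    sample_first_edge(rng)      -> (x, y, phi): random position and direction
    sample_next_edge(edge, rng) -> next (x, y, phi), drawn from the conditional
                                   link density (the association field)
    """
    if rng is None:
        rng = np.random.default_rng()
    edges = [sample_first_edge(rng)]        # first edge: random position, random direction
    for _ in range(L - 1):                  # place edges 2 .. L
        edges.append(sample_next_edge(edges[-1], rng))
    return edges                            # list of (x, y, phi) tuples

A complete stimulus would then add randomly positioned and oriented distractor edges around the sampled contour, as in the experiments described below.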
<p>Reasonable choices of
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e013.jpg" mimetype="image"></inline-graphic>
</inline-formula>
promoting features like colinearity and cocircularity
<xref ref-type="bibr" rid="pcbi.1002520-Parent1">[19]</xref>
(
<xref ref-type="fig" rid="pcbi-1002520-g002">
<bold>Fig. 2</bold>
A,B</xref>
), yield contour samples which look quite ‘natural’ (
<xref ref-type="fig" rid="pcbi-1002520-g003">
<bold>Fig. 3</bold>
</xref>
). In particular, these samples are perceived by humans as contours and are salient when hidden among distracting elements. The probability density
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e014.jpg" mimetype="image"></inline-graphic>
</inline-formula>
may be identified with the AF
<xref ref-type="bibr" rid="pcbi.1002520-Field1">[4]</xref>
which is commonly used in the psychophysical literature to quantify how strongly a given configuration of two edges provides evidence for the presence of a contour. Given that the distribution of contour elements and their total number are known a priori, the properties of
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e015.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, parametrized by
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e016.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, fully define the contour statistics.</p>
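For concreteness, one possible parametric form of such an association field, written as a product of a radial and an angular factor in the spirit of Figure 2, is sketched below. The Gaussian shapes, the co-circularity rule, and all parameter values are assumptions made for illustration; the actual form and the fitted parameters are those defined in the Methods section of the paper, which is not reproduced here.

import numpy as np

def association_field(e_i, e_j, sigma_r=2.0, sigma_cocirc=0.5, sigma_dir=0.8, r0=1.0):
    """Illustrative pairwise link probability p(e_j | e_i).

    Each edge is a tuple (x, y, phi), with phi a direction on the full circle.
    """
    xi, yi, phi_i = e_i
    xj, yj, phi_j = e_j
    dx, dy = xj - xi, yj - yi
    r = np.hypot(dx, dy)                          # distance between the two edges
    alpha = np.arctan2(dy, dx)                    # direction from e_i towards e_j
    # radial factor: preferred spacing around r0 (in units of the mean edge distance)
    radial = np.exp(-(r - r0) ** 2 / (2.0 * sigma_r ** 2))
    # a perfectly co-circular continuation would give e_j the direction 2*alpha - phi_i
    dev_cocirc = np.angle(np.exp(1j * (phi_j - (2.0 * alpha - phi_i))))
    # additional penalty on the overall direction difference between the two edges
    dev_dir = np.angle(np.exp(1j * (phi_j - phi_i)))
    angular = np.exp(-dev_cocirc ** 2 / (2.0 * sigma_cocirc ** 2)
                     - dev_dir ** 2 / (2.0 * sigma_dir ** 2))
    return radial * angular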
<fig id="pcbi-1002520-g002" position="float">
<object-id pub-id-type="doi">10.1371/journal.pcbi.1002520.g002</object-id>
<label>Figure 2</label>
<caption>
<title>Parameters and geometry of association field, and eccentricity scaling.</title>
<p>(A) Geometrical relation between edges
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e017.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e018.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and their relative coordinates
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e019.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e020.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e021.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. The red arrow indicates the direction edge
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e022.jpg" mimetype="image"></inline-graphic>
</inline-formula>
should have for a perfect co-circular continuation of a contour through
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e023.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e024.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. Conditional link probability
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e025.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(the ‘association field’) depends on the deviation of
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e026.jpg" mimetype="image"></inline-graphic>
</inline-formula>
from this direction with a scale of
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e027.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(in red). In addition, link probability also depends on the difference
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e028.jpg" mimetype="image"></inline-graphic>
</inline-formula>
between the directions of
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e029.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e030.jpg" mimetype="image"></inline-graphic>
</inline-formula>
on a length scale
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e031.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(in green). (B) The association field
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e032.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is defined as a product of a radial part
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e033.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and an angular part
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e034.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(see
<xref ref-type="sec" rid="s4">Methods</xref>
). The starting edge
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e035.jpg" mimetype="image"></inline-graphic>
</inline-formula>
with direction
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e036.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is shown as the blue arrow in the center of the coordinate systems. Left, the radial part
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e037.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is shown in dependence on the distances
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e038.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e039.jpg" mimetype="image"></inline-graphic>
</inline-formula>
to the destination edge. Center, the angular part averaged over all destination directions
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e040.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. Right, the product of the distributions in the left and center graphs. Grey scale is proportional to link probability, normalized to 1 (darker shades indicate higher values). Parameters of all sketches are taken from optimal model which was fit to explain the psychophysical data. (C) Edge salience
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e041.jpg" mimetype="image"></inline-graphic>
</inline-formula>
in dependence on edge eccentricity
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e042.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and parameters
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e043.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e044.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. Left, for a constant
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e045.jpg" mimetype="image"></inline-graphic>
</inline-formula>
the parameter
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e046.jpg" mimetype="image"></inline-graphic>
</inline-formula>
controls the slope (black,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e047.jpg" mimetype="image"></inline-graphic>
</inline-formula>
; blue,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e048.jpg" mimetype="image"></inline-graphic>
</inline-formula>
; red,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e049.jpg" mimetype="image"></inline-graphic>
</inline-formula>
). Center, for a constant
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e050.jpg" mimetype="image"></inline-graphic>
</inline-formula>
the parameter
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e051.jpg" mimetype="image"></inline-graphic>
</inline-formula>
controls the concavity (black,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e052.jpg" mimetype="image"></inline-graphic>
</inline-formula>
; blue,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e053.jpg" mimetype="image"></inline-graphic>
</inline-formula>
; red,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e054.jpg" mimetype="image"></inline-graphic>
</inline-formula>
). Right, the scaling
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e055.jpg" mimetype="image"></inline-graphic>
</inline-formula>
obtained by fitting the probabilistic contour integration model to human behavior.</p>
</caption>
<graphic xlink:href="pcbi.1002520.g002"></graphic>
</fig>
<fig id="pcbi-1002520-g003" position="float">
<object-id pub-id-type="doi">10.1371/journal.pcbi.1002520.g003</object-id>
<label>Figure 3</label>
<caption>
<title>Contour detection paradigm and stimulus parameters.</title>
<p>(A) Each trial started with the appearance of a fixation point. Subsequently, a stimulus was presented with a contour hidden in the left or right hemifield of the screen. This stimulus was masked after a time
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e056.jpg" mimetype="image"></inline-graphic>
</inline-formula>
after stimulus onset (SOA). The mask consisted of edge elements at the same positions, but with random orientations. Edge elements were rendered as Gabor patches with random phases. For better visibility, the size of the Gabors was scaled by a factor of two in this illustration. (B) Sample section of a different stimulus with a straight but jittered contour of 10 elements, at the smallest mean edge distance. (C) Sample section of a stimulus with the largest used edge distance and a contour of 4 elements. In all panels, the location of the contour is indicated by white arrows, which were absent in the real experiment.</p>
</caption>
<graphic xlink:href="pcbi.1002520.g003"></graphic>
</fig>
<p>This probabilistic framework for contour generation not only provides contours with a well-defined statistics, but also implies an ideal model for contour detection: Suppose that the
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e057.jpg" mimetype="image"></inline-graphic>
</inline-formula>
contour edges are hidden in a field of
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e058.jpg" mimetype="image"></inline-graphic>
</inline-formula>
randomly oriented distractor edges. Given that the generating AF
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e059.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is known, one can compute the likelihoods
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e060.jpg" mimetype="image"></inline-graphic>
</inline-formula>
that edge
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e061.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is the starting edge of a contour with length
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e062.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. This is done by first constructing a matrix
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e063.jpg" mimetype="image"></inline-graphic>
</inline-formula>
of the pairwise association probabilities for all edge combinations (
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e064.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e065.jpg" mimetype="image"></inline-graphic>
</inline-formula>
) by sampling from
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e066.jpg" mimetype="image"></inline-graphic>
</inline-formula>
via
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e067.jpg" mimetype="image"></inline-graphic>
</inline-formula>
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e068.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. The likelihoods
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e069.jpg" mimetype="image"></inline-graphic>
</inline-formula>
are then given by an ordinary matrix multiplication (with
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e070.jpg" mimetype="image"></inline-graphic>
</inline-formula>
denoting the matrix element from the
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e071.jpg" mimetype="image"></inline-graphic>
</inline-formula>
row and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e072.jpg" mimetype="image"></inline-graphic>
</inline-formula>
column from the expression inside the square brackets),
<disp-formula>
<graphic xlink:href="pcbi.1002520.e073"></graphic>
<label>(1)</label>
</disp-formula>
Here we adapted the basic framework from
<xref ref-type="bibr" rid="pcbi.1002520-Williams1">[13]</xref>
to contours of finite length which consist of a discrete set of elements. With few modifications it is equally possible to handle continuous contours, or to integrate closed contours
<xref ref-type="bibr" rid="pcbi.1002520-Williams1">[13]</xref>
. Note that this algorithm computes the true likelihoods only if the assumptions about the underlying process are correct. In other words, it realizes an ideal observer only when contour integration is performed with the same parameters
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e074.jpg" mimetype="image"></inline-graphic>
</inline-formula>
that were used for contour generation. When applied with deviating assumptions it may still be used to perform approximate inference, which, however, would be prone to systematic misestimations.</p>
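<p>To make the matrix formulation concrete, the following Python sketch mirrors one reading of eqn. (1), which is rendered as an image above: pairwise association probabilities are collected in a matrix, and the starting-edge likelihoods for a contour of a given length are obtained by repeated matrix multiplication. The helper <italic>af_value</italic> and the uniform initialization are illustrative assumptions, not taken from the paper.</p>
<preformat>
import numpy as np

def association_matrix(edges, af_value):
    """Pairwise association probabilities M[i, j] for all edge combinations.

    `edges` is a list of (position, orientation) tuples; `af_value(e_i, e_j)`
    stands in for sampling the generative association field for the relative
    geometry of two edges (a hypothetical helper, not defined in the paper)."""
    n = len(edges)
    M = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                M[i, j] = af_value(edges[i], edges[j])
    return M

def starting_edge_likelihoods(M, n_contour_edges):
    """One reading of eqn. (1): starting from a uniform prior over edges,
    n_contour_edges - 1 matrix multiplications yield the likelihood that each
    edge is the starting edge of a contour with n_contour_edges elements."""
    p = np.ones(M.shape[0])
    for _ in range(n_contour_edges - 1):
        p = M @ p
    return p

# Maximum-likelihood estimate of the contour's starting edge:
#   i_start = np.argmax(starting_edge_likelihoods(M, n_contour_edges))
</preformat>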
<p>The position of the contour can be estimated from the location (
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e075.jpg" mimetype="image"></inline-graphic>
</inline-formula>
) of the edge
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e076.jpg" mimetype="image"></inline-graphic>
</inline-formula>
with the largest likelihood of having been the starting edge of the contour (
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e077.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, maximum likelihood estimator). By eqn. (1), all possible contour paths are exploited in parallel. This procedure is related to iterative Bayesian estimation in the sense that each matrix multiplication iterates a prior that contains the starting edge likelihoods for contours with
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e078.jpg" mimetype="image"></inline-graphic>
</inline-formula>
elements into a posterior that contains the likelihoods for
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e079.jpg" mimetype="image"></inline-graphic>
</inline-formula>
contours. In total,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e080.jpg" mimetype="image"></inline-graphic>
</inline-formula>
iterations are required when looking for contours of
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e081.jpg" mimetype="image"></inline-graphic>
</inline-formula>
edges.</p>
<p>In the context of two–alternative forced choice (2-AFC) experiments it is less interesting to determine the precise position of a contour than to infer which one of two different stimuli
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e082.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e083.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is more likely to contain a contour. Given the corresponding matrices
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e084.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e085.jpg" mimetype="image"></inline-graphic>
</inline-formula>
for stimulus configurations
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e086.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e087.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, respectively, one first computes
<disp-formula>
<graphic xlink:href="pcbi.1002520.e088"></graphic>
<label>(2)</label>
</disp-formula>
and then compares which of the likelihoods
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e089.jpg" mimetype="image"></inline-graphic>
</inline-formula>
or
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e090.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is larger.</p>
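<p>Reusing the likelihood function from the sketch above, the 2-AFC decision rule can be written as a one-line comparison. Reducing each stimulus to its maximum starting-edge likelihood before the comparison is an assumption made for illustration; eqn. (2) itself is available only as an image.</p>
<preformat>
def detect_contour_2afc(M_A, M_B, n_contour_edges):
    """Decide which of two stimuli (given their association matrices M_A, M_B)
    more likely contains a contour.  Each stimulus is summarized here by its
    maximum starting-edge likelihood (an illustrative assumption)."""
    L_A = starting_edge_likelihoods(M_A, n_contour_edges).max()
    L_B = starting_edge_likelihoods(M_B, n_contour_edges).max()
    return 'A' if L_A > L_B else 'B'
</preformat>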
<p>In addition, we introduce a scaling factor
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e091.jpg" mimetype="image"></inline-graphic>
</inline-formula>
for each edge
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e092.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. This factor is used to cover situations in which edge elements have different degrees of visibility.
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e093.jpg" mimetype="image"></inline-graphic>
</inline-formula>
would be low if, for example, an edge is fuzzy, has a low contrast, or possesses no well-defined orientation, like at the border of a cloud in a natural scene. In a probabilistic framework
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e094.jpg" mimetype="image"></inline-graphic>
</inline-formula>
can be interpreted as a likelihood for the presence of an edge
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e095.jpg" mimetype="image"></inline-graphic>
</inline-formula>
<xref ref-type="bibr" rid="pcbi.1002520-Williams1">[13]</xref>
, which modifies the original contour integration algorithm eqn. (1),
<disp-formula>
<graphic xlink:href="pcbi.1002520.e096"></graphic>
<label>(3)</label>
</disp-formula>
Also
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e097.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is part of the generative model and thus might depend on its parametrization
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e098.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e099.jpg" mimetype="image"></inline-graphic>
</inline-formula>
.</p>
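<p>Relative to the sketch of eqn. (1) above, the visibility factors enter as a per-edge rescaling of the evidence in every iteration. A minimal modification might look as follows; the exact place where the factors multiply in eqn. (3) is an assumption for illustration.</p>
<preformat>
import numpy as np

def starting_edge_likelihoods_scaled(M, n_contour_edges, c):
    """Variant corresponding to eqn. (3): each edge's contribution is weighted
    by its visibility factor c[i] in every iteration; c = 1 everywhere recovers
    the unscaled algorithm of eqn. (1)."""
    c = np.asarray(c, dtype=float)
    p = c.copy()
    for _ in range(n_contour_edges - 1):
        p = c * (M @ p)
    return p
</preformat>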
<p>In our paradigm, the objective for contour integration is to infer the likelihoods for (starting) edges being part of a contour, taking the observation of their orientations and positions as available evidence. By exploiting all knowledge about the statistical nature of contours contained in the AF, eqn. (3) realizes an ideal contour observer which performs iterative Bayesian estimation on the evidence provided by the edges in a stimulus. This ideal observer not only serves us as a benchmark for humans and models performing the given task: by assuming that human contour integration follows a similar objective, eqn. (3) describes a suitable probabilistic model class which we can require, by fitting its parameters
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e100.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, to reproduce human behavior as well as possible.</p>
</sec>
<sec id="s2b">
<title>Psychophysical contour detection experiments</title>
<p>We performed psychophysical experiments using the stimuli generated by our models. While our paradigm is similar to those of previous studies, our approach is conceptually different: we used a precise mathematical definition of edge configurations for generating contour stimuli, providing us with ideal observer models for integrating these contours. These models then served us as a benchmark for both average human performance and individual human decision behaviour.</p>
<p>In a 2–AFC paradigm human subjects had to detect a contour which had been placed either into the left or into the right hemifield of a computer screen (
<xref ref-type="fig" rid="pcbi-1002520-g003">
<bold>Fig. 3</bold>
A</xref>
). The contour was hidden among randomly oriented distractors, which had been placed such that the only information left about the location of the contour was in the relative alignment of the contour's edges. Since we do not know a priori which exact parameters
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e101.jpg" mimetype="image"></inline-graphic>
</inline-formula>
for our association field are best suited to match human contour integration, we chose different combinations
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e102.jpg" mimetype="image"></inline-graphic>
</inline-formula>
to systematically probe human behavior and to vary the difficulty of the task. In particular, we varied alignment of edges and curvature of the contours (
<xref ref-type="fig" rid="pcbi-1002520-g003">
<bold>Fig. 3</bold>
B</xref>
) by changing the length scales of the AF
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e103.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e104.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, respectively (see
<xref ref-type="sec" rid="s4">Methods</xref>
). In addition we varied mean inter-edge distance from 1.2 to 3.6 degrees of visual angle while holding the spatial extension of the contour constant, resulting in contours from
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e105.jpg" mimetype="image"></inline-graphic>
</inline-formula>
down to
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e106.jpg" mimetype="image"></inline-graphic>
</inline-formula>
edges, respectively (
<xref ref-type="fig" rid="pcbi-1002520-g003">
<bold>Fig. 3</bold>
C</xref>
). For studying the temporal dynamics of contour integration, all stimuli were shown for varying time periods of 20, 30, 60, 100 and 200 ms. After this period (stimulus-onset asynchrony, SOA), masks were presented which consisted of edge elements located at the same positions, but with randomly assigned orientations. Stimuli from AFs with different underlying parameter sets
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e107.jpg" mimetype="image"></inline-graphic>
</inline-formula>
were presented in random order, which varied for each subject (for details of all procedures, see
<xref ref-type="sec" rid="s4">Methods</xref>
section).</p>
<p>
<xref ref-type="fig" rid="pcbi-1002520-g004">
<bold>Fig. 4</bold>
A</xref>
and
<xref ref-type="fig" rid="pcbi-1002520-g005">
<bold>Fig. 5</bold>
A,B</xref>
summarize our experimental findings. We first focus on the data for an SOA of 200 ms (
<xref ref-type="fig" rid="pcbi-1002520-g004">
<bold>Fig. 4</bold>
A</xref>
): As expected, and in accordance with previous investigations
<xref ref-type="bibr" rid="pcbi.1002520-Field1">[4]</xref>
, contours were more difficult to detect if edge alignment was subject to jitter and if contour curvature increased. Performance also decreased for increasing edge distances, but less strongly for straight contours. For such contours, almost perfect contour integration was still possible with inter-element distances up to 2.7 degrees of visual angle.
<xref ref-type="fig" rid="pcbi-1002520-g005">
<bold>Fig. 5</bold>
A,B</xref>
shows how contour detection performance improves with an increasing SOA between target and mask. Again, lower performance is observed for higher jitters or larger element distances, but the general shape of the curves is very similar. Surprisingly, we found considerable detection performance even when SOAs were as low as 20 ms. These results suggest that contour integration is a very fast process requiring long-range interactions between orientation detectors in visual cortex.</p>
<fig id="pcbi-1002520-g004" position="float">
<object-id pub-id-type="doi">10.1371/journal.pcbi.1002520.g004</object-id>
<label>Figure 4</label>
<caption>
<title>Contour detection performances.</title>
<p>Comparison of contour detection performance in percent correct (A) for human observers to (B) the ideal and the optimal models. The performances are shown in dependence on inter–element distance (i.e., total number of edges in a contour) and on the alignment parameters
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e108.jpg" mimetype="image"></inline-graphic>
</inline-formula>
of the AF (color legend as inset to (B)). The psychophysical data for an SOA of 200 ms in (A) was averaged over 5 human observers, with the vertical bars denoting standard errors. In (B), model performances (ideal model: crosses, optimal model: open circles) for all contour ensembles (i.e., all jitters
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e109.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and contour lengths
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e110.jpg" mimetype="image"></inline-graphic>
</inline-formula>
) are plotted against the corresponding human performances. In this scatter plot, all points above the solid line indicate model performance being above human performance. Detection performance for the optimal model was averaged over 5000 samples from each contour ensemble, instead of using only 48 samples as in the actual experiment.</p>
</caption>
<graphic xlink:href="pcbi.1002520.g004"></graphic>
</fig>
<fig id="pcbi-1002520-g005" position="float">
<object-id pub-id-type="doi">10.1371/journal.pcbi.1002520.g005</object-id>
<label>Figure 5</label>
<caption>
<title>Temporal aspects of contour detection.</title>
<p>Psychophysical contour detection performances in dependence on SOA in the upper row are compared to performance of the optimal model, which best matches human behaviour, in the lower row. Iterations performed in the optimal model were rescaled to time by assuming a constant propagation speed mediated by the AF interactions (corresponding to 13.9 DVA per 200 ms, which was the average length of all contours in the stimulus ensembles). (A) and (C) show performances for different AF alignment jitters, for contours of length
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e111.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(color legend as inset to (C)). (B) and (D) show performances for different inter–element distances which are inversely proportional to the total number of edges in a contour, for an AF jitter of
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e112.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(color legend as inset to (D)). Detection performance for the optimal model was averaged over 5000 samples from each contour ensemble, instead of using only 48 samples as in the experiment, to yield better statistics and smoother curves.</p>
</caption>
<graphic xlink:href="pcbi.1002520.g005"></graphic>
</fig>
<p>For comparison, we performed contour integration with the ideal observer model eqn. (3) on the same stimuli as used in the experiment. As we had perfect knowledge about orientation and position of each edge in the stimuli, the factors
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e113.jpg" mimetype="image"></inline-graphic>
</inline-formula>
were all set to 1. By construction, the performance of an ideal observer must be superior or equal to that of any other observer, including human subjects.
<xref ref-type="fig" rid="pcbi-1002520-g004">
<bold>Fig. 4</bold>
B</xref>
(crosses) clearly shows that this is indeed the case, and that the ideal observer performs much better than humans. This large difference might be explained by a mixture of the following four factors:</p>
<list list-type="order">
<list-item>
<p>Human observers might be subject to (decision) noise which is external to the contour integration process, whereas the ideal observer is noise–free.</p>
</list-item>
<list-item>
<p>Information available to the ideal observer and to human subjects could be different.</p>
</list-item>
<list-item>
<p>Objectives of the human observers could be different, e.g. our chosen definition of contours could substantially diverge from the edge configurations humans interpret as contours.</p>
</list-item>
<list-item>
<p>It might be impossible for the brain to actually perform the computations needed for (approximate) inference in the given task, e.g. because of neurophysiological and anatomical constraints.</p>
</list-item>
</list>
<p>To what extent do these factors explain the observed differences in performance?</p>
<p>Using a detailed statistical analysis of human and ideal observer decisions (introduced in the next subsection), we will show that the factor noise contributes only to a small extent. In contrast, the factor available information is much more important. Information provided to the model and to human observers is clearly different, because the ideal model eqn. (3) uses for each stimulus the exact AF parameters
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e114.jpg" mimetype="image"></inline-graphic>
</inline-formula>
that were used for creating the hidden contour, while subjects in our experiment did not have this information. They did not know which kind of contour to expect in the next stimulus, because it was selected from the stimulus pool in a random order (see
<xref ref-type="sec" rid="s4">Methods</xref>
). In consequence, the question arises whether one can find a
<italic>different</italic>
inference model from the
<italic>same</italic>
class of optimal models, however, with a fixed
<italic>single</italic>
set of parameters
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e115.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, that can fully explain human behavior for all stimuli used in our experiment. Information available to this constrained model and to humans would then be identical. When applied to stimuli from an ensemble generated by an AF with different parameters, such a model would clearly no longer be ideal, but note that it would still represent an optimal estimator when used for contour stimuli from the ensemble that corresponds to its AF. If successful, this search would thereby yield an exact mathematical quantification of the assumed probabilistic objective for contour integration in human observers. Namely, it would reveal both optimality per se and the specific ensemble of stimuli for which optimality holds, and it would also exclude computational constraints as a reason for the observed difference between ideal and human performance.</p>
<p>To constrain the search for this ‘optimal’ model as strongly as possible, we will now introduce a novel statistical measure which quantifies how well a model predicts systematic human behavior. This measure extends beyond the usual approach of comparing average detection performance.</p>
</sec>
<sec id="s2c">
<title>Quantifying human decisions</title>
<p>The usual criterion for evaluating contour integration models is performance in correctly detecting contours (e.g.
<xref ref-type="bibr" rid="pcbi.1002520-Geisler2">[16]</xref>
). We have to require that a model at least reaches human performance. It may even exceed human performance substantially, as human decisions are usually subject to a fair amount of noise. Hence for assessing how well a model
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e116.jpg" mimetype="image"></inline-graphic>
</inline-formula>
explains human performance, we determine the fraction
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e117.jpg" mimetype="image"></inline-graphic>
</inline-formula>
of stimulus sets
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e118.jpg" mimetype="image"></inline-graphic>
</inline-formula>
in which the model reaches or surpasses mean human performance. A stimulus set
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e119.jpg" mimetype="image"></inline-graphic>
</inline-formula>
hereby refers to the parameters
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e120.jpg" mimetype="image"></inline-graphic>
</inline-formula>
of the generating process.</p>
<p>A less obvious, but much stronger criterion is to consider any unexpected or excess correlations within the decisions of different human subjects to one specific stimulus set. Here we aim at a measure that reflects the decisions that are common to different subjects. This measure would more sensitively quantify the constructive contributions of contour integration to behaviour as compared to the simple performance measure. Performance is strongly influenced by the general difficulty of a task, and by destructive sources of noise which could be external to the processes underlying contour detection.</p>
<p>To illustrate our approach, consider a particularly simple example: Suppose that two human observers try to detect contours in a set of
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e121.jpg" mimetype="image"></inline-graphic>
</inline-formula>
stimuli with the same statistical properties. Let us further assume that both observers reached the same performance
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e122.jpg" mimetype="image"></inline-graphic>
</inline-formula>
in detecting a contour correctly. If detection errors are made randomly and independently of a particular stimulus, for example through noise in the contour integration or decision process, we expect to find on average
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e123.jpg" mimetype="image"></inline-graphic>
</inline-formula>
identical responses. We can now compare
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e124.jpg" mimetype="image"></inline-graphic>
</inline-formula>
to the actually measured number of identical responses
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e125.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. If on average over different stimulus sets,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e126.jpg" mimetype="image"></inline-graphic>
</inline-formula>
turns out to be significantly larger than
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e127.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, we can conclude that the two observers are more strongly correlated than expected under the independence assumption.</p>
<p>With this heuristic in mind, we can now more generally derive our measure of excess correlations: we first compute the expected distribution
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e128.jpg" mimetype="image"></inline-graphic>
</inline-formula>
over the number of identical decisions
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e129.jpg" mimetype="image"></inline-graphic>
</inline-formula>
in one stimulus set, assuming independent detection errors (see
<xref ref-type="sec" rid="s4">Methods</xref>
section). Using the actually measured
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e130.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, we then calculate the probability that sampling from
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e131.jpg" mimetype="image"></inline-graphic>
</inline-formula>
would yield a lower or equal value for
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e132.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. Finally we average these probabilities for different stimulus conditions, thus obtaining a measure
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e133.jpg" mimetype="image"></inline-graphic>
</inline-formula>
for excess correlations. This measure yields
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e134.jpg" mimetype="image"></inline-graphic>
</inline-formula>
if the distribution of
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e135.jpg" mimetype="image"></inline-graphic>
</inline-formula>
's is similar to
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e136.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. Any value of
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e137.jpg" mimetype="image"></inline-graphic>
</inline-formula>
being significantly larger than 1/2 confirms the existence of decision correlations not explained by mean performances. The symbol
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e138.jpg" mimetype="image"></inline-graphic>
</inline-formula>
was chosen because our measure is related to the integral over an ROC curve, quantifying the distance between an expected distribution and an actually measured distribution.</p>
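<p>A minimal Python sketch of this measure for one pair of observers, under the independence assumption stated above: with two response alternatives, independent errors make the number of identical decisions in a set of N stimuli binomially distributed with per-stimulus probability p1 p2 + (1 - p1)(1 - p2); the contribution of one stimulus set is then the probability of drawing at most the observed number of identical decisions, and the final measure is the average over stimulus sets. Treating ties as 'lower or equal' follows the wording above; the paper's exact ROC-style tie handling is not spelled out here.</p>
<preformat>
import numpy as np
from scipy.stats import binom

def excess_correlation_single_set(decisions_1, decisions_2, correct):
    """Excess-correlation contribution of one stimulus set for two observers.

    decisions_1, decisions_2 : chosen alternatives per stimulus (e.g. 0/1)
    correct                  : correct alternative per stimulus
    With two alternatives and independent errors, identical responses occur
    with probability q = p1*p2 + (1-p1)*(1-p2), so the expected number K of
    identical decisions follows a Binomial(N, q) distribution."""
    d1, d2, c = (np.asarray(x) for x in (decisions_1, decisions_2, correct))
    N = len(c)
    p1, p2 = np.mean(d1 == c), np.mean(d2 == c)
    q = p1 * p2 + (1 - p1) * (1 - p2)
    K_measured = int(np.sum(d1 == d2))
    # Probability that a sample from Binomial(N, q) is at most K_measured
    return binom.cdf(K_measured, N, q)

# The pairwise measure: average excess_correlation_single_set(...) over all
# stimulus sets; values significantly above 0.5 indicate decision correlations
# beyond what mean performances alone would produce.
</preformat>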
<p>Applying this analysis to pairs of human subjects, we found the values shown in
<xref ref-type="table" rid="pcbi-1002520-t001">
<bold>Table 1</bold>
</xref>
. Apart from subject
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e139.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, excess correlations are similar between pairs of subjects. Averaging over observer pairs yields a value of
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e140.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, which is significantly larger than 0.5 (
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e141.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e142.jpg" mimetype="image"></inline-graphic>
</inline-formula>
). Significance was assessed by performing the same analysis on surrogate data generated by shuffling human decisions over the 48 stimuli for each stimulus ensemble. This procedure kept mean performance for each observer and stimulus ensemble constant, and allowed us to compute a value
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e143.jpg" mimetype="image"></inline-graphic>
</inline-formula>
which correlations had to exceed to be considered statistically significant w.r.t. the corresponding
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e144.jpg" mimetype="image"></inline-graphic>
</inline-formula>
-value. This result shows that human responses are far more correlated than expected from their average performances. It implies that particular stimuli, or whole stimulus subsets in a contour ensemble, are either much easier or much more difficult to detect than others. For finding a good model of human contour integration, this means that correct or erroneous decisions carry additional information, not contained in observer performance, which can be exploited.</p>
<table-wrap id="pcbi-1002520-t001" position="float">
<object-id pub-id-type="doi">10.1371/journal.pcbi.1002520.t001</object-id>
<label>Table 1</label>
<caption>
<title>Excess correlations
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e145.jpg" mimetype="image"></inline-graphic>
</inline-formula>
between all subject pairs
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e146.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e147.jpg" mimetype="image"></inline-graphic>
</inline-formula>
.</title>
</caption>
<alternatives>
<graphic id="pcbi-1002520-t001-1" xlink:href="pcbi.1002520.t001"></graphic>
<table frame="hsides" rules="groups">
<colgroup span="1">
<col align="left" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
</colgroup>
<thead>
<tr>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e148.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">Subject
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e149.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">Subject
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e150.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">Subject
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e151.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">Subject
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e152.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">Subject
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e153.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">Subject
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e154.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">0.686</td>
<td align="left" rowspan="1" colspan="1">0.694</td>
<td align="left" rowspan="1" colspan="1">0.616</td>
<td align="left" rowspan="1" colspan="1">0.689</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Subject
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e155.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">0.690</td>
<td align="left" rowspan="1" colspan="1">0.650</td>
<td align="left" rowspan="1" colspan="1">0.629</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Subject
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e156.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">0.623</td>
<td align="left" rowspan="1" colspan="1">0.723</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Subject
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e157.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">0.658</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Subject
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e158.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
</tr>
</tbody>
</table>
</alternatives>
<table-wrap-foot>
<fn id="nt101">
<label></label>
<p>Excess correlations
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e159.jpg" mimetype="image"></inline-graphic>
</inline-formula>
between all subject pairs
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e160.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e161.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. Note that by definition,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e162.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. All values are significantly different from 0.5 (
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e163.jpg" mimetype="image"></inline-graphic>
</inline-formula>
).</p>
</fn>
</table-wrap-foot>
</table-wrap>
<p>If the ideal observer were a good model whose superior performance is solely a consequence of not being subject to noise, it would fully capture this systematic behaviour, and (besides its far higher performance) reveal equal or higher excess correlations when its decisions are compared with human decisions. However, evaluation of
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e164.jpg" mimetype="image"></inline-graphic>
</inline-formula>
between human and ideal observer yields a value of
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e165.jpg" mimetype="image"></inline-graphic>
</inline-formula>
only, which is very close to chance level and far from
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e166.jpg" mimetype="image"></inline-graphic>
</inline-formula>
.</p>
<p>In essence, we need a different model which may have lower performance in our specific task, but must have higher predictive power for human behaviour. Using a model that is not ideal in the sense that it deviates from the process that generated the contours will lead to systematic misdetections in a 2–AFC setting, which is one possible cause of the humans' excess correlations. In the following, we will use mean performances
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e167.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and excess correlations
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e168.jpg" mimetype="image"></inline-graphic>
</inline-formula>
to systematically search for such contour integration models with
<italic>fixed</italic>
parameters which quantitatively and individually explain human behavior in
<italic>all</italic>
experimental conditions.</p>
</sec>
<sec id="s2d">
<title>A generative model for human contour integration</title>
<p>The excess correlations of
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e169.jpg" mimetype="image"></inline-graphic>
</inline-formula>
among human decisions constitute a benchmark for any proposed model
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e170.jpg" mimetype="image"></inline-graphic>
</inline-formula>
: Instead of comparing pairs of two human observers, we will now compare a model
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e171.jpg" mimetype="image"></inline-graphic>
</inline-formula>
to human observers and require the excess correlations
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e172.jpg" mimetype="image"></inline-graphic>
</inline-formula>
to reach the same value as
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e173.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. As mentioned above, in contrast to human observers a model is not subject to external noise affecting the decisions. This makes a direct comparison of excess correlation values
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e174.jpg" mimetype="image"></inline-graphic>
</inline-formula>
problematic because it will necessarily lead to higher values in
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e175.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. This statistical bias can be reduced by constructing a prototypical, noise-free human observer
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e176.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. Its excess correlations
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e177.jpg" mimetype="image"></inline-graphic>
</inline-formula>
with the real, ‘noisy’ human observers (for details, see
<xref ref-type="sec" rid="s4">Methods</xref>
) constitute a more stringent benchmark value for the noise-free models. For our data, we obtained
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e178.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e179.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e180.jpg" mimetype="image"></inline-graphic>
</inline-formula>
).</p>
<p>We already explained that one reason for human observers exhibiting
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e181.jpg" mimetype="image"></inline-graphic>
</inline-formula>
could be that a stimulus ensemble obtained from one generative process contains subsets of contours which are consistently easier to detect than other subsets. This is indeed the case in our experiment, where each contour ensemble contains contours placed at different eccentricities from the fixation spot. It is known that for humans, contours close to the fovea are easier to integrate
<xref ref-type="bibr" rid="pcbi.1002520-Nugent1">[20]</xref>
<xref ref-type="bibr" rid="pcbi.1002520-Hess2">[22]</xref>
than contours in the periphery, whereas the ideal model is translationally invariant and thus unaffected by the placement of a contour.</p>
<p>By searching for a model
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e182.jpg" mimetype="image"></inline-graphic>
</inline-formula>
with
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e183.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e184.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, we favored models that replicate generic human behavior, including correct as well as erroneous decisions, rather than looking for an algorithm which merely has the same, or higher, average contour integration performance. During our search we remained within the same class of (optimal) probabilistic models, but incorporated plausible constraints that relate to available prior knowledge. If successful, such a strategy will ultimately allow us to explicitly state a probabilistic objective for human contour integration.</p>
<p>For finding an optimal model, we focused on two major determinants shaping human contour integration which have been identified by previous work
<xref ref-type="bibr" rid="pcbi.1002520-Field1">[4]</xref>
,
<xref ref-type="bibr" rid="pcbi.1002520-Kapadia1">[23]</xref>
,
<xref ref-type="bibr" rid="pcbi.1002520-Foley1">[24]</xref>
:</p>
<list list-type="bullet">
<list-item>
<p>Shape of the association field (AF): Although stimuli in our experiment were drawn from different AFs, we hypothesize that a single, general-purpose AF will be sufficient to model human contour integration. For parametrization, we chose the product of von-Mises functions, eqn. (9), which we originally used to create the stimuli (see
<xref ref-type="sec" rid="s4">methods</xref>
section), but varied the alignment and curvature parameters
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e185.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e186.jpg" mimetype="image"></inline-graphic>
</inline-formula>
independently (
<xref ref-type="fig" rid="pcbi-1002520-g002">
<bold>Fig. 2</bold>
A,B</xref>
). For the radial part
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e187.jpg" mimetype="image"></inline-graphic>
</inline-formula>
of the AF (
<xref ref-type="fig" rid="pcbi-1002520-g002">
<bold>Fig. 2</bold>
B</xref>
, left), we used an exponentially decaying function with spatial constant
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e188.jpg" mimetype="image"></inline-graphic>
</inline-formula>
degrees of visual angle,
<disp-formula>
<graphic xlink:href="pcbi.1002520.e189"></graphic>
<label>(4)</label>
</disp-formula>
</p>
</list-item>
<list-item>
<p>Modulation of edge saliency with eccentricity: Contour integration performance strongly depends on mean contour eccentricity
<xref ref-type="bibr" rid="pcbi.1002520-Nugent1">[20]</xref>
<xref ref-type="bibr" rid="pcbi.1002520-Hess2">[22]</xref>
. In our data, error rate on average increased from about 27 to 44 percent when eccentricity increased from about 2 to 11 degrees of visual angle (SOA = 200 ms).</p>
</list-item>
</list>
<p>The source for this effect may be rooted in the cortical magnification factor
<xref ref-type="bibr" rid="pcbi.1002520-Cowey1">[25]</xref>
, which decreases with eccentricity. This leaves fewer neurons per unit area of the visual field providing information about a stimulus, leading to noisier representations of visual features. In a task with a short SOA, the detectability of edges would hence decrease with eccentricity. In our framework, this is modeled by decreasing the scaling factors
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e190.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(see eqn. (3)) for contour integration. With eccentricity
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e191.jpg" mimetype="image"></inline-graphic>
</inline-formula>
defined as the Euclidean distance from the fixation spot, we parametrized the dependency of
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e192.jpg" mimetype="image"></inline-graphic>
</inline-formula>
on
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e193.jpg" mimetype="image"></inline-graphic>
</inline-formula>
by
<disp-formula>
<graphic xlink:href="pcbi.1002520.e194"></graphic>
<label>(5)</label>
</disp-formula>
using the two parameters
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e195.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e196.jpg" mimetype="image"></inline-graphic>
</inline-formula>
for systematically varying this function (
<xref ref-type="fig" rid="pcbi-1002520-g002">
<bold>Fig. 2</bold>
C</xref>
). The amplitude
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e197.jpg" mimetype="image"></inline-graphic>
</inline-formula>
determines how strongly
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e198.jpg" mimetype="image"></inline-graphic>
</inline-formula>
varies with eccentricity (
<xref ref-type="fig" rid="pcbi-1002520-g002">
<bold>Fig. 2</bold>
C</xref>
, left), and the exponent
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e199.jpg" mimetype="image"></inline-graphic>
</inline-formula>
regulates how steeply
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e200.jpg" mimetype="image"></inline-graphic>
</inline-formula>
changes with eccentricity (
<xref ref-type="fig" rid="pcbi-1002520-g002">
<bold>Fig. 2</bold>
C</xref>
, center). For
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e201.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e202.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is concave down, and for
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e203.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e204.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is concave up.
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e205.jpg" mimetype="image"></inline-graphic>
</inline-formula>
denotes the maximum eccentricity in our setup, which was 16.66 degrees of visual angle. For the special choice
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e206.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e207.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e208.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is constant and eqn. (3) would be identical to eqn. (1).</p>
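<p>Eqn. (5) is available only as an image, so the following sketch uses one plausible parametrization consistent with its verbal description: the amplitude sets how strongly, and the exponent how steeply, the visibility factor falls off towards the maximum eccentricity of 16.66 degrees, and a zero amplitude recovers a constant factor of one. The concrete functional form below is an assumption, not the paper's exact equation.</p>
<preformat>
def eccentricity_scaling(r, amplitude, exponent, r_max=16.66):
    """Hypothetical stand-in for eqn. (5): visibility factor as a function of
    eccentricity r (in degrees of visual angle).  `amplitude` controls how
    strongly and `exponent` how steeply the factor decreases towards r_max;
    amplitude = 0 yields a constant factor of 1, so that eqn. (3) reduces to
    eqn. (1).  This is an illustrative form only."""
    return 1.0 - amplitude * (r / r_max) ** exponent
</preformat>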
<p>The four parameters
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e209.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e210.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e211.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e212.jpg" mimetype="image"></inline-graphic>
</inline-formula>
now uniquely determine the model candidates
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e213.jpg" mimetype="image"></inline-graphic>
</inline-formula>
.</p>
<p>The results shown in
<xref ref-type="fig" rid="pcbi-1002520-g006">
<bold>Fig. 6</bold>
</xref>
demonstrate that reproducing correlated human decisions is a stronger constraint for model evaluation than merely reaching human performance levels. For this didactic example, we held
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e214.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e215.jpg" mimetype="image"></inline-graphic>
</inline-formula>
constant and only varied the association field parameters. While the performance score
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e216.jpg" mimetype="image"></inline-graphic>
</inline-formula>
in
<xref ref-type="fig" rid="pcbi-1002520-g006">
<bold>Fig. 6</bold>
</xref>
A reaches one for a multitude of parameter combinations, correlation with human behavior
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e217.jpg" mimetype="image"></inline-graphic>
</inline-formula>
reveals a more distinct pattern where only a few parameter combinations reach peak values. It can also be seen that varying these two parameters alone is not sufficient to reach the model benchmark of
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e218.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. Although reproducing
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e219.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is a strong selection criterion, we note that surpassing mean performance at the same time is a necessary second criterion, because high values of
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e220.jpg" mimetype="image"></inline-graphic>
</inline-formula>
do not always coincide with a sufficiently high mean performance (compare
<xref ref-type="fig" rid="pcbi-1002520-g006">
<bold>Fig. 6</bold>
A</xref>
with
<xref ref-type="fig" rid="pcbi-1002520-g006">
<bold>Fig. 6</bold>
B</xref>
).</p>
<fig id="pcbi-1002520-g006" position="float">
<object-id pub-id-type="doi">10.1371/journal.pcbi.1002520.g006</object-id>
<label>Figure 6</label>
<caption>
<title>Comparing different measures for fitting the model to psychophysical data.</title>
<p>(A) Performance score
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e221.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and (B) excess correlations
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e222.jpg" mimetype="image"></inline-graphic>
</inline-formula>
for different models
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e223.jpg" mimetype="image"></inline-graphic>
</inline-formula>
with independently varied association field parameters
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e224.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e225.jpg" mimetype="image"></inline-graphic>
</inline-formula>
.</p>
</caption>
<graphic xlink:href="pcbi.1002520.g006"></graphic>
</fig>
<p>We next varied the four parameters independently. Extensive simulations (the most relevant parts of the sampled parameter space are shown in
<xref ref-type="fig" rid="pcbi-1002520-g007">
<bold>Fig. 7</bold>
</xref>
) reveal one specific parameter combination,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e226.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, for which our best-performing model
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e227.jpg" mimetype="image"></inline-graphic>
</inline-formula>
reaches
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e228.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, thus fully explaining human decisions for this experiment (the AF and eccentricity scaling resulting from these parameters are sketched in
<xref ref-type="fig" rid="pcbi-1002520-g002">
<bold>Fig. 2</bold>
B,C</xref>
, right graphs). Neither varying the parameters of the AF, nor varying the parameters of the eccentricity scaling alone yielded values of
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e229.jpg" mimetype="image"></inline-graphic>
</inline-formula>
exceeding 0.63. From here on, we will refer to the model defined by this ‘best’ parameter set
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e230.jpg" mimetype="image"></inline-graphic>
</inline-formula>
as the ‘optimal model’, in order to distinguish it from the original, ideal model.</p>
<fig id="pcbi-1002520-g007" position="float">
<object-id pub-id-type="doi">10.1371/journal.pcbi.1002520.g007</object-id>
<label>Figure 7</label>
<caption>
<title>Searching for the best contour integration model
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e231.jpg" mimetype="image"></inline-graphic>
</inline-formula>
in a four-dimensional parameter space.</title>
<p>Excess correlations
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e232.jpg" mimetype="image"></inline-graphic>
</inline-formula>
are shown in color code (see color bar on the right). Each subfigure encloses the results for one specific choice of the scaling parameter
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e233.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and power coefficient
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e234.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, with
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e235.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e236.jpg" mimetype="image"></inline-graphic>
</inline-formula>
independently varied within the shown range. The parameter combination with highest
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e237.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is enclosed with a purple circle, and combinations for which contour integration performance was inferior to humans (
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e238.jpg" mimetype="image"></inline-graphic>
</inline-formula>
) are left white.</p>
</caption>
<graphic xlink:href="pcbi.1002520.g007"></graphic>
</fig>
<p>This result is surprising, because a structurally simple model with only four parameters captures the full variety of human behavior in our experiment, which covered a multitude of different stimulus sets. The performance of this noise-free model is still superior to human performance (
<xref ref-type="fig" rid="pcbi-1002520-g004">
<bold>Fig. 4</bold>
B</xref>
, open circles), but much closer to the experimental data (
<xref ref-type="fig" rid="pcbi-1002520-g004">
<bold>Fig. 4</bold>
A</xref>
) than that of the ideal observer (
<xref ref-type="fig" rid="pcbi-1002520-g004">
<bold>Fig. 4</bold>
B</xref>
, crosses).</p>
<p>In order to compare model behaviour to temporal aspects of contour integration observed in human subjects (
<xref ref-type="fig" rid="pcbi-1002520-g005">
<bold>Fig. 5</bold>
A,B</xref>
), one can link iteration depth in eqn. (3) to SOA in the experiment. For this purpose, we assume that linking contour elements in the brain is based on neuronal signals that propagate with a constant velocity from edge detector to edge detector (possibly over several relay stations or interneurons). One iteration (i.e. matrix multiplication) in the Bayesian algorithm eqn. (3) would then correspond to the time
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e239.jpg" mimetype="image"></inline-graphic>
</inline-formula>
a neuronal signal needs to bridge the average distance
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e240.jpg" mimetype="image"></inline-graphic>
</inline-formula>
of two edge elements linked by this iteration. As total contour length is constant in the experiment, the average element distance is proportional to the reciprocal of the total number of contour edges
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e241.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e242.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, and performing
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e243.jpg" mimetype="image"></inline-graphic>
</inline-formula>
iterations then corresponds to real time
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e244.jpg" mimetype="image"></inline-graphic>
</inline-formula>
via
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e245.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. Heuristically, using smaller SOAs in the experiment is similar to a reduction in the number of matrix multiplications, which in turn is formally equivalent to computing the likelihoods for edges belonging to contours with fewer elements.</p>
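<p>Under the calibration stated in the caption of Fig. 5 (the average contour of 13.9 degrees of visual angle is traversed in 200 ms) and the approximation that one iteration bridges the mean inter-element distance, the mapping from SOA to an equivalent iteration count can be sketched as follows; dividing the contour length by the number of gaps between edges is an assumption made for illustration.</p>
<preformat>
def iterations_for_soa(soa_ms, n_edges, contour_len_dva=13.9,
                       speed_dva_per_ms=13.9 / 200.0):
    """Map a stimulus-onset asynchrony (ms) to an equivalent number of
    iterations of eqn. (3), assuming a constant propagation speed along the
    contour and a mean element spacing of contour_len_dva / (n_edges - 1)."""
    dt_per_iteration = (contour_len_dva / (n_edges - 1)) / speed_dva_per_ms
    return int(round(soa_ms / dt_per_iteration))

# Example: a 10-element contour at the largest SOA of 200 ms corresponds to
#   iterations_for_soa(200, 10) == 9, i.e. the n_edges - 1 iterations needed
#   to integrate a contour of n_edges elements.
</preformat>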
<p>By choosing
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e246.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, we assume that the largest SOA in the experiment corresponds to
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e247.jpg" mimetype="image"></inline-graphic>
</inline-formula>
iterations (
<xref ref-type="fig" rid="pcbi-1002520-g005">
<bold>Fig. 5</bold>
C,D</xref>
). The temporal dynamics of the optimal model turns out to be remarkably similar to the time courses of human subjects' performance for different SOAs (
<xref ref-type="fig" rid="pcbi-1002520-g005">
<bold>Fig. 5</bold>
A,B</xref>
). This indicates that the dynamics of human contour integration is at least compatible with a recurrent computation scheme.</p>
</sec>
<sec id="s2e">
<title>Predictions of the model</title>
<p>Model parameters and the dynamics of the recurrent algorithm make specific predictions for neurophysiological and behavioral variables.</p>
<p>The parameters
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e248.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e249.jpg" mimetype="image"></inline-graphic>
</inline-formula>
suggest a specific shape for the AF that fits the behavioural data best. The investigations of Kapadia et al.
<xref ref-type="bibr" rid="pcbi.1002520-Kapadia1">[23]</xref>
provide independent data for a comparison of this shape to electrophysiological findings. They measured the modulation of the response (firing rate) of a cortical neuron in area V1 to an edge element within its classical receptive field in dependence on the presence of a second, flanking edge element presented at varying locations outside this region. The strength of this modulation reveals contextual interactions that could implement such an AF. Interestingly, the shape of this modulation curve at a distance of 0.5 degrees of visual angle from the receptive field's centre matches nicely with the shape of a cross-section through our AF with the optimal parameters (
<xref ref-type="fig" rid="pcbi-1002520-g008">
<bold>Fig. 8</bold>
A</xref>
).</p>
<fig id="pcbi-1002520-g008" position="float">
<object-id pub-id-type="doi">10.1371/journal.pcbi.1002520.g008</object-id>
<label>Figure 8</label>
<caption>
<title>Comparing the model to data from independent experiments.</title>
<p>(A) Comparison of association field parameters to electrophysiological data. Red, modulation index of firing rate of a cortical neuron to a preferred stimulus in dependence on the angular position of a second, flanking stimulus of same orientation (cross-section extracted from
<xref ref-type="fig" rid="pcbi-1002520-g002">Fig. 2C</xref>
in
<xref ref-type="bibr" rid="pcbi.1002520-Kapadia1">[23]</xref>
). Blue, cross-section through the optimal association field, scaled to the maxima of the red curve. (B) Comparison of the attenuation of edge likelihood with eccentricity to psychophysical data. Red, sensitivity modulation
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e250.jpg" mimetype="image"></inline-graphic>
</inline-formula>
required to explain psychophysical edge detection thresholds in dependence on eccentricity
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e251.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, (from
<xref ref-type="bibr" rid="pcbi.1002520-Foley1">[24]</xref>
). Blue, optimal likelihood modulation
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e252.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. For parameters and equations, see main text.</p>
</caption>
<graphic xlink:href="pcbi.1002520.g008"></graphic>
</fig>
<p>A second comparison can be made with AFs extracted from labeled contours in natural images
<xref ref-type="bibr" rid="pcbi.1002520-Geisler2">[16]</xref>
. In order to approximately match the angular characteristics (
<xref ref-type="fig" rid="pcbi-1002520-g003">
<bold>Fig. 3</bold>
B,C</xref>
in
<xref ref-type="bibr" rid="pcbi.1002520-Geisler2">[16]</xref>
) and spatial extension (
<xref ref-type="fig" rid="pcbi-1002520-g003">
<bold>Fig. 3</bold>
E</xref>
in
<xref ref-type="bibr" rid="pcbi.1002520-Geisler2">[16]</xref>
) of the edge co–occurrence statistics,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e253.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e254.jpg" mimetype="image"></inline-graphic>
</inline-formula>
have to be reduced by a factor of about 2–2.5. The reason for this deviation might be rooted in the mean edge distance considered in
<xref ref-type="bibr" rid="pcbi.1002520-Geisler2">[16]</xref>
, which is smaller by about the same factor than the mean distance used in our experiments. In fact, the largest distance considered in the edge co–occurrence statistics (
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e255.jpg" mimetype="image"></inline-graphic>
</inline-formula>
degrees of visual angle) is even smaller than the smallest edge distance in our experiments (
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e256.jpg" mimetype="image"></inline-graphic>
</inline-formula>
degrees of visual angle). Assuming that contour curvature is a critical parameter for contour integration, the maximum direction
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e257.jpg" mimetype="image"></inline-graphic>
</inline-formula>
for which two edge elements are still integrated into a contour will depend on edge distance. In particular, if the mean distance between edge elements is reduced by a factor
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e258.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, the maximum direction
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e259.jpg" mimetype="image"></inline-graphic>
</inline-formula>
would then have to be reduced by about the same factor.</p>
<p>Parameters
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e260.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e261.jpg" mimetype="image"></inline-graphic>
</inline-formula>
suggest a specific shape for the visibility of an edge, or the reliability of its neural representation, as a function of visual field eccentricity. Here too, a comparison with independent data is possible: Foley et al.
<xref ref-type="bibr" rid="pcbi.1002520-Foley1">[24]</xref>
quantified psychophysical thresholds for detecting edge elements at varying eccentricities in the visual field. This study computed a sensitivity modulation factor
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e262.jpg" mimetype="image"></inline-graphic>
</inline-formula>
for a Gabor-shaped receptive field required to explain psychophysical detection thresholds. The dependence of this factor
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e263.jpg" mimetype="image"></inline-graphic>
</inline-formula>
on eccentricity
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e264.jpg" mimetype="image"></inline-graphic>
</inline-formula>
was well–described by
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e265.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. Again, comparison with the scaling function
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e266.jpg" mimetype="image"></inline-graphic>
</inline-formula>
using the parameters of the best model
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e267.jpg" mimetype="image"></inline-graphic>
</inline-formula>
 reveals a very close similarity between the two curves (
<xref ref-type="fig" rid="pcbi-1002520-g008">
<bold>Fig. 8</bold>
B</xref>
).</p>
<p>Beyond comparing model properties to already existing experimental data, we obtained predictions which motivate further experiments. The probabilistic nature of our model requires stimulus evidence to be multiplicatively combined with recurrent feedback from neighboring edge elements. This feature differs from most current, biophysically motivated neural network models, which perform contour integration by summing the corresponding inputs. Further simulations (not shown) suggest that a nonlinearity in the synaptic integration of recurrent and feedforward inputs is indeed required for explaining human behavior
<xref ref-type="bibr" rid="pcbi.1002520-Schinkel1">[26]</xref>
,
<xref ref-type="bibr" rid="pcbi.1002520-SchinkelBielefeld1">[27]</xref>
.</p>
<p>A further prediction derives from the unidirectional nature of contour creation, which suggests a similarly unidirectional process for contour integration. Unidirectionality significantly enhances performance in comparison to bidirectional interactions. In such a scenario, activation of neuronal feature detectors would spread in one direction along the contour, in contrast to classical contour integration models where association fields and functional interactions are not directionally biased. If neuronal populations encoded likelihoods for oriented image patches to be part of a contour, then according to eqn. (3) these different coupling symmetries would predict different activation patterns for the populations receiving feedforward input from image patches that are part of contours (
<xref ref-type="fig" rid="pcbi-1002520-g009">
<bold>Fig. 9</bold>
</xref>
). Specifically, unidirectional interactions would cause highest activity in neurons representing the end elements of a contour (
<xref ref-type="fig" rid="pcbi-1002520-g009">
<bold>Fig. 9</bold>
A</xref>
). In contrast, bidirectional interactions would cause highest activity at central elements of a contour (
<xref ref-type="fig" rid="pcbi-1002520-g009">
<bold>Fig. 9</bold>
B</xref>
). In addition, the model dynamics predicts oscillatory patterns which dampen over time until a stationary state is finally reached.</p>
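<p>To illustrate how these two coupling symmetries lead to different activity profiles along a contour, the following Python sketch propagates activity along a chain of aligned edge detectors with either unidirectional or direction-symmetric nearest-neighbour couplings. It is a schematic illustration only, not the fitted model: the unit feedforward evidence, the coupling strength, the baseline term, and the per-iteration normalisation are assumptions made for display purposes.</p>
<preformat>
# Minimal sketch (not the published model): activity spreading along a chain of
# aligned edge detectors under unidirectional vs. bidirectional coupling.
import numpy as np

N, T, eps = 7, 6, 0.1                  # chain length, iterations, baseline input
fwd = np.eye(N, k=-1)                  # element i receives only from element i-1
bwd = np.eye(N, k=+1)                  # element i receives only from element i+1
sym = fwd + bwd                        # classical, direction-symmetric coupling

def propagate(W):
    x = np.ones(N)
    for _ in range(T):
        x = 1.0 * (eps + W @ x)        # feedforward evidence (here 1) times recurrent input
        x /= x.max()                   # keep values bounded; only the profile matters
    return x

print("unidirectional (forward): ", np.round(propagate(fwd), 2))  # peaks at the last element
print("unidirectional (backward):", np.round(propagate(bwd), 2))  # peaks at the first element
print("bidirectional (symmetric):", np.round(propagate(sym), 2))  # peaks at central elements
</preformat>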
<fig id="pcbi-1002520-g009" position="float">
<object-id pub-id-type="doi">10.1371/journal.pcbi.1002520.g009</object-id>
<label>Figure 9</label>
<caption>
<title>Predictions for different association field symmetries.</title>
<p>Average likelihoods
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e268.jpg" mimetype="image"></inline-graphic>
</inline-formula>
to be the starting element of a contour of length
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e269.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, shown for all edge elements belonging to
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e270.jpg" mimetype="image"></inline-graphic>
</inline-formula>
–element contours (left columns), and for background edges (
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e271.jpg" mimetype="image"></inline-graphic>
</inline-formula>
). Vertical axis denotes iterations in eqn. (1), and the color scale is normalized to the minimum/maximum likelihoods in each graph. (A) shows the corresponding dynamics for the optimal model, which uses uni–directional AFs. To obtain (B), we symmetrized the AF of the optimal model such that it became invariant to the directions of arbitrary edge pairs. For this bi–directional AF, simulations were performed on the same stimuli as used for (A). If neuronal populations encoded these likelihoods
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e272.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, uni–directional interactions would cause the highest activities at the ends of a contour, while bi–directional interactions predict the highest activities at the central elements of a contour.</p>
</caption>
<graphic xlink:href="pcbi.1002520.g009"></graphic>
</fig>
</sec>
</sec>
<sec id="s3">
<title>Discussion</title>
<p>In summary, we proposed a model for contour integration whose parameters were calibrated to explain human decisions beyond average performances. In our experimental setting, it fully reproduces average human behavior. At the same time, the model possesses a well-defined probabilistic objective, i.e. it computes the likelihoods that observed edge elements belong to contours. For understanding recurrent computation in the visual system, our particular approach thus establishes a novel framework. Its distinctive feature is to quantitatively unite modeling and experimental data with a normative theory. If successful, such a framework allows one to explicitly specify a mathematically precise objective for a visual or cognitive function.</p>
<p>Qualitatively, the structure and – to a certain extent – the dynamics of our model are similar to models proposed by other studies
<xref ref-type="bibr" rid="pcbi.1002520-Hansen1">[28]</xref>
,
<xref ref-type="bibr" rid="pcbi.1002520-Li1">[29]</xref>
: elementary feature detectors are linked by connections which are positive (i.e., enhancing activation) if the features are aligned collinearly in retinal space, and activation of feature detectors is propagated in parallel to other detectors over these links. We also confirmed that the shape of the interactions emerging from our parameter search is indeed very close to physiological data
<xref ref-type="bibr" rid="pcbi.1002520-Kapadia1">[23]</xref>
. A complementary idea was explored by Geisler et al.
<xref ref-type="bibr" rid="pcbi.1002520-Geisler2">[16]</xref>
: instead of finding the ‘right’ shape of the AF by fitting a model to empirical data, they derived the corresponding statistics from natural images by computing the edge co–occurrence likelihoods from contours traced by human observers. As explained in the
<xref ref-type="sec" rid="s2">Results</xref>
section, our AF has about the same properties as the edge co–occurrence statistics if it is properly rescaled for smaller edge distances. Geisler et al. also used the AF in a probabilistic model to predict human performance in a contour integration paradigm. In general, these predictions were qualitatively very good, but their model did not always fully reach human performance. In an interesting extension of this work, Geisler and Perry asked human observers whether two edges at the border of an occluder belong to the same or different physical contours
<xref ref-type="bibr" rid="pcbi.1002520-Geisler1">[15]</xref>
. In this task, the subjects achieved a performance similar to an ideal observer constructed from the statistics of labeled contours in natural images.</p>
<p>One major advantage of our specific framework is that it extends beyond matching performance only.
<xref ref-type="fig" rid="pcbi-1002520-g006">
<bold>Fig. 6</bold>
A</xref>
clearly exemplifies that many different models
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e273.jpg" mimetype="image"></inline-graphic>
</inline-formula>
can meet this benchmark. In consequence, one particular model reproducing performance might not tell us very much about the real structures, parameters, and dynamics that underlie contour integration in the brain. By requiring a model to reproduce systematic deviations from this average behaviour for individual stimuli, we exploit an additional source of information (beyond average task difficulty) which helps to narrow down the plethora of models considerably (
<xref ref-type="fig" rid="pcbi-1002520-g006">
<bold>Fig. 6</bold>
B</xref>
). To pinpoint the essence of this idea: by demanding that models deviate from ideal behaviour and make the same systematic errors as humans, we make them explain the data better. In the specific setting used in this work, systematic errors are explained by both the decrease in edge visibility with eccentricity and the particular parameters (shape) of the AF. Neither of these two factors alone can explain the excess correlations between human observers to their full extent. In some examples where observers made errors, the target contour was located in the periphery, while a smaller ‘chance’ contour near the fovea was apparently more salient. In other cases where consistent errors among observers showed up, it was difficult to unambiguously determine the particular stimulus feature that led to the erroneous decisions. A more thorough analysis will require determining where the observers ‘look’ when they search for a contour.</p>
<p>A second advantage of our approach is that during the search for a ‘better’ model, we remain within a class of probabilistic models with a well–defined objective: computing the probability of edge elements to belong to contours, whose statistical properties are quantified by the model's parameters
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e274.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. By reproducing behaviour to the maximum possible extent, we finally arrive at a model which is no longer optimal with respect to the arbitrarily chosen task, but optimal with respect to a similar task and under the given constraints. In a broader context, our observations fit well with similar probabilistic frameworks that explain visual illusions, i.e. apparent failures of our visual system, by the idea that the current task does not match the objective or priors of visual information processing
<xref ref-type="bibr" rid="pcbi.1002520-Williams1">[13]</xref>
.</p>
<p>In addition to this major conceptual point, our work sheds further light on the nature of contour integration: First, the dynamics of the iterative integration process in the model looks very similar to the performance of human observers as a function of SOA (
<xref ref-type="fig" rid="pcbi-1002520-g005">
<bold>Fig. 5</bold>
A,C</xref>
and
<xref ref-type="fig" rid="pcbi-1002520-g005">
<bold>Fig. 5</bold>
B,D</xref>
). A similar dynamics, which also saturates after only a few recurrent cycles, has been observed in a biophysically realistic model with long–range excitatory interactions
<xref ref-type="bibr" rid="pcbi.1002520-Hansen1">[28]</xref>
. Somewhat counterintuitively, our model also reproduces the experimental fact that longer contours are perceived earlier, if decisions are based on the outputs of each iteration. These results underline that our framework is compatible with iterative information integration in the visual system. Second, we found that human observers perform considerably above chance level even if the contour was presented for only
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e275.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. This result indicates that contour integration is a fast perceptual process
<xref ref-type="bibr" rid="pcbi.1002520-Mandon1">[30]</xref>
, which further constrains putative neural mechanisms.</p>
<p>We expect that our model generalizes well beyond explaining only our experimental data and reproducing specific observations
<xref ref-type="bibr" rid="pcbi.1002520-Kapadia1">[23]</xref>
,
<xref ref-type="bibr" rid="pcbi.1002520-Foley1">[24]</xref>
because the almost perfect match between model and human behavior does not result from overfitting: We used only four independent parameters to explain decisions for more than 2000 stimuli from 42 different contour ensembles. Moreover, we obtain more information from the fit than we initially put into the model: The ideal contour observer (which is the ‘inversion’ of the respective contour–generating process for each contour ensemble) is actually the worst at explaining the human data. Only after including realistic constraints, such as restricting the integrator to one association field, were we able to reproduce our psychophysical data. It will thus be interesting to see how our optimal model will perform on different contour integration problems. For example, comparison with the Geisler et al. data
<xref ref-type="bibr" rid="pcbi.1002520-Geisler2">[16]</xref>
suggests that for smaller mean edge distances (i.e., denser Gabor fields) than in our experiment, the angular parameters of the AF have to be rescaled. Furthermore, the model currently does not capture effects where cues in features other than the relative alignment of edges modulate contour integration. Such features include colour
<xref ref-type="bibr" rid="pcbi.1002520-Mathes1">[31]</xref>
,
<xref ref-type="bibr" rid="pcbi.1002520-Beaudot1">[32]</xref>
, contrast
<xref ref-type="bibr" rid="pcbi.1002520-Beaudot2">[33]</xref>
, or spatial frequency
<xref ref-type="bibr" rid="pcbi.1002520-Dakin1">[7]</xref>
,
<xref ref-type="bibr" rid="pcbi.1002520-Persike1">[34]</xref>
. It is not clear whether varying other features of the background or of the distracter elements will impair contour integration
<xref ref-type="bibr" rid="pcbi.1002520-Dakin1">[7]</xref>
, or have only a negligible effect on performance
<xref ref-type="bibr" rid="pcbi.1002520-Persike2">[35]</xref>
. A natural extension of our model would use an extended parametrization (i.e. orientation, spatial frequency, and colour instead of orientation only), and introduce interactions between similar feature combinations, thus mimicking the physiological observation that neurons with similar response properties have a higher probability of being connected.</p>
<p>A unique feature of the model is the directionality of its interactions, which is inherited from the directedness of the contour generation process
<xref ref-type="bibr" rid="pcbi.1002520-Williams1">[13]</xref>
. For understanding its implications, consider for example contour integration along a straight, horizontal sequence of aligned horizontal edge elements: In a ‘classical’ contour integration model, each edge activates one feature detector with preferred horizontal orientation. Activation from this detector then symmetrically spreads to the left and to the right to the neighbouring detectors (bidirectional interactions). In contrast, in the probabilistic model each edge activates
<italic>two</italic>
detectors with the same preferred horizontal orientation. One of these detectors will then spread activation only to the left neighbouring detectors, while the other detector will spread activation only to the right neighbouring detectors. There is no crosstalk between the two detectors. Hence, contour integration is performed by two independent processes propagating in parallel in two opposing directions along the contour (unidirectional interactions).</p>
<p>From a computational point of view, such unidirectional interactions are more efficient because they avoid false positives in contour detection
<xref ref-type="bibr" rid="pcbi.1002520-Braun1">[36]</xref>
. For example, they effectively suppress ‘contour’ configurations with 180-degree changes in direction, such as two circle segments attached tangentially at one of their end points. In fact, comparisons of further simulations (not shown) with our psychophysical data suggest that the bidirectional couplings normally used in contour integration models cannot even explain human contour integration
<italic>performance</italic>
<xref ref-type="bibr" rid="pcbi.1002520-Schinkel1">[26]</xref>
,
<xref ref-type="bibr" rid="pcbi.1002520-SchinkelBielefeld1">[27]</xref>
.</p>
<p>Is our contour integration model biophysically plausible? The interactions needed to perform contour integration could be mediated by orientation–specific connections between cortical neurons. Examples of such connections, which preferentially link neurons with similar orientation selectivity, are long–range horizontal axons within primary visual cortex (V1)
<xref ref-type="bibr" rid="pcbi.1002520-Bosking1">[37]</xref>
<xref ref-type="bibr" rid="pcbi.1002520-Stettler1">[39]</xref>
, or backprojections from secondary visual cortex (V2) to V1
<xref ref-type="bibr" rid="pcbi.1002520-Stettler1">[39]</xref>
,
<xref ref-type="bibr" rid="pcbi.1002520-Shmuel1">[40]</xref>
. Our results show that one single, ‘general purpose’ association field is sufficient to quantitatively explain human behavior in response to stimuli generated from multiple AFs. Thus, in principle, only one ‘set’ of cortical long–range axons with a geometry matching the AF of our optimal model is sufficient to perform contour integration in the brain, if this structure is used iteratively in a recurrent computation. However, the variety of geometries and length scales associated with these connections in different animals makes it currently difficult to determine the real extent to which they support contour integration. In addition, implementing unidirectional interactions anatomically would require two distinct neural populations with similar preferred orientations, but asymmetric dendritic trees. Such a structure currently seems to be in conflict with experimental evidence (homogeneous populations, largely symmetric dendritic trees, as shown e.g. in
<xref ref-type="bibr" rid="pcbi.1002520-Bosking1">[37]</xref>
), although its existence cannot be fully excluded on the basis of these studies.</p>
<p>Regarding the dynamics of contour integration, the probabilistic model performs inference by iteratively using parallel computations that can easily be emulated by neural networks. For example, the matrix–vector multiplication
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e276.jpg" mimetype="image"></inline-graphic>
</inline-formula>
can be re–interpreted as the summation of pre–synaptic afferents
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e277.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, weighted by the synaptic efficacies
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e278.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, on the dendrites of a post–synaptic neuron
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e279.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. These interactions lead to a modulation of an edge detector's activity by the presence of other edges in its neighborhood. It is known that, starting from V1, neural responses indeed become modulated by the context surrounding a stimulus within their ‘classical’ receptive field
<xref ref-type="bibr" rid="pcbi.1002520-Sillito1">[41]</xref>
,
<xref ref-type="bibr" rid="pcbi.1002520-Levitt1">[42]</xref>
. This modulation can enhance firing rates for collinear edge configurations
<xref ref-type="bibr" rid="pcbi.1002520-Kapadia2">[43]</xref>
<xref ref-type="bibr" rid="pcbi.1002520-Li3">[45]</xref>
, and can cause neurons to be active also in response to illusory contours
<xref ref-type="bibr" rid="pcbi.1002520-Maertens1">[46]</xref>
. One problem with the accumulated evidence for contextual modulations as putative signatures of contour integration processes in cortex is that it is controversial. For instance, substantial suppressive effects for collinear edge arrangements
<xref ref-type="bibr" rid="pcbi.1002520-Polat1">[47]</xref>
,
<xref ref-type="bibr" rid="pcbi.1002520-Walker1">[48]</xref>
have also been observed. In addition, firing rate modulations are often weak or critically depend on the exact stimulus configuration, which stands in contrast to the strong and robust effects established by psychophysical studies. Despite this sometimes confusing empirical evidence,
<xref ref-type="fig" rid="pcbi-1002520-g008">
<bold>Fig. 8</bold>
A</xref>
demonstrates that the modulation of activity induced by neighboring contour elements in the optimal model matches electrophysiological data very well. In addition, neural dynamics might also provide a more realistic mechanism for establishing unidirectional contour integration than the directed anatomical substrate discussed above. Unidirectionality can be realized by volleys of activity which propagate along the neural populations activated by the contour's edges
<xref ref-type="bibr" rid="pcbi.1002520-Abeles1">[49]</xref>
. Refractoriness of neurons would be the basic mechanism ensuring that activation waves cannot easily reverse direction. One possibility to test our prediction of a unidirectional process underlying contour integration is to perform massively parallel recordings in animals performing contour integration. Focusing on the activation dynamics of neurons whose receptive fields cover distinct elements of the contour would allow one to directly observe activity waves propagating in a certain direction. An alternative, but more indirect, test could focus on the specific predictions (
<xref ref-type="fig" rid="pcbi-1002520-g009">
<bold>Fig. 9</bold>
</xref>
) made by models with different coupling symmetries. Here it would be sufficient to record and compare the neural activity of neuronal populations representing central edges and starting/ending edges of contours, respectively. Experimentally, this scenario is technically less demanding as it only requires single–unit recordings. Unidirectionality then predicts the highest activity at starting/ending edges, while bidirectional models predict the opposite behaviour. Although no experiment has yet addressed this issue, recent neurophysiological recordings of V1 neurons
<xref ref-type="bibr" rid="pcbi.1002520-McManus1">[50]</xref>
which were stimulated by edge elements that were part of a contour show a very similar time course of activation: a strong transient response, followed by a dampened oscillation that relaxes into a sustained activation level.</p>
<p>A remarkable difference from more ‘standard’ neural networks
<xref ref-type="bibr" rid="pcbi.1002520-Hansen1">[28]</xref>
,
<xref ref-type="bibr" rid="pcbi.1002520-Li1">[29]</xref>
is that the afferent input (i.e., evidence from the stimulus) and the recurrent feedback (i.e., linking probabilities between edges) are combined multiplicatively instead of additively to produce a unit's output. The utility of such a non–linear operation for contour integration was indeed suggested by previous modeling work on feature integration
<xref ref-type="bibr" rid="pcbi.1002520-Grossberg1">[51]</xref>
. It is known that non–linear computations on synaptic inputs are performed as early as in the LGN and primary visual cortex
<xref ref-type="bibr" rid="pcbi.1002520-Carandini1">[52]</xref>
<xref ref-type="bibr" rid="pcbi.1002520-Salinas1">[54]</xref>
, and it is possible that these non–linearities provide the substrate required to compute the AND-like operations necessary for implementing Bayesian inference.</p>
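<p>The AND-like character of a multiplicative combination, as opposed to an additive one, can be made explicit with a minimal numerical example. The toy evidence and context values below are invented for illustration and do not correspond to fitted model quantities.</p>
<preformat>
# Sketch contrasting additive vs. multiplicative combination of feedforward
# evidence 'b' and recurrent contextual input 'r' for a single unit.
import numpy as np

b = np.array([0.0, 1.0, 0.0, 1.0])    # stimulus evidence absent/present
r = np.array([0.0, 0.0, 1.0, 1.0])    # recurrent (contextual) support absent/present

additive = b + r                      # the unit responds to either input alone
multiplicative = b * r                # AND-like: both inputs are required

for bi, ri, a, m in zip(b, r, additive, multiplicative):
    print(f"evidence={bi:.0f} context={ri:.0f} -> additive={a:.0f} multiplicative={m:.0f}")
</preformat>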
<p>Evolution has adapted information processing in the brain to serve many objectives still awaiting discovery. While for some simple and very fundamental tasks, experiments could demonstrate that perception can be described as optimal inference
<xref ref-type="bibr" rid="pcbi.1002520-Ernst1">[11]</xref>
,
<xref ref-type="bibr" rid="pcbi.1002520-Krding1">[12]</xref>
, there are many reports from psychophysics suggesting that the visual system does not operate optimally. The notion of optimality, however, (a) is relative to some external criterion (i.e., the task design) that is not necessarily evolutionarily relevant, and (b) needs to take constraints into account. These considerations might have prohibited the application of normative approaches to more complex visual functions, such as the perception of objects. In our work we overcame these difficulties by starting with a probabilistic framework whose basic mathematical structures were motivated by known properties of human contour integration. This framework provides both a task design for experiments or simulations and an initial suggestion for a computational model. Introducing realistic constraints and fitting the model's structure to human decisions finally revealed that human contour integration, too, can be well described as optimal inference on a sensory stimulus. Moreover, our results demonstrate that such an integrative approach may generate fundamental predictions about neural mechanisms that are difficult to obtain in a purely bottom–up modelling approach.</p>
</sec>
<sec sec-type="methods" id="s4">
<title>Methods</title>
<sec id="s4a">
<title>Generative model for contours</title>
<p>We adapted the framework by Williams and Thornber
<xref ref-type="bibr" rid="pcbi.1002520-Williams1">[13]</xref>
to contours of finite length which are generated by a Markov process: Let
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e280.jpg" mimetype="image"></inline-graphic>
</inline-formula>
denote an edge element with associated direction
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e281.jpg" mimetype="image"></inline-graphic>
</inline-formula>
at coordinates (
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e282.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e283.jpg" mimetype="image"></inline-graphic>
</inline-formula>
), in two-dimensional space. If a contour passes through edge
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e284.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e285.jpg" mimetype="image"></inline-graphic>
</inline-formula>
defines the probability that the contour will pass next through edge
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e286.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(transition probability or ‘association field’). Contours of length
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e287.jpg" mimetype="image"></inline-graphic>
</inline-formula>
are generated by first positioning a starting edge at a random position, and then sampling a sequence of
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e288.jpg" mimetype="image"></inline-graphic>
</inline-formula>
further edges from the association field
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e289.jpg" mimetype="image"></inline-graphic>
</inline-formula>
.</p>
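<p>The following Python sketch outlines this generative Markov process: a starting edge is placed at a random position and direction, and each further edge is sampled conditioned on its predecessor. The concrete sampler used here (a von Mises perturbation of the current direction and a fixed step length) is a placeholder and does not reproduce the parametrized association field defined in the next subsection.</p>
<preformat>
# Minimal sketch of the generative Markov process for contours.
import numpy as np

rng = np.random.default_rng(0)

def sample_contour(L, mean_dist=1.0, kappa=8.0):
    """Return an (L, 3) array of edges (x, y, direction) forming one contour."""
    x, y = rng.uniform(0.0, 10.0, size=2)      # random starting position
    phi = rng.uniform(0.0, 2.0 * np.pi)        # random starting direction
    edges = [(x, y, phi)]
    for _ in range(L - 1):
        phi = rng.vonmises(phi, kappa)         # next direction sampled around the current one
        x, y = x + mean_dist * np.cos(phi), y + mean_dist * np.sin(phi)
        edges.append((x, y, phi))
    return np.array(edges)

print(sample_contour(L=8).round(2))
</preformat>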
</sec>
<sec id="s4b">
<title>Defining an association field</title>
<p>For a meaningful definition of contours,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e290.jpg" mimetype="image"></inline-graphic>
</inline-formula>
should possess a translational symmetry (same probability for creating a specific edge configuration at different locations), a rotational symmetry (same probability for creating an identical, but rotated contour), and a reversal symmetry (same probability for creating a contour with the reverse sequence of edges)
<xref ref-type="bibr" rid="pcbi.1002520-Williams1">[13]</xref>
. These symmetries effectively reduce the six-dimensional conditional probability distribution
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e291.jpg" mimetype="image"></inline-graphic>
</inline-formula>
to a three-dimensional function
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e292.jpg" mimetype="image"></inline-graphic>
</inline-formula>
which depends on the parameters
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e293.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e294.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. For two edges
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e295.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, these parameters are given by the coordinate transformation
<disp-formula>
<graphic xlink:href="pcbi.1002520.e296"></graphic>
<label>(6)</label>
</disp-formula>
<disp-formula>
<graphic xlink:href="pcbi.1002520.e297"></graphic>
<label>(7)</label>
</disp-formula>
<disp-formula>
<graphic xlink:href="pcbi.1002520.e298"></graphic>
<label>(8)</label>
</disp-formula>
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e299.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is the Euclidean distance between edges
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e300.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e301.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e302.jpg" mimetype="image"></inline-graphic>
</inline-formula>
the angle under which an observer at edge
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e303.jpg" mimetype="image"></inline-graphic>
</inline-formula>
looking in the direction
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e304.jpg" mimetype="image"></inline-graphic>
</inline-formula>
views edge
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e305.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e306.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is the difference between the directions of edges
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e307.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e308.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(see
<xref ref-type="fig" rid="pcbi-1002520-g002">
<bold>Fig. 2</bold>
A</xref>
).</p>
<p>We defined
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e309.jpg" mimetype="image"></inline-graphic>
</inline-formula>
as a product of a radial part
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e310.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and an angular part
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e311.jpg" mimetype="image"></inline-graphic>
</inline-formula>
via
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e312.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. The radial part will be described in the next subsection. The angular part was parametrized as a product of von–Mises functions
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e313.jpg" mimetype="image"></inline-graphic>
</inline-formula>
that correspond to Gaussian distributions defined on a circular support,
<disp-formula>
<graphic xlink:href="pcbi.1002520.e314"></graphic>
<label>(9)</label>
</disp-formula>
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e315.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is the circular mean,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e316.jpg" mimetype="image"></inline-graphic>
</inline-formula>
a concentration parameter (length scale), and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e317.jpg" mimetype="image"></inline-graphic>
</inline-formula>
the angular variable.
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e318.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is the modified Bessel function of the first kind of order
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e319.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. By the transformation
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e320.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e321.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is related to the width
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e322.jpg" mimetype="image"></inline-graphic>
</inline-formula>
of a Gaussian distribution. The parametrization of
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e323.jpg" mimetype="image"></inline-graphic>
</inline-formula>
then reads
<disp-formula>
<graphic xlink:href="pcbi.1002520.e324"></graphic>
<label>(10)</label>
</disp-formula>
<disp-formula>
<graphic xlink:href="pcbi.1002520.e325"></graphic>
<label>(11)</label>
</disp-formula>
We used
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e326.jpg" mimetype="image"></inline-graphic>
</inline-formula>
to abbreviate a normalisation factor given by
<disp-formula>
<graphic xlink:href="pcbi.1002520.e327"></graphic>
<label>(12)</label>
</disp-formula>
This particular choice of
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e328.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(
<xref ref-type="fig" rid="pcbi-1002520-g002">
<bold>Fig. 2</bold>
B</xref>
, centre) implements two important principles for association fields, namely (I) link probability decreases on a length scale
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e329.jpg" mimetype="image"></inline-graphic>
</inline-formula>
with the distance from a co-circular edge configuration with
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e330.jpg" mimetype="image"></inline-graphic>
</inline-formula>
<xref ref-type="bibr" rid="pcbi.1002520-Parent1">[19]</xref>
, and (II) link probability decreases on a length scale
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e331.jpg" mimetype="image"></inline-graphic>
</inline-formula>
with increasing curvature
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e332.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(for co-circular edge configurations with inter-edge distance
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e333.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e334.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is directly related to
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e335.jpg" mimetype="image"></inline-graphic>
</inline-formula>
via
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e336.jpg" mimetype="image"></inline-graphic>
</inline-formula>
).</p>
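<p>A small Python sketch may help to make the von Mises building block of eqn. (9) and the product form of the angular part concrete. The means, concentration parameters, and the co-circularity convention used here (centering the direction difference on twice the viewing angle) are placeholders chosen for illustration, not the fitted parameters of the optimal model.</p>
<preformat>
# Sketch of a von Mises function and of an angular association field formed as a
# product of two such functions (one over the viewing angle, one over the
# direction difference between the two edges).
import numpy as np

def von_mises(x, mu, kappa):
    """Circular 'Gaussian': exp(kappa*cos(x-mu)) / (2*pi*I0(kappa))."""
    return np.exp(kappa * np.cos(x - mu)) / (2.0 * np.pi * np.i0(kappa))

def angular_af(beta, dphi, kappa_beta=4.0, kappa_dphi=4.0):
    # one common co-circularity convention: the preferred direction difference is 2*beta
    return von_mises(beta, 0.0, kappa_beta) * von_mises(dphi, 2.0 * beta, kappa_dphi)

beta = np.linspace(-np.pi, np.pi, 5)
print(np.round(angular_af(beta, 2.0 * beta), 3))   # largest for straight/co-circular pairs
</preformat>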
</sec>
<sec id="s4c">
<title>Hiding contours among distracters</title>
<p>The idea of this contour integration experiment is to hide a contour among randomly oriented distractor elements, and to force a human observer to use the relative alignment between the edges as the only cue to find the contour. This implies removing all other hints about the location of the contour, such as element distances or densities, from a stimulus. For this purpose we employed an improved procedure similar to the algorithm proposed by Braun
<xref ref-type="bibr" rid="pcbi.1002520-Braun1">[36]</xref>
: Starting from a regular positioning of edge elements filling the background around a contour, these elements are subjected to a Brownian motion until a dynamical steady state is reached. Typically this procedure yields an edge distance distribution
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e337.jpg" mimetype="image"></inline-graphic>
</inline-formula>
between background elements which differs from the contour edge distance distribution given by
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e338.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. We therefore replaced our initial
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e339.jpg" mimetype="image"></inline-graphic>
</inline-formula>
by
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e340.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, and repeated the whole procedure iteratively until the (I) background-background edge distance distribution, (II) background-contour edge distance distribution, and (III) contour-contour edge distance distribution were identical.</p>
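<p>Schematically, the matching procedure can be written as a loop that jitters the background elements and monitors their nearest-neighbour distance statistics, as in the Python sketch below. The grid size, jitter amplitude, and number of steps are placeholders; the actual procedure iterates until the three distance distributions listed above are identical.</p>
<preformat>
# Schematic sketch of the hiding procedure: Brownian jitter of a regular grid of
# background elements, monitored via nearest-neighbour distances.
import numpy as np

rng = np.random.default_rng(1)

def nn_dist(pts):
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return d.min(axis=1)

bg = np.stack(np.meshgrid(np.arange(10.0), np.arange(10.0)), -1).reshape(-1, 2)
print("initial nn distances: mean", nn_dist(bg).mean().round(2), "std", nn_dist(bg).std().round(2))

for _ in range(100):
    bg = bg + rng.normal(0.0, 0.03, bg.shape)   # Brownian motion of background elements

print("jittered nn distances: mean", nn_dist(bg).mean().round(2), "std", nn_dist(bg).std().round(2))
</preformat>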
<p>When generating a contour with large curvature, it could happen that two distant edges overlap when they are rendered for stimulus display as finite-width Gabors. We avoided this problem by randomly permuting the sequence of relative angles
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e341.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e342.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and distances
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e343.jpg" mimetype="image"></inline-graphic>
</inline-formula>
between subsequent contour elements until all overlaps vanished. With this policy we avoid giving unwanted cues to the location of a contour, while at the same time preserving the pairwise edge statistics of contour ensembles implied by a specific association field.</p>
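<p>The overlap-removal policy can be sketched as follows: the contour is rebuilt from its sequence of relative (angle, distance) steps, and the order of the steps is re-shuffled until all rendered elements keep a minimum separation. The chain reconstruction, the separation threshold, and the attempt limit below are illustrative choices; because only the order of the steps changes, the pairwise step statistics are preserved.</p>
<preformat>
# Sketch of the overlap-removal policy for strongly curved contours.
import numpy as np

rng = np.random.default_rng(2)

def build(start, phi0, turns, dists):
    """Reconstruct edge positions from a start pose and relative (turn, distance) steps."""
    x, y, phi = start[0], start[1], phi0
    pts = [(x, y)]
    for dphi, d in zip(turns, dists):
        phi += dphi
        x, y = x + d * np.cos(phi), y + d * np.sin(phi)
        pts.append((x, y))
    return np.array(pts)

def min_separation(pts):
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return d.min()

turns = rng.vonmises(0.0, 3.0, 9)      # relative angles between subsequent elements
dists = np.full(9, 1.0)                # relative distances between subsequent elements
perm = np.arange(9)
for attempt in range(1000):
    if min_separation(build((0.0, 0.0), 0.0, turns[perm], dists[perm])) >= 0.5:
        break
    perm = rng.permutation(9)          # re-shuffle the step sequence and try again
print("accepted step ordering after", attempt + 1, "attempt(s):", perm)
</preformat>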
</sec>
<sec id="s4d">
<title>Detecting contours by inference</title>
<p>In our paradigm, a contour is placed either in the left or in the right hemifield of a stimulus, and hidden among distracters. An observer then has to decide in which hemifield the contour has been placed (two-alternative forced choice). We now derive an optimal contour observer for this situation:</p>
<p>A stimulus
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e344.jpg" mimetype="image"></inline-graphic>
</inline-formula>
decomposes into a part
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e345.jpg" mimetype="image"></inline-graphic>
</inline-formula>
on the left, and a part
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e346.jpg" mimetype="image"></inline-graphic>
</inline-formula>
on the right hemifield. Each part
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e347.jpg" mimetype="image"></inline-graphic>
</inline-formula>
consists of a set of edge elements, in which any combination of
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e348.jpg" mimetype="image"></inline-graphic>
</inline-formula>
edges could correspond to the hidden contour. We call a specific edge combination a
<italic>contour configuration</italic>
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e349.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, which is an ordered set of
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e350.jpg" mimetype="image"></inline-graphic>
</inline-formula>
edge elements. Index
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e351.jpg" mimetype="image"></inline-graphic>
</inline-formula>
runs from
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e352.jpg" mimetype="image"></inline-graphic>
</inline-formula>
to
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e353.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, which is the total number of all different, putative contour configurations in stimulus part
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e354.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. Note that different configurations
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e355.jpg" mimetype="image"></inline-graphic>
</inline-formula>
may be composed of the same edge elements, but in a different ordering.</p>
<p>We now compute the probability that a contour placed into
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e356.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is contained in stimulus part
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e357.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. To simplify notation, we denote the contour configuration we are looking for with
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e358.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, and the (unordered) set of background elements with
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e359.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. We have to sum over all
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e360.jpg" mimetype="image"></inline-graphic>
</inline-formula>
possible contour configurations:
<disp-formula>
<graphic xlink:href="pcbi.1002520.e361"></graphic>
<label>(13)</label>
</disp-formula>
The right hand side can be expressed in terms of the likelihood
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e362.jpg" mimetype="image"></inline-graphic>
</inline-formula>
that configuration
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e363.jpg" mimetype="image"></inline-graphic>
</inline-formula>
was obtained from the generative contour model,
<disp-formula>
<graphic xlink:href="pcbi.1002520.e364"></graphic>
<label>(14)</label>
</disp-formula>
Next we express the likelihood
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e365.jpg" mimetype="image"></inline-graphic>
</inline-formula>
in terms of the association field
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e366.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. With
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e367.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e368.jpg" mimetype="image"></inline-graphic>
</inline-formula>
denoting two arbitrary edges in stimulus part
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e369.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, we define the components
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e370.jpg" mimetype="image"></inline-graphic>
</inline-formula>
of likelihood matrices
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e371.jpg" mimetype="image"></inline-graphic>
</inline-formula>
by sampling from the association field
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e372.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. For a specific configuration
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e373.jpg" mimetype="image"></inline-graphic>
</inline-formula>
where the index sequence
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e374.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e375.jpg" mimetype="image"></inline-graphic>
</inline-formula>
defines the succession of edges,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e376.jpg" mimetype="image"></inline-graphic>
</inline-formula>
can be written as
<disp-formula>
<graphic xlink:href="pcbi.1002520.e377"></graphic>
<label>(15)</label>
</disp-formula>
In the final step, we split the sum over all edge configurations
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e378.jpg" mimetype="image"></inline-graphic>
</inline-formula>
into a sum over all edges
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e379.jpg" mimetype="image"></inline-graphic>
</inline-formula>
where a contour can start, and a sum over all edge configurations that have the same starting edge
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e380.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<disp-formula>
<graphic xlink:href="pcbi.1002520.e381"></graphic>
<label>(16)</label>
</disp-formula>
with the appropriate normalization terms from the denominator in eqn. (14). Here we introduced
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e382.jpg" mimetype="image"></inline-graphic>
</inline-formula>
to denote the total number of edge elements in hemifield
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e383.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. If the probability for a contour in a specific hemifield is 1/2, the ideal contour integrator will estimate that the contour was placed on the left hemifield if
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e384.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. Thus eqn. (16) corresponds to eqn. (2).</p>
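<p>The structure of this computation can be sketched in a few lines of Python: a pairwise likelihood matrix is built from an association field surrogate, iterated to accumulate evidence over all putative contour configurations of a given length, and the hemifield with the larger total evidence is chosen. The helper names, the Gaussian-like surrogate for the association field, and the omission of the normalization terms of eqn. (14) are simplifications made for illustration.</p>
<preformat>
# Sketch of the core recursion behind eqns. (15)-(16).
import numpy as np

rng = np.random.default_rng(3)

def likelihood_matrix(edges, sigma_d=1.5, sigma_a=0.5):
    """P[i, j]: surrogate likelihood that edge j follows edge i on a contour."""
    xy, phi = edges[:, :2], edges[:, 2]
    d = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1)
    dphi = np.angle(np.exp(1j * (phi[None, :] - phi[:, None])))   # wrapped direction difference
    P = np.exp(-d**2 / (2 * sigma_d**2)) * np.exp(-dphi**2 / (2 * sigma_a**2))
    np.fill_diagonal(P, 0.0)
    return P

def hemifield_evidence(edges, L):
    P = likelihood_matrix(edges)
    l = np.ones(len(edges))            # start: every edge could begin a 1-element 'contour'
    for _ in range(L - 1):
        l = P @ l                      # extend every putative contour by one element
    return l.sum()                     # sum over all possible starting edges

def toy_edges(n):
    return np.hstack([rng.uniform(0, 10, (n, 2)), rng.uniform(0, 2 * np.pi, (n, 1))])

# neither toy field contains an embedded contour; the decision only illustrates the mechanics
left, right = toy_edges(40), toy_edges(40)
decision = "left" if hemifield_evidence(left, L=6) > hemifield_evidence(right, L=6) else "right"
print("ideal-observer decision:", decision)
</preformat>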
</sec>
<sec id="s4e">
<title>Stimulus sets</title>
<p>For the psychophysical experiments and model simulations, we used
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e385.jpg" mimetype="image"></inline-graphic>
</inline-formula>
different parameter sets
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e386.jpg" mimetype="image"></inline-graphic>
</inline-formula>
degrees for the shape of the association field eqn. (11). We used
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e387.jpg" mimetype="image"></inline-graphic>
</inline-formula>
different numbers of contour elements (
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e388.jpg" mimetype="image"></inline-graphic>
</inline-formula>
) while holding the length of the contour approximately constant, which causes the average inter-element distance
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e389.jpg" mimetype="image"></inline-graphic>
</inline-formula>
to be proportional to
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e390.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. All combinations of these parameters gave a total of
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e391.jpg" mimetype="image"></inline-graphic>
</inline-formula>
stimulus conditions.</p>
<p>For each stimulus condition,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e392.jpg" mimetype="image"></inline-graphic>
</inline-formula>
contours (targets) were generated and each embedded into randomly oriented background elements according to the procedure outlined above (with
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e393.jpg" mimetype="image"></inline-graphic>
</inline-formula>
contours on the left hemifield and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e394.jpg" mimetype="image"></inline-graphic>
</inline-formula>
contours on the right hemifield). The inter-edge distance statistics approximately followed an exponentially decaying function. While our procedure suppresses all first-order cues from the inter-edge distance statistics, there is a remote possibility that observers might use second- or higher-order cues to locate the contour. This problem was avoided by generating a second contour path in the hemifield opposite to the target contour, but with randomly chosen orientations for the edge elements. For each of these stimuli, masks were generated with edges of random orientations at the same positions.</p>
<p>For stimulus presentation, contour and mask stimuli were rendered by placing Gabor elements
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e395.jpg" mimetype="image"></inline-graphic>
</inline-formula>
with spatial extent
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e396.jpg" mimetype="image"></inline-graphic>
</inline-formula>
degrees visual angle (8 pixels), wavelength
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e397.jpg" mimetype="image"></inline-graphic>
</inline-formula>
degrees visual angle (16 pixels), random phase
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e398.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, and orientation
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e399.jpg" mimetype="image"></inline-graphic>
</inline-formula>
centered at the corresponding positions
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e400.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e401.jpg" mimetype="image"></inline-graphic>
</inline-formula>
.
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e402.jpg" mimetype="image"></inline-graphic>
</inline-formula>
was defined as
<disp-formula>
<graphic xlink:href="pcbi.1002520.e403"></graphic>
<label>(17)</label>
</disp-formula>
with
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e404.jpg" mimetype="image"></inline-graphic>
</inline-formula>
denoting the contrast and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e405.jpg" mimetype="image"></inline-graphic>
</inline-formula>
the mean background luminance. The mean distance between the contour elements for
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e406.jpg" mimetype="image"></inline-graphic>
</inline-formula>
corresponds to
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e407.jpg" mimetype="image"></inline-graphic>
</inline-formula>
degrees visual angle.</p>
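<p>As an illustration of this rendering step, the Python sketch below draws a single Gabor patch from a cosine carrier under a Gaussian envelope, scaled by the contrast and offset by the mean background luminance. The envelope width and carrier convention are not taken from eqn. (17) and should be read as assumptions.</p>
<preformat>
# Sketch of rendering one Gabor element in the spirit of eqn. (17).
import numpy as np

def gabor_patch(size=32, sigma=4.0, lam=16.0, theta=0.0, phase=0.0, c=0.9, l0=0.5):
    y, x = np.mgrid[-size // 2:size // 2, -size // 2:size // 2].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)       # carrier runs along orientation theta
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * xr / lam + phase)
    return l0 * (1.0 + c * envelope * carrier)       # contrast-scaled modulation around l0

patch = gabor_patch()
print(patch.shape, patch.min().round(2), patch.max().round(2))
</preformat>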
</sec>
<sec id="s4f">
<title>Psychophysical experiments</title>
<p>
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e408.jpg" mimetype="image"></inline-graphic>
</inline-formula>
subjects (2 female, mean age 29.4 years) participated in the two-alternative forced choice (2-AFC) experiment. All had normal or corrected-to-normal vision. They sat 80 cm in front of a gamma-corrected, 21–inch CRT screen (1152×864 pixels, 100 Hz refresh rate). Each trial started with the appearance of a small fixation spot in the display center.</p>
<p>After a fixation period of 1 s, a contour stimulus was presented, followed by its corresponding mask after a time
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e409.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(stimulus onset asynchronies,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e410.jpg" mimetype="image"></inline-graphic>
</inline-formula>
). Presentation of the mask lasted for 500 ms, followed by a blank screen. Observers were instructed to indicate the hemifield where the contour had been displayed (left or right) by pressing one of two response buttons during the blank period at the end of each trial. Responses occurring too early or too late (
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e411.jpg" mimetype="image"></inline-graphic>
</inline-formula>
) after mask offset were rejected. In summary, each observer had to detect
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e412.jpg" mimetype="image"></inline-graphic>
</inline-formula>
contours for each of the five SOAs. For assessing decision correlations between subjects, we used the same 2016 stimuli for different observers, but presented them in a randomly interleaved order which was different for each subject.</p>
</sec>
<sec id="s4g">
<title>Statistical methods</title>
<p>We evaluate the similarity between a model
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e413.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and our
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e414.jpg" mimetype="image"></inline-graphic>
</inline-formula>
human observers by comparing their mean contour detection performances and their individual decisions.</p>
<p>Let
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e415.jpg" mimetype="image"></inline-graphic>
</inline-formula>
denote the score of an observer
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e416.jpg" mimetype="image"></inline-graphic>
</inline-formula>
for one stimulus, with
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e417.jpg" mimetype="image"></inline-graphic>
</inline-formula>
if the hemifield with the contour was identified correctly, and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e418.jpg" mimetype="image"></inline-graphic>
</inline-formula>
otherwise. With
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e419.jpg" mimetype="image"></inline-graphic>
</inline-formula>
indexing one out of
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e420.jpg" mimetype="image"></inline-graphic>
</inline-formula>
stimulus conditions, the total number
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e421.jpg" mimetype="image"></inline-graphic>
</inline-formula>
of correctly detected contours is given by
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e422.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. The percentage
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e423.jpg" mimetype="image"></inline-graphic>
</inline-formula>
of conditions in which model
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e424.jpg" mimetype="image"></inline-graphic>
</inline-formula>
has an equal or higher contour detection performance is then given by
<disp-formula>
<graphic xlink:href="pcbi.1002520.e425"></graphic>
<label>(18)</label>
</disp-formula>
with
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e426.jpg" mimetype="image"></inline-graphic>
</inline-formula>
denoting the Heaviside function.
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e427.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is our first benchmark for comparing models to humans.</p>
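<p>Computed on toy numbers, this first benchmark reduces to a single averaged comparison per condition, as in the short sketch below. The detection counts used here are invented for illustration, and equality is counted in favour of the model (Heaviside of zero taken as one).</p>
<preformat>
# Sketch of the performance benchmark of eqn. (18) on invented detection counts.
import numpy as np

n_human = np.array([18, 15, 12, 20, 9])   # correct detections per condition (human, toy data)
n_model = np.array([17, 16, 12, 21, 8])   # correct detections per condition (model, toy data)

Q = np.mean(n_model >= n_human)           # fraction of conditions with equal or higher performance
print(f"model performs at least as well in {100 * Q:.0f}% of conditions")
</preformat>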
<p>Next we consider the number
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e428.jpg" mimetype="image"></inline-graphic>
</inline-formula>
of identical responses of two observers
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e429.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e430.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(which could either be two humans, or one human and one model
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e431.jpg" mimetype="image"></inline-graphic>
</inline-formula>
), in stimulus condition
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e432.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<disp-formula>
<graphic xlink:href="pcbi.1002520.e433"></graphic>
<label>(19)</label>
</disp-formula>
We will now compare this value to the probability
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e434.jpg" mimetype="image"></inline-graphic>
</inline-formula>
of obtaining
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e435.jpg" mimetype="image"></inline-graphic>
</inline-formula>
identical responses, provided that in total,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e436.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e437.jpg" mimetype="image"></inline-graphic>
</inline-formula>
contours were detected correctly by observer
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e438.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e439.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, respectively. The basic assumption here is that contour detection errors occur independently of the specific stimulus within a stimulus condition
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e440.jpg" mimetype="image"></inline-graphic>
</inline-formula>
.
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e441.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is easily computed by counting the number of ways in which
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e442.jpg" mimetype="image"></inline-graphic>
</inline-formula>
identical responses can be distributed among the
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e443.jpg" mimetype="image"></inline-graphic>
</inline-formula>
stimuli, while holding
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e444.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e445.jpg" mimetype="image"></inline-graphic>
</inline-formula>
constant. Introducing
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e446.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, which is related to the other variables via
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e447.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, we obtain
<disp-formula>
<graphic xlink:href="pcbi.1002520.e448"></graphic>
<label>(20)</label>
</disp-formula>
</p>
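<p>Under the independence assumption stated above, the counting argument behind Eq. (20) can be evaluated with the hypergeometric distribution: the number m of stimuli answered correctly by both observers is hypergeometrically distributed given N, n_A and n_B, and each value of m produces n_id = N - n_A - n_B + 2m identical responses (the stimuli both answered correctly plus the stimuli both answered incorrectly). The Python sketch below illustrates this reading; it is an example under the stated assumption, using the notation n_A, n_B, n_id introduced here, and is not the authors' code.</p>
<preformat>
import numpy as np
from scipy.stats import hypergeom

def p_identical_responses(N, n_A, n_B):
    """Distribution of the number of identical responses n_id of two
    observers who answered n_A and n_B of the N stimuli of one condition
    correctly, assuming errors fall on stimuli independently (cf. Eq. 20).
    Returns an array p with p[n_id] = probability of observing n_id."""
    p = np.zeros(N + 1)
    m_min = max(0, n_A + n_B - N)       # smallest possible overlap
    m_max = min(n_A, n_B)               # largest possible overlap
    for m in range(m_min, m_max + 1):
        n_id = N - n_A - n_B + 2 * m    # both correct plus both wrong
        # overlap m of two random subsets of sizes n_A and n_B out of N
        p[n_id] += hypergeom.pmf(m, N, n_A, n_B)
    return p

# example: N = 50 stimuli, 40 and 35 correct responses
# p = p_identical_responses(50, 40, 35)
</preformat>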
<p>We finally compare the expected distribution of identical responses
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e449.jpg" mimetype="image"></inline-graphic>
</inline-formula>
with the actually measured value
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e450.jpg" mimetype="image"></inline-graphic>
</inline-formula>
by computing the total probability
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e451.jpg" mimetype="image"></inline-graphic>
</inline-formula>
of obtaining a value
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e452.jpg" mimetype="image"></inline-graphic>
</inline-formula>
that is equal to or lower than
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e453.jpg" mimetype="image"></inline-graphic>
</inline-formula>
,
<disp-formula>
<graphic xlink:href="pcbi.1002520.e454"></graphic>
<label>(21)</label>
</disp-formula>
Because
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e455.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is a discrete probability distribution, we need to add a continuity correction for
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e456.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(last term). This term ensures that
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e457.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is on average 0.5 when
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e458.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is drawn from
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e459.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. The average of
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e460.jpg" mimetype="image"></inline-graphic>
</inline-formula>
over all possible observer combinations
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e461.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e462.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, and over all stimulus conditions
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e463.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, yields a number
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e464.jpg" mimetype="image"></inline-graphic>
</inline-formula>
which is larger than 0.5 if the observers' decisions are more strongly correlated than expected under our independence assumption. For the human observers in our experiment we obtain
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e465.jpg" mimetype="image"></inline-graphic>
</inline-formula>
according to
<disp-formula>
<graphic xlink:href="pcbi.1002520.e466"></graphic>
<label>(22)</label>
</disp-formula>
The decisions of a specific model
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e467.jpg" mimetype="image"></inline-graphic>
</inline-formula>
are compared to all human observers via
<disp-formula>
<graphic xlink:href="pcbi.1002520.e468"></graphic>
<label>(23)</label>
</disp-formula>
For judging the similarity between a model
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e469.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and human observers, we cannot directly compare
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e470.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e471.jpg" mimetype="image"></inline-graphic>
</inline-formula>
: as argued in the main text, humans are subject to decision noise, but the model
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e472.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is not. Therefore the model gives identical responses in all repetitions of the same trial, whereas two humans may give different responses even if they pursue the same objective. Hence if we find a model
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e473.jpg" mimetype="image"></inline-graphic>
</inline-formula>
which perfectly explains human behavior,
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e474.jpg" mimetype="image"></inline-graphic>
</inline-formula>
will always be larger than
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e475.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. To remove this statistical bias, we construct
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e476.jpg" mimetype="image"></inline-graphic>
</inline-formula>
hypothetical, noise-free human ‘prototypes’
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e477.jpg" mimetype="image"></inline-graphic>
</inline-formula>
from the majority vote of the
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e478.jpg" mimetype="image"></inline-graphic>
</inline-formula>
human observers
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e479.jpg" mimetype="image"></inline-graphic>
</inline-formula>
with
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e480.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. The decisions of these prototypes are thus given by
<disp-formula>
<graphic xlink:href="pcbi.1002520.e481"></graphic>
<label>(24)</label>
</disp-formula>
By comparing the prototypes
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e482.jpg" mimetype="image"></inline-graphic>
</inline-formula>
to their real human counterparts
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e483.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, using the statistical methods described above, we obtain
<disp-formula>
<graphic xlink:href="pcbi.1002520.e484"></graphic>
<label>(25)</label>
</disp-formula>
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e485.jpg" mimetype="image"></inline-graphic>
</inline-formula>
defines our second benchmark for comparing models to humans: If
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e486.jpg" mimetype="image"></inline-graphic>
</inline-formula>
approximates
<inline-formula>
<inline-graphic xlink:href="pcbi.1002520.e487.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, the corresponding model reproduces both the nature and the amount of correlations in human behavior.</p>
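<p>As a concrete illustration of these last steps, the Python sketch below computes the continuity-corrected probability c of Eq. (21) for one observer pair and one stimulus condition, and constructs majority-vote prototype decisions in the spirit of Eq. (24). Both functions are minimal examples under the assumptions stated above; in particular, the tie-breaking rule of the majority vote is an arbitrary choice made for this sketch and is not taken from the original text.</p>
<preformat>
import numpy as np

def midp_c(p_dist, n_id_measured):
    """Continuity-corrected probability of obtaining at most
    n_id_measured identical responses, given the distribution p_dist
    over n_id (cf. Eq. 21). The half-weight on the measured value makes
    the expectation of c equal to 0.5 when n_id is drawn from p_dist."""
    p_dist = np.asarray(p_dist, dtype=float)
    return float(np.sum(p_dist[:n_id_measured]) + 0.5 * p_dist[n_id_measured])

def majority_prototypes(decisions):
    """Majority-vote 'prototype' decisions (cf. Eq. 24).

    decisions: array of shape (n_observers, n_stimuli), entries 0/1
    coding the chosen hemifield. Ties are broken towards 1 here, an
    arbitrary convention for this sketch."""
    decisions = np.asarray(decisions)
    votes = decisions.mean(axis=0)
    return (votes >= 0.5).astype(int)

# c is then averaged over all observer pairs and stimulus conditions to
# obtain the benchmark values compared in Eqs. (22), (23) and (25).
</preformat>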
</sec>
</sec>
</body>
<back>
<ack>
<p>We would like to thank Jochen Braun for helpful discussions.</p>
</ack>
<fn-group>
<fn fn-type="conflict">
<p>The authors have declared that no competing interests exist.</p>
</fn>
<fn fn-type="financial-disclosure">
<p>This work was supported by the BMBF (Bernstein Gruppe Bremen, grant no. 01GQ0705). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.</p>
</fn>
</fn-group>
<ref-list>
<title>References</title>
<ref id="pcbi.1002520-Hess1">
<label>1</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hess</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Field</surname>
<given-names>D</given-names>
</name>
</person-group>
<year>1999</year>
<article-title>Integration of contours: new insights.</article-title>
<source>Trends Cogn Sci</source>
<volume>3</volume>
<fpage>480</fpage>
<lpage>486</lpage>
<pub-id pub-id-type="pmid">10562727</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1002520-Kovacs1">
<label>2</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kovacs</surname>
<given-names>I</given-names>
</name>
</person-group>
<year>1996</year>
<article-title>Gestalten of today: early processing of visual contours and surfaces.</article-title>
<source>Behav Brain Res</source>
<volume>82</volume>
<fpage>1</fpage>
<lpage>11</lpage>
<pub-id pub-id-type="pmid">9021065</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1002520-Graham1">
<label>3</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Graham</surname>
<given-names>N</given-names>
</name>
</person-group>
<year>2011</year>
<article-title>Beyond multiple pattern analyzers modeled as linear filters (as classical V1 simple cells): Useful additions of the last 25 years.</article-title>
<source>Vision Res</source>
<volume>51</volume>
<fpage>1397</fpage>
<lpage>1430</lpage>
<pub-id pub-id-type="pmid">21329718</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1002520-Field1">
<label>4</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Field</surname>
<given-names>DJ</given-names>
</name>
<name>
<surname>Hayes</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Hess</surname>
<given-names>RF</given-names>
</name>
</person-group>
<year>1993</year>
<article-title>Contour integration by the human visual system: Evidence for a local association field.</article-title>
<source>Vision Res</source>
<volume>33</volume>
<fpage>173</fpage>
<lpage>193</lpage>
<pub-id pub-id-type="pmid">8447091</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1002520-Strother1">
<label>5</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Strother</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Kubovy</surname>
<given-names>M</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>On the surprising saliency of curvature in grouping by proximity.</article-title>
<source>J Exp Psychol Human</source>
<volume>32</volume>
<fpage>226</fpage>
<lpage>234</lpage>
</element-citation>
</ref>
<ref id="pcbi.1002520-May1">
<label>6</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>May</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Hess</surname>
<given-names>R</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>Effects of element separation and carrier wavelength on detection of snakes and ladders: Implications for models of contour integration.</article-title>
<source>J Vision</source>
<volume>8</volume>
<fpage>1</fpage>
<lpage>23</lpage>
</element-citation>
</ref>
<ref id="pcbi.1002520-Dakin1">
<label>7</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dakin</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Hess</surname>
<given-names>R</given-names>
</name>
</person-group>
<year>1998</year>
<article-title>Spatial frequency tuning of visual contour extraction.</article-title>
<source>J Opt Soc Am</source>
<volume>15</volume>
<fpage>1486</fpage>
<lpage>1499</lpage>
</element-citation>
</ref>
<ref id="pcbi.1002520-Wertheimer1">
<label>8</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Wertheimer</surname>
<given-names>M</given-names>
</name>
</person-group>
<year>1938</year>
<article-title>Laws of organization in perceptual forms.</article-title>
<source>A source book of Gestalt psychology</source>
<publisher-loc>London</publisher-loc>
<publisher-name>Routledge & Kegan Paul (Reprint 1999,2000,2001: Oxon: Routledge)</publisher-name>
<fpage>71</fpage>
<lpage>88</lpage>
</element-citation>
</ref>
<ref id="pcbi.1002520-Koffka1">
<label>9</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Koffka</surname>
<given-names>K</given-names>
</name>
</person-group>
<year>1935</year>
<source>Principles of gestalt psychology</source>
<publisher-loc>London</publisher-loc>
<publisher-name>Routledge & Kegan Paul Ltd. (Reprint 1999,2001: London: Routledge)</publisher-name>
<size units="page">732</size>
</element-citation>
</ref>
<ref id="pcbi.1002520-vonHelmholtz1">
<label>10</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>von Helmholtz</surname>
<given-names>H</given-names>
</name>
</person-group>
<year>1856–1867</year>
<source>Handbuch der physiologischen Optik</source>
<publisher-loc>Leipzig</publisher-loc>
<publisher-name>Voss</publisher-name>
<size units="page">1334</size>
</element-citation>
</ref>
<ref id="pcbi.1002520-Ernst1">
<label>11</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ernst</surname>
<given-names>MO</given-names>
</name>
<name>
<surname>Banks</surname>
<given-names>MS</given-names>
</name>
</person-group>
<year>2002</year>
<article-title>Humans integrate visual and haptic information in a statistically optimal fashion.</article-title>
<source>Nature</source>
<volume>415</volume>
<fpage>429</fpage>
<lpage>433</lpage>
<pub-id pub-id-type="pmid">11807554</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1002520-Krding1">
<label>12</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Körding</surname>
<given-names>KP</given-names>
</name>
<name>
<surname>Wolpert</surname>
<given-names>DM</given-names>
</name>
</person-group>
<year>2004</year>
<article-title>Bayesian integration in sensorimotor learning.</article-title>
<source>Nature</source>
<volume>427</volume>
<fpage>244</fpage>
<lpage>247</lpage>
<pub-id pub-id-type="pmid">14724638</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1002520-Williams1">
<label>13</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Williams</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Thornber</surname>
<given-names>K</given-names>
</name>
</person-group>
<year>2001</year>
<article-title>Orientation, scale, and discontinuity as emergent properties of illusory contour shape.</article-title>
<source>Neural Comput</source>
<volume>13</volume>
<fpage>1683</fpage>
<lpage>1711</lpage>
<pub-id pub-id-type="pmid">11506666</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1002520-Mumford1">
<label>14</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Mumford</surname>
<given-names>D</given-names>
</name>
</person-group>
<year>1994</year>
<article-title>Elastica and computer vision.</article-title>
<person-group person-group-type="editor">
<name>
<surname>Bajaj</surname>
<given-names>C</given-names>
</name>
</person-group>
<source>Algebraic geometry and its applications</source>
<publisher-loc>New York</publisher-loc>
<publisher-name>Springer-Verlag</publisher-name>
<fpage>491</fpage>
<lpage>506</lpage>
</element-citation>
</ref>
<ref id="pcbi.1002520-Geisler1">
<label>15</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Geisler</surname>
<given-names>W</given-names>
</name>
<name>
<surname>Perry</surname>
<given-names>J</given-names>
</name>
</person-group>
<year>2009</year>
<article-title>Contour statistics in natural images: Grouping across occlusions.</article-title>
<source>Visual Neurosci</source>
<volume>26</volume>
<fpage>109</fpage>
<lpage>121</lpage>
</element-citation>
</ref>
<ref id="pcbi.1002520-Geisler2">
<label>16</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Geisler</surname>
<given-names>W</given-names>
</name>
<name>
<surname>Perry</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Super</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Gallogly</surname>
<given-names>D</given-names>
</name>
</person-group>
<year>2001</year>
<article-title>Edge co-occurrence in natural images predicts contour grouping performance.</article-title>
<source>Vision Res</source>
<volume>41</volume>
<fpage>711</fpage>
<lpage>724</lpage>
<pub-id pub-id-type="pmid">11248261</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1002520-Grigorescu1">
<label>17</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Grigorescu</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Petkov</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Westenberg</surname>
<given-names>MA</given-names>
</name>
</person-group>
<year>2003</year>
<article-title>Contour detection based on nonclassical receptive field inhibition.</article-title>
<source>IEEE T Image Process</source>
<volume>12</volume>
<fpage>729</fpage>
<lpage>739</lpage>
</element-citation>
</ref>
<ref id="pcbi.1002520-Martin1">
<label>18</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Martin</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Fowlkes</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Tal</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Malik</surname>
<given-names>J</given-names>
</name>
</person-group>
<year>2001</year>
<article-title>A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics.</article-title>
<source>Proc. 8th Int'l Conf. Computer Vision; 7–14 July 2001</source>
<publisher-loc>Vancouver, British Columbia, Canada</publisher-loc>
<publisher-name>ICCV 2001</publisher-name>
<fpage>416</fpage>
<lpage>423</lpage>
<comment>Available: doi:10.1109/ICCV.2001.937714. Accessed 12 April 2012</comment>
</element-citation>
</ref>
<ref id="pcbi.1002520-Parent1">
<label>19</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Parent</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Zucker</surname>
<given-names>S</given-names>
</name>
</person-group>
<year>1989</year>
<article-title>Trace inference, curvature consistency, and curve detection.</article-title>
<source>IEEE T Pattern Anal</source>
<volume>11</volume>
<fpage>823</fpage>
<lpage>839</lpage>
</element-citation>
</ref>
<ref id="pcbi.1002520-Nugent1">
<label>20</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nugent</surname>
<given-names>AK</given-names>
</name>
<name>
<surname>Keswani</surname>
<given-names>RN</given-names>
</name>
<name>
<surname>Woods</surname>
<given-names>RL</given-names>
</name>
<name>
<surname>Peli</surname>
<given-names>E</given-names>
</name>
</person-group>
<year>2003</year>
<article-title>Contour integration in peripheral vision reduces gradually with eccentricity.</article-title>
<source>Vision Res</source>
<volume>43</volume>
<fpage>2427</fpage>
<lpage>2437</lpage>
<pub-id pub-id-type="pmid">12972393</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1002520-Lovell1">
<label>21</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Lovell</surname>
<given-names>P</given-names>
</name>
</person-group>
<year>2002</year>
<source>Evaluating accounts of human contour integration using psychophysical and computational methods [PhD dissertation]</source>
<publisher-loc>Stirling (UK)</publisher-loc>
<publisher-name>Department of Psychology, University of Stirling</publisher-name>
<comment>Available:
<ext-link ext-link-type="uri" xlink:href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.99.2006&rep=rep1&type=pdf">http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.99.2006&rep=rep1&type=pdf</ext-link>
. Accessed 12 April 2012</comment>
<size units="page">226</size>
</element-citation>
</ref>
<ref id="pcbi.1002520-Hess2">
<label>22</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hess</surname>
<given-names>RF</given-names>
</name>
<name>
<surname>Dakin</surname>
<given-names>SC</given-names>
</name>
</person-group>
<year>1997</year>
<article-title>Absence of contour linking in peripheral vision.</article-title>
<source>Nature</source>
<volume>390</volume>
<fpage>602</fpage>
<lpage>604</lpage>
<pub-id pub-id-type="pmid">9403687</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1002520-Kapadia1">
<label>23</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kapadia</surname>
<given-names>MK</given-names>
</name>
<name>
<surname>Westheimer</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Gilbert</surname>
<given-names>CD</given-names>
</name>
</person-group>
<year>2000</year>
<article-title>Spatial distribution of contextual interactions in primary visual cortex and in visual perception.</article-title>
<source>J Neurophysiol</source>
<volume>84</volume>
<fpage>2048</fpage>
<lpage>2064</lpage>
<pub-id pub-id-type="pmid">11024097</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1002520-Foley1">
<label>24</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Foley</surname>
<given-names>JM</given-names>
</name>
<name>
<surname>Varadharajan</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Koh</surname>
<given-names>CC</given-names>
</name>
<name>
<surname>Farias</surname>
<given-names>MCQ</given-names>
</name>
</person-group>
<year>2007</year>
<article-title>Detection of gabor patterns of different sizes, shapes, phases and eccentricities.</article-title>
<source>Vision Res</source>
<volume>47</volume>
<fpage>85</fpage>
<lpage>107</lpage>
<pub-id pub-id-type="pmid">17078992</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1002520-Cowey1">
<label>25</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cowey</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Rolls</surname>
<given-names>E</given-names>
</name>
</person-group>
<year>1974</year>
<article-title>Human cortical magnification factor and its relation to visual acuity.</article-title>
<source>Exp Brain Res</source>
<volume>21</volume>
<fpage>447</fpage>
<lpage>454</lpage>
<pub-id pub-id-type="pmid">4442497</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1002520-Schinkel1">
<label>26</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Schinkel</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Pawelzik</surname>
<given-names>KR</given-names>
</name>
<name>
<surname>Ernst</surname>
<given-names>UA</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>Optimal contour integration: When additive algorithms fail.</article-title>
<source>Neurocomputing</source>
<volume>69</volume>
<fpage>1297</fpage>
<lpage>1300</lpage>
</element-citation>
</ref>
<ref id="pcbi.1002520-SchinkelBielefeld1">
<label>27</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Schinkel-Bielefeld</surname>
<given-names>N</given-names>
</name>
</person-group>
<year>2007</year>
<source>Contour integration models predicting human behavior [PhD dissertation]</source>
<publisher-loc>Bremen (Germany)</publisher-loc>
<publisher-name>Institute for Theoretical Physics, University of Bremen</publisher-name>
<comment>Available:
<ext-link ext-link-type="uri" xlink:href="http://nbn-resolving.de/urn:nbn:de:gbv:46-diss000108845">http://nbn-resolving.de/urn:nbn:de:gbv:46-diss000108845</ext-link>
. Accessed 12 April 2012</comment>
<size units="page">190</size>
</element-citation>
</ref>
<ref id="pcbi.1002520-Hansen1">
<label>28</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hansen</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Neumann</surname>
<given-names>H</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>A recurrent model of contour integration in primary visual cortex.</article-title>
<source>J Vision</source>
<volume>8</volume>
<fpage>1</fpage>
<lpage>25</lpage>
</element-citation>
</ref>
<ref id="pcbi.1002520-Li1">
<label>29</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Li</surname>
<given-names>Z</given-names>
</name>
</person-group>
<year>1998</year>
<article-title>A neural model of contour integration in the primary visual cortex.</article-title>
<source>Neural Comput</source>
<volume>10</volume>
<fpage>903</fpage>
<lpage>940</lpage>
<pub-id pub-id-type="pmid">9573412</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1002520-Mandon1">
<label>30</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mandon</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Kreiter</surname>
<given-names>AK</given-names>
</name>
</person-group>
<year>2005</year>
<article-title>Rapid contour integration in macaque monkeys.</article-title>
<source>Vision Res</source>
<volume>45</volume>
<fpage>291</fpage>
<lpage>300</lpage>
<pub-id pub-id-type="pmid">15607346</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1002520-Mathes1">
<label>31</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mathes</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Fahle</surname>
<given-names>M</given-names>
</name>
</person-group>
<year>2007</year>
<article-title>The electrophysiological correlate of contour integration is similar for colour and luminance mechanisms.</article-title>
<source>Psychophysiology</source>
<volume>44</volume>
<fpage>305</fpage>
<lpage>322</lpage>
<pub-id pub-id-type="pmid">17343713</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1002520-Beaudot1">
<label>32</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Beaudot</surname>
<given-names>WHA</given-names>
</name>
<name>
<surname>Mullen</surname>
<given-names>KT</given-names>
</name>
</person-group>
<year>2003</year>
<article-title>How long range is contour integration in human colour vision?</article-title>
<source>Visual Neurosci</source>
<volume>20</volume>
<fpage>51</fpage>
<lpage>64</lpage>
</element-citation>
</ref>
<ref id="pcbi.1002520-Beaudot2">
<label>33</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Beaudot</surname>
<given-names>WHA</given-names>
</name>
<name>
<surname>Mullen</surname>
<given-names>KT</given-names>
</name>
</person-group>
<year>2001</year>
<article-title>Processing time of contour integration: The role of colour, contrast, and curvature.</article-title>
<source>Perception</source>
<volume>30</volume>
<fpage>833</fpage>
<lpage>853</lpage>
<pub-id pub-id-type="pmid">11515956</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1002520-Persike1">
<label>34</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Persike</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Olzak</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Meinhardt</surname>
<given-names>G</given-names>
</name>
</person-group>
<year>2009</year>
<article-title>Contour integration across spatial frequency.</article-title>
<source>J Exp Psychol Human</source>
<volume>35</volume>
<fpage>1629</fpage>
<lpage>1648</lpage>
</element-citation>
</ref>
<ref id="pcbi.1002520-Persike2">
<label>35</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Persike</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Meinhardt</surname>
<given-names>G</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>Cue summation enables perceptual grouping.</article-title>
<source>J Exp Psychol Human</source>
<volume>34</volume>
<fpage>1</fpage>
<lpage>26</lpage>
</element-citation>
</ref>
<ref id="pcbi.1002520-Braun1">
<label>36</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Braun</surname>
<given-names>J</given-names>
</name>
</person-group>
<year>1999</year>
<article-title>On the detection of salient contours.</article-title>
<source>Spatial Vision</source>
<volume>12</volume>
<fpage>211</fpage>
<lpage>225</lpage>
<pub-id pub-id-type="pmid">10221428</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1002520-Bosking1">
<label>37</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bosking</surname>
<given-names>WH</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Schofield</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Fitzpatrick</surname>
<given-names>D</given-names>
</name>
</person-group>
<year>1997</year>
<article-title>Orientation selectivity and the arrangement of horizontal connections in tree shrew striate cortex.</article-title>
<source>J Neurosci</source>
<volume>17</volume>
<fpage>2112</fpage>
<lpage>2127</lpage>
<pub-id pub-id-type="pmid">9045738</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1002520-Chisum1">
<label>38</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chisum</surname>
<given-names>HJ</given-names>
</name>
<name>
<surname>Mooser</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Fitzpatrick</surname>
<given-names>D</given-names>
</name>
</person-group>
<year>2003</year>
<article-title>Emergent properties of layer 2/3 neurons reflect the collinear arrangement of horizontal connections in tree shrew visual cortex.</article-title>
<source>J Neurosci</source>
<volume>23</volume>
<fpage>2947</fpage>
<lpage>2960</lpage>
<pub-id pub-id-type="pmid">12684482</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1002520-Stettler1">
<label>39</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Stettler</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Das</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Bennett</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Gilbert</surname>
<given-names>C</given-names>
</name>
</person-group>
<year>2002</year>
<article-title>Lateral connectivity and contextual interactions in macaque primary visual cortex.</article-title>
<source>Neuron</source>
<volume>36</volume>
<fpage>739</fpage>
<lpage>750</lpage>
<pub-id pub-id-type="pmid">12441061</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1002520-Shmuel1">
<label>40</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Shmuel</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Korman</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Sterkin</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Harel</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Ullman</surname>
<given-names>S</given-names>
</name>
<etal></etal>
</person-group>
<year>2005</year>
<article-title>Retinotopic axis specificity and selective clustering of feedback projections from V2 to V1 in the owl monkey.</article-title>
<source>J Neurosci</source>
<volume>25</volume>
<fpage>2117</fpage>
<lpage>2131</lpage>
<pub-id pub-id-type="pmid">15728852</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1002520-Sillito1">
<label>41</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sillito</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Grieve</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Jones</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Cudeiro</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Davis</surname>
<given-names>J</given-names>
</name>
</person-group>
<year>1995</year>
<article-title>Visual cortical mechanisms detecting focal orientation discontinuities.</article-title>
<source>Nature</source>
<volume>378</volume>
<fpage>492</fpage>
<lpage>496</lpage>
<pub-id pub-id-type="pmid">7477405</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1002520-Levitt1">
<label>42</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Levitt</surname>
<given-names>JB</given-names>
</name>
<name>
<surname>Lund</surname>
<given-names>JS</given-names>
</name>
</person-group>
<year>1997</year>
<article-title>Spatial summation properties of macaque striate neurons.</article-title>
<source>Soc Neurosci Abstr</source>
<volume>23</volume>
<fpage>455</fpage>
</element-citation>
</ref>
<ref id="pcbi.1002520-Kapadia2">
<label>43</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kapadia</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Ito</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Gilbert</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Westheimer</surname>
<given-names>G</given-names>
</name>
</person-group>
<year>1995</year>
<article-title>Improvement in visual sensitivity by changes in local context: Parallel studies in human observers and in V1 of alert monkeys.</article-title>
<source>Neuron</source>
<volume>15</volume>
<fpage>843</fpage>
<lpage>856</lpage>
<pub-id pub-id-type="pmid">7576633</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1002520-Li2">
<label>44</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Li</surname>
<given-names>W</given-names>
</name>
<name>
<surname>Piëch</surname>
<given-names>V</given-names>
</name>
<name>
<surname>Gilbert</surname>
<given-names>CD</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>Contour saliency in primary visual cortex.</article-title>
<source>Neuron</source>
<volume>50</volume>
<fpage>951</fpage>
<lpage>962</lpage>
<pub-id pub-id-type="pmid">16772175</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1002520-Li3">
<label>45</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Li</surname>
<given-names>W</given-names>
</name>
<name>
<surname>Piëch</surname>
<given-names>V</given-names>
</name>
<name>
<surname>Gilbert</surname>
<given-names>CD</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>Learning to link visual contours.</article-title>
<source>Neuron</source>
<volume>57</volume>
<fpage>442</fpage>
<lpage>451</lpage>
<pub-id pub-id-type="pmid">18255036</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1002520-Maertens1">
<label>46</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Maertens</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Pollmann</surname>
<given-names>S</given-names>
</name>
</person-group>
<year>2005</year>
<article-title>fMRI reveals a common neural substrate of illusory contours and real contours in V1 after perceptual learning.</article-title>
<source>J Cognitive Neurosci</source>
<volume>17:10</volume>
<fpage>1553</fpage>
<lpage>1564</lpage>
</element-citation>
</ref>
<ref id="pcbi.1002520-Polat1">
<label>47</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Polat</surname>
<given-names>U</given-names>
</name>
<name>
<surname>Mizobe</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Pettet</surname>
<given-names>MW</given-names>
</name>
<name>
<surname>Kasamatsu</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Norcia</surname>
<given-names>AM</given-names>
</name>
</person-group>
<year>1998</year>
<article-title>Collinear stimuli regulate visual responses depending on cell's contrast threshold.</article-title>
<source>Nature</source>
<volume>391</volume>
<fpage>580</fpage>
<lpage>584</lpage>
<pub-id pub-id-type="pmid">9468134</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1002520-Walker1">
<label>48</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Walker</surname>
<given-names>GA</given-names>
</name>
<name>
<surname>Ohzawa</surname>
<given-names>I</given-names>
</name>
<name>
<surname>Freeman</surname>
<given-names>RD</given-names>
</name>
</person-group>
<year>1999</year>
<article-title>Asymmetric suppression outside the classical receptive field of the visual cortex.</article-title>
<source>J Neurosci</source>
<volume>19</volume>
<fpage>10536</fpage>
<lpage>10553</lpage>
<pub-id pub-id-type="pmid">10575050</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1002520-Abeles1">
<label>49</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Abeles</surname>
<given-names>M</given-names>
</name>
</person-group>
<year>1991</year>
<source>Corticonics: Neural Circuits of the Cerebral Cortex</source>
<publisher-loc>Cambridge</publisher-loc>
<publisher-name>Cambridge University Press</publisher-name>
<size units="page">296</size>
</element-citation>
</ref>
<ref id="pcbi.1002520-McManus1">
<label>50</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>McManus</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>W</given-names>
</name>
<name>
<surname>Gilbert</surname>
<given-names>C</given-names>
</name>
</person-group>
<year>2011</year>
<article-title>Adaptive shape processing in primary visual cortex.</article-title>
<source>Proc Natl Acad Sci U S A</source>
<volume>108</volume>
<fpage>9739</fpage>
<lpage>9746</lpage>
<pub-id pub-id-type="pmid">21571645</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1002520-Grossberg1">
<label>51</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Grossberg</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Mingolla</surname>
<given-names>E</given-names>
</name>
</person-group>
<year>1985</year>
<article-title>Neural dynamics of perceptual grouping: Textures, boundaries, and emergent segmentations.</article-title>
<source>Percept Psychophys</source>
<volume>38</volume>
<fpage>141</fpage>
<lpage>171</lpage>
<pub-id pub-id-type="pmid">4088806</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1002520-Carandini1">
<label>52</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Carandini</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Heeger</surname>
<given-names>D</given-names>
</name>
</person-group>
<year>1994</year>
<article-title>Summation and division by neurons in primate visual cortex.</article-title>
<source>Science</source>
<volume>264</volume>
<fpage>1333</fpage>
<lpage>1336</lpage>
<pub-id pub-id-type="pmid">8191289</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1002520-Murphy1">
<label>53</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Murphy</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Miller</surname>
<given-names>K</given-names>
</name>
</person-group>
<year>2003</year>
<article-title>Multiplicative gain changes are induced by excitation or inhibition alone.</article-title>
<source>J Neurosci</source>
<volume>23</volume>
<fpage>10040</fpage>
<lpage>10051</lpage>
<pub-id pub-id-type="pmid">14602818</pub-id>
</element-citation>
</ref>
<ref id="pcbi.1002520-Salinas1">
<label>54</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Salinas</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Abbott</surname>
<given-names>L</given-names>
</name>
</person-group>
<year>1996</year>
<article-title>A model of multiplicative neural responses in parietal cortex.</article-title>
<source>Proc Natl Acad Sci U S A</source>
<volume>93</volume>
<fpage>11956</fpage>
<lpage>11961</lpage>
</element-citation>
</ref>
</ref-list>
</back>
</pmc>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Pmc/Curation
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 002158 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd -nk 002158 | SxmlIndent | more

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Pmc
   |étape=   Curation
   |type=    RBID
   |clé=     PMC:3360074
   |texte=   Optimality of Human Contour Integration
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Curation/RBID.i   -Sk "pubmed:22654653" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024