Exploration server on haptic devices

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information has therefore not been validated.

Bayesian integration of position and orientation cues in perception of biological and non-biological forms

Internal identifier: 001D51 (Pmc/Curation); previous: 001D50; next: 001D52

Bayesian integration of position and orientation cues in perception of biological and non-biological forms

Authors: Steven M. Thurman [United States]; Hongjing Lu [United States]

Source:

RBID: PMC:3932410

Abstract

Visual form analysis is fundamental to shape perception and likely plays a central role in perception of more complex dynamic shapes, such as moving objects or biological motion. Two primary form-based cues serve to represent the overall shape of an object: the spatial position and the orientation of locations along the boundary of the object. However, it is unclear how the visual system integrates these two sources of information in dynamic form analysis, and in particular how the brain resolves ambiguities due to sensory uncertainty and/or cue conflict. In the current study, we created animations of sparsely-sampled dynamic objects (human walkers or rotating squares) comprised of oriented Gabor patches in which orientation could either coincide or conflict with information provided by position cues. When the cues were incongruent, we found a characteristic trade-off between position and orientation information whereby position cues increasingly dominated perception as the relative uncertainty of orientation increased and vice versa. Furthermore, we found no evidence for differences in the visual processing of biological and non-biological objects, casting doubt on the claim that biological motion may be specialized in the human brain, at least in specific terms of form analysis. To explain these behavioral results quantitatively, we adopt a probabilistic template-matching model that uses Bayesian inference within local modules to estimate object shape separately from either spatial position or orientation signals. The outputs of the two modules are integrated with weights that reflect individual estimates of subjective cue reliability, and integrated over time to produce a decision about the perceived dynamics of the input data. Results of this model provided a close fit to the behavioral data, suggesting a mechanism in the human visual system that approximates rational Bayesian inference to integrate position and orientation signals in dynamic form analysis.
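The trade-off the abstract describes, with position cues dominating as orientation uncertainty grows and vice versa, is the signature of reliability-weighted (inverse-variance) cue combination under Gaussian assumptions. The sketch below illustrates that generic combination rule only; it is not the authors' probabilistic template-matching model, and the numeric values are hypothetical.

```python
import numpy as np

def integrate_cues(estimates, sigmas):
    """Combine cue estimates weighted by reliability (1 / sigma^2).

    Under Gaussian noise, the statistically optimal combined estimate
    weights each cue in proportion to its inverse variance.
    """
    estimates = np.asarray(estimates, dtype=float)
    reliabilities = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    weights = reliabilities / reliabilities.sum()
    combined = float(np.sum(weights * estimates))
    # The combined estimate is more precise than either cue alone.
    combined_sigma = float(np.sqrt(1.0 / reliabilities.sum()))
    return combined, weights, combined_sigma

# Hypothetical example: a position cue places a boundary point at 10.0
# (sigma = 1) while an orientation cue implies 14.0 (sigma = 2).
# Position is four times more reliable, so it receives weight 0.8
# and dominates the integrated estimate.
combined, weights, sigma = integrate_cues([10.0, 14.0], [1.0, 2.0])
print(combined)  # 10.8
print(weights)   # [0.8 0.2]
```

As orientation noise (the second sigma) shrinks, its weight rises smoothly, reproducing the qualitative trade-off reported in the study.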


Url:
DOI: 10.3389/fnhum.2014.00091
PubMed: 24605096
PubMed Central: 3932410


The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Bayesian integration of position and orientation cues in perception of biological and non-biological forms</title>
<author>
<name sortKey="Thurman, Steven M" sort="Thurman, Steven M" uniqKey="Thurman S" first="Steven M." last="Thurman">Steven M. Thurman</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Department of Psychology, University of California Los Angeles</institution>
<country>Los Angeles, CA, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Lu, Hongjing" sort="Lu, Hongjing" uniqKey="Lu H" first="Hongjing" last="Lu">Hongjing Lu</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Department of Psychology, University of California Los Angeles</institution>
<country>Los Angeles, CA, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>Department of Statistics, University of California Los Angeles</institution>
<country>Los Angeles, CA, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">24605096</idno>
<idno type="pmc">3932410</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3932410</idno>
<idno type="RBID">PMC:3932410</idno>
<idno type="doi">10.3389/fnhum.2014.00091</idno>
<date when="2014">2014</date>
<idno type="wicri:Area/Pmc/Corpus">001D51</idno>
<idno type="wicri:Area/Pmc/Curation">001D51</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Bayesian integration of position and orientation cues in perception of biological and non-biological forms</title>
<author>
<name sortKey="Thurman, Steven M" sort="Thurman, Steven M" uniqKey="Thurman S" first="Steven M." last="Thurman">Steven M. Thurman</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Department of Psychology, University of California Los Angeles</institution>
<country>Los Angeles, CA, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Lu, Hongjing" sort="Lu, Hongjing" uniqKey="Lu H" first="Hongjing" last="Lu">Hongjing Lu</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Department of Psychology, University of California Los Angeles</institution>
<country>Los Angeles, CA, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>Department of Statistics, University of California Los Angeles</institution>
<country>Los Angeles, CA, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Frontiers in Human Neuroscience</title>
<idno type="eISSN">1662-5161</idno>
<imprint>
<date when="2014">2014</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>Visual form analysis is fundamental to shape perception and likely plays a central role in perception of more complex dynamic shapes, such as moving objects or biological motion. Two primary form-based cues serve to represent the overall shape of an object: the spatial position and the orientation of locations along the boundary of the object. However, it is unclear how the visual system integrates these two sources of information in dynamic form analysis, and in particular how the brain resolves ambiguities due to sensory uncertainty and/or cue conflict. In the current study, we created animations of sparsely-sampled dynamic objects (human walkers or rotating squares) comprised of oriented Gabor patches in which orientation could either coincide or conflict with information provided by position cues. When the cues were incongruent, we found a characteristic trade-off between position and orientation information whereby position cues increasingly dominated perception as the relative uncertainty of orientation increased and vice versa. Furthermore, we found no evidence for differences in the visual processing of biological and non-biological objects, casting doubt on the claim that biological motion may be specialized in the human brain, at least in specific terms of form analysis. To explain these behavioral results quantitatively, we adopt a probabilistic template-matching model that uses Bayesian inference within local modules to estimate object shape separately from either spatial position or orientation signals. The outputs of the two modules are integrated with weights that reflect individual estimates of subjective cue reliability, and integrated over time to produce a decision about the perceived dynamics of the input data. 
Results of this model provided a close fit to the behavioral data, suggesting a mechanism in the human visual system that approximates rational Bayesian inference to integrate position and orientation signals in dynamic form analysis.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Aggarwal, J K" uniqKey="Aggarwal J">J. K. Aggarwal</name>
</author>
<author>
<name sortKey="Cai, Q" uniqKey="Cai Q">Q. Cai</name>
</author>
<author>
<name sortKey="Liao, W" uniqKey="Liao W">W. Liao</name>
</author>
<author>
<name sortKey="Sabata, B" uniqKey="Sabata B">B. Sabata</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Aggarwal, J K" uniqKey="Aggarwal J">J. K. Aggarwal</name>
</author>
<author>
<name sortKey="Nandhakumar, N" uniqKey="Nandhakumar N">N. Nandhakumar</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Alais, D" uniqKey="Alais D">D. Alais</name>
</author>
<author>
<name sortKey="Burr, D" uniqKey="Burr D">D. Burr</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Amano, K" uniqKey="Amano K">K. Amano</name>
</author>
<author>
<name sortKey="Edwards, M" uniqKey="Edwards M">M. Edwards</name>
</author>
<author>
<name sortKey="Badcock, D R" uniqKey="Badcock D">D. R. Badcock</name>
</author>
<author>
<name sortKey="Nishida, S" uniqKey="Nishida S">S. Nishida</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Atkinson, A P" uniqKey="Atkinson A">A. P. Atkinson</name>
</author>
<author>
<name sortKey="Tunstall, M L" uniqKey="Tunstall M">M. L. Tunstall</name>
</author>
<author>
<name sortKey="Dittrich, W H" uniqKey="Dittrich W">W. H. Dittrich</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bardi, L" uniqKey="Bardi L">L. Bardi</name>
</author>
<author>
<name sortKey="Regolin, L" uniqKey="Regolin L">L. Regolin</name>
</author>
<author>
<name sortKey="Simion, F" uniqKey="Simion F">F. Simion</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Beintema, J A" uniqKey="Beintema J">J. A. Beintema</name>
</author>
<author>
<name sortKey="Georg, K" uniqKey="Georg K">K. Georg</name>
</author>
<author>
<name sortKey="Lappe, M" uniqKey="Lappe M">M. Lappe</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Beintema, J A" uniqKey="Beintema J">J. A. Beintema</name>
</author>
<author>
<name sortKey="Lappe, M" uniqKey="Lappe M">M. Lappe</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Blake, R" uniqKey="Blake R">R. Blake</name>
</author>
<author>
<name sortKey="Shiffrar, M" uniqKey="Shiffrar M">M. Shiffrar</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brainard, D H" uniqKey="Brainard D">D. H. Brainard</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Burr, D C" uniqKey="Burr D">D. C. Burr</name>
</author>
<author>
<name sortKey="Wijesundra, S A" uniqKey="Wijesundra S">S. A. Wijesundra</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Casile, A" uniqKey="Casile A">A. Casile</name>
</author>
<author>
<name sortKey="Giese, M A" uniqKey="Giese M">M. A. Giese</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chang, D H F" uniqKey="Chang D">D. H. F. Chang</name>
</author>
<author>
<name sortKey="Troje, N F" uniqKey="Troje N">N. F. Troje</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dakin, S C" uniqKey="Dakin S">S. C. Dakin</name>
</author>
<author>
<name sortKey="Williams, C B" uniqKey="Williams C">C. B. Williams</name>
</author>
<author>
<name sortKey="Hess, R F" uniqKey="Hess R">R. F. Hess</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Daugman, J G" uniqKey="Daugman J">J. G. Daugman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Day, M" uniqKey="Day M">M. Day</name>
</author>
<author>
<name sortKey="Loffler, G" uniqKey="Loffler G">G. Loffler</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ernst, M O" uniqKey="Ernst M">M. O. Ernst</name>
</author>
<author>
<name sortKey="Banks, M S" uniqKey="Banks M">M. S. Banks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Feldman, J" uniqKey="Feldman J">J. Feldman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fox, R" uniqKey="Fox R">R. Fox</name>
</author>
<author>
<name sortKey="Mcdaniel, C" uniqKey="Mcdaniel C">C. McDaniel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Giese, M A" uniqKey="Giese M">M. A. Giese</name>
</author>
<author>
<name sortKey="Poggio, T" uniqKey="Poggio T">T. Poggio</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Goodale, M A" uniqKey="Goodale M">M. A. Goodale</name>
</author>
<author>
<name sortKey="Milner, A D" uniqKey="Milner A">A. D. Milner</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Grossman, E" uniqKey="Grossman E">E. Grossman</name>
</author>
<author>
<name sortKey="Donnelly, M" uniqKey="Donnelly M">M. Donnelly</name>
</author>
<author>
<name sortKey="Price, R" uniqKey="Price R">R. Price</name>
</author>
<author>
<name sortKey="Pickens, D" uniqKey="Pickens D">D. Pickens</name>
</author>
<author>
<name sortKey="Morgan, V" uniqKey="Morgan V">V. Morgan</name>
</author>
<author>
<name sortKey="Neighbor, G" uniqKey="Neighbor G">G. Neighbor</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Grossman, E D" uniqKey="Grossman E">E. D. Grossman</name>
</author>
<author>
<name sortKey="Jardine, N L" uniqKey="Jardine N">N. L. Jardine</name>
</author>
<author>
<name sortKey="Pyles, J A" uniqKey="Pyles J">J. A. Pyles</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hess, R F" uniqKey="Hess R">R. F. Hess</name>
</author>
<author>
<name sortKey="Hayes, A" uniqKey="Hayes A">A. Hayes</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hess, R F" uniqKey="Hess R">R. F. Hess</name>
</author>
<author>
<name sortKey="Holliday, I E" uniqKey="Holliday I">I. E. Holliday</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hiris, E" uniqKey="Hiris E">E. Hiris</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hu, W" uniqKey="Hu W">W. Hu</name>
</author>
<author>
<name sortKey="Tan, T" uniqKey="Tan T">T. Tan</name>
</author>
<author>
<name sortKey="Wang, L" uniqKey="Wang L">L. Wang</name>
</author>
<author>
<name sortKey="Maybank, S" uniqKey="Maybank S">S. Maybank</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jastorff, J" uniqKey="Jastorff J">J. Jastorff</name>
</author>
<author>
<name sortKey="Orban, G A" uniqKey="Orban G">G. A. Orban</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jastorff, J" uniqKey="Jastorff J">J. Jastorff</name>
</author>
<author>
<name sortKey="Orban, G A" uniqKey="Orban G">G. A. Orban</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jastorff, J" uniqKey="Jastorff J">J. Jastorff</name>
</author>
<author>
<name sortKey="Popivanov, I D" uniqKey="Popivanov I">I. D. Popivanov</name>
</author>
<author>
<name sortKey="Vogels, R" uniqKey="Vogels R">R. Vogels</name>
</author>
<author>
<name sortKey="Vanduffel, W" uniqKey="Vanduffel W">W. Vanduffel</name>
</author>
<author>
<name sortKey="Orban, G A" uniqKey="Orban G">G. A. Orban</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Knill, D C" uniqKey="Knill D">D. C. Knill</name>
</author>
<author>
<name sortKey="Pouget, A" uniqKey="Pouget A">A. Pouget</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Landy, M S" uniqKey="Landy M">M. S. Landy</name>
</author>
<author>
<name sortKey="Maloney, L T" uniqKey="Maloney L">L. T. Maloney</name>
</author>
<author>
<name sortKey="Johnston, E B" uniqKey="Johnston E">E. B. Johnston</name>
</author>
<author>
<name sortKey="Young, M" uniqKey="Young M">M. Young</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lange, J" uniqKey="Lange J">J. Lange</name>
</author>
<author>
<name sortKey="Georg, K" uniqKey="Georg K">K. Georg</name>
</author>
<author>
<name sortKey="Lappe, M" uniqKey="Lappe M">M. Lappe</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lange, J" uniqKey="Lange J">J. Lange</name>
</author>
<author>
<name sortKey="Lappe, M" uniqKey="Lappe M">M. Lappe</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lee, A L" uniqKey="Lee A">A. L. Lee</name>
</author>
<author>
<name sortKey="Lu, H" uniqKey="Lu H">H. Lu</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Levi, D M" uniqKey="Levi D">D. M. Levi</name>
</author>
<author>
<name sortKey="Klein, S A" uniqKey="Klein S">S. A. Klein</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Levi, D M" uniqKey="Levi D">D. M. Levi</name>
</author>
<author>
<name sortKey="Klein, S A" uniqKey="Klein S">S. A. Klein</name>
</author>
<author>
<name sortKey="Sharma, V" uniqKey="Sharma V">V. Sharma</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Levi, D M" uniqKey="Levi D">D. M. Levi</name>
</author>
<author>
<name sortKey="Sharma, V" uniqKey="Sharma V">V. Sharma</name>
</author>
<author>
<name sortKey="Klein, S A" uniqKey="Klein S">S. A. Klein</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Liu, Z" uniqKey="Liu Z">Z. Liu</name>
</author>
<author>
<name sortKey="Knill, D C" uniqKey="Knill D">D. C. Knill</name>
</author>
<author>
<name sortKey="Kersten, D" uniqKey="Kersten D">D. Kersten</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Loffler, G" uniqKey="Loffler G">G. Loffler</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lu, H" uniqKey="Lu H">H. Lu</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lu, H" uniqKey="Lu H">H. Lu</name>
</author>
<author>
<name sortKey="Liu, Z" uniqKey="Liu Z">Z. Liu</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Neri, P" uniqKey="Neri P">P. Neri</name>
</author>
<author>
<name sortKey="Morrone, M C" uniqKey="Morrone M">M. C. Morrone</name>
</author>
<author>
<name sortKey="Burr, D C" uniqKey="Burr D">D. C. Burr</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Oram, M W" uniqKey="Oram M">M. W. Oram</name>
</author>
<author>
<name sortKey="Perrett, D I" uniqKey="Perrett D">D. I. Perrett</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pelli, D G" uniqKey="Pelli D">D. G. Pelli</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pinto, J" uniqKey="Pinto J">J. Pinto</name>
</author>
<author>
<name sortKey="Shiffrar, M" uniqKey="Shiffrar M">M. Shiffrar</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Poljac, E" uniqKey="Poljac E">E. Poljac</name>
</author>
<author>
<name sortKey="Verfaillie, K" uniqKey="Verfaillie K">K. Verfaillie</name>
</author>
<author>
<name sortKey="Wagemans, J" uniqKey="Wagemans J">J. Wagemans</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Puce, A" uniqKey="Puce A">A. Puce</name>
</author>
<author>
<name sortKey="Perrett, D" uniqKey="Perrett D">D. Perrett</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Regolin, L" uniqKey="Regolin L">L. Regolin</name>
</author>
<author>
<name sortKey="Tommasi, L" uniqKey="Tommasi L">L. Tommasi</name>
</author>
<author>
<name sortKey="Vallortigara, G" uniqKey="Vallortigara G">G. Vallortigara</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Saygin, A P" uniqKey="Saygin A">A. P. Saygin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Simion, F" uniqKey="Simion F">F. Simion</name>
</author>
<author>
<name sortKey="Regolin, L" uniqKey="Regolin L">L. Regolin</name>
</author>
<author>
<name sortKey="Bulf, H" uniqKey="Bulf H">H. Bulf</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Singer, J M" uniqKey="Singer J">J. M. Singer</name>
</author>
<author>
<name sortKey="Sheinberg, D L" uniqKey="Sheinberg D">D. L. Sheinberg</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sumi, S" uniqKey="Sumi S">S. Sumi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Theusner, S" uniqKey="Theusner S">S. Theusner</name>
</author>
<author>
<name sortKey="De Lussanet, M" uniqKey="De Lussanet M">M. de Lussanet</name>
</author>
<author>
<name sortKey="Lappe, M" uniqKey="Lappe M">M. Lappe</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Thompson, J C" uniqKey="Thompson J">J. C. Thompson</name>
</author>
<author>
<name sortKey="Baccus, W" uniqKey="Baccus W">W. Baccus</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Thurman, S M" uniqKey="Thurman S">S. M. Thurman</name>
</author>
<author>
<name sortKey="Giese, M A" uniqKey="Giese M">M. A. Giese</name>
</author>
<author>
<name sortKey="Grossman, E D" uniqKey="Grossman E">E. D. Grossman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Thurman, S M" uniqKey="Thurman S">S. M. Thurman</name>
</author>
<author>
<name sortKey="Grossman, E D" uniqKey="Grossman E">E. D. Grossman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Thurman, S M" uniqKey="Thurman S">S. M. Thurman</name>
</author>
<author>
<name sortKey="Lu, H" uniqKey="Lu H">H. Lu</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Thurman, S M" uniqKey="Thurman S">S. M. Thurman</name>
</author>
<author>
<name sortKey="Lu, H" uniqKey="Lu H">H. Lu</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Toet, A" uniqKey="Toet A">A. Toet</name>
</author>
<author>
<name sortKey="Koenderink, J J" uniqKey="Koenderink J">J. J. Koenderink</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Troje, N F" uniqKey="Troje N">N. F. Troje</name>
</author>
<author>
<name sortKey="Westhoff, C" uniqKey="Westhoff C">C. Westhoff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ungerleider, L" uniqKey="Ungerleider L">L. Ungerleider</name>
</author>
<author>
<name sortKey="Mishkin, M" uniqKey="Mishkin M">M. Mishkin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vallortigara, G" uniqKey="Vallortigara G">G. Vallortigara</name>
</author>
<author>
<name sortKey="Regolin, L" uniqKey="Regolin L">L. Regolin</name>
</author>
<author>
<name sortKey="Marconato, F" uniqKey="Marconato F">F. Marconato</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Boxtel, J J A" uniqKey="Van Boxtel J">J. J. A. Van Boxtel</name>
</author>
<author>
<name sortKey="Lu, H" uniqKey="Lu H">H. Lu</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Boxtel, J J" uniqKey="Van Boxtel J">J. J. van Boxtel</name>
</author>
<author>
<name sortKey="Lu, H" uniqKey="Lu H">H. Lu</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vangeneugden, J" uniqKey="Vangeneugden J">J. Vangeneugden</name>
</author>
<author>
<name sortKey="De Maziere, P A" uniqKey="De Maziere P">P. A. De Mazière</name>
</author>
<author>
<name sortKey="Van Hulle, M M" uniqKey="Van Hulle M">M. M. Van Hulle</name>
</author>
<author>
<name sortKey="Jaeggli, T" uniqKey="Jaeggli T">T. Jaeggli</name>
</author>
<author>
<name sortKey="Van Gool, L" uniqKey="Van Gool L">L. Van Gool</name>
</author>
<author>
<name sortKey="Vogels, R" uniqKey="Vogels R">R. Vogels</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vangeneugden, J" uniqKey="Vangeneugden J">J. Vangeneugden</name>
</author>
<author>
<name sortKey="Pollick, F" uniqKey="Pollick F">F. Pollick</name>
</author>
<author>
<name sortKey="Vogels, R" uniqKey="Vogels R">R. Vogels</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Webb, J A" uniqKey="Webb J">J. A. Webb</name>
</author>
<author>
<name sortKey="Aggarwal, J K" uniqKey="Aggarwal J">J. K. Aggarwal</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Weiss, Y" uniqKey="Weiss Y">Y. Weiss</name>
</author>
<author>
<name sortKey="Adelson, E H" uniqKey="Adelson E">E. H. Adelson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Yuille, A L" uniqKey="Yuille A">A. L. Yuille</name>
</author>
<author>
<name sortKey="Bulthoff, H H" uniqKey="Bulthoff H">H. H. Bulthoff</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Front Hum Neurosci</journal-id>
<journal-id journal-id-type="iso-abbrev">Front Hum Neurosci</journal-id>
<journal-id journal-id-type="publisher-id">Front. Hum. Neurosci.</journal-id>
<journal-title-group>
<journal-title>Frontiers in Human Neuroscience</journal-title>
</journal-title-group>
<issn pub-type="epub">1662-5161</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">24605096</article-id>
<article-id pub-id-type="pmc">3932410</article-id>
<article-id pub-id-type="doi">10.3389/fnhum.2014.00091</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Original Research Article</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Bayesian integration of position and orientation cues in perception of biological and non-biological forms</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Thurman</surname>
<given-names>Steven M.</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="author-notes" rid="fn001">
<sup>*</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Lu</surname>
<given-names>Hongjing</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
</contrib>
</contrib-group>
<aff id="aff1">
<sup>1</sup>
<institution>Department of Psychology, University of California Los Angeles</institution>
<country>Los Angeles, CA, USA</country>
</aff>
<aff id="aff2">
<sup>2</sup>
<institution>Department of Statistics, University of California Los Angeles</institution>
<country>Los Angeles, CA, USA</country>
</aff>
<author-notes>
<fn fn-type="edited-by">
<p>Edited by: Harriet Brown, University College London, UK</p>
</fn>
<fn fn-type="edited-by">
<p>Reviewed by: Francisco Barcelo, University of Illes Balears, Spain; Markus Lappe, Universität Münster, Germany</p>
</fn>
<corresp id="fn001">*Correspondence: Steven M. Thurman, Department of Psychology, University of California Los Angeles, 1282 Franz Hall, Los Angeles, CA 90095, USA e-mail:
<email xlink:type="simple">sthurman@ucla.edu</email>
</corresp>
<fn fn-type="other" id="fn002">
<p>This article was submitted to the journal Frontiers in Human Neuroscience.</p>
</fn>
</author-notes>
<pub-date pub-type="epub">
<day>24</day>
<month>2</month>
<year>2014</year>
</pub-date>
<pub-date pub-type="collection">
<year>2014</year>
</pub-date>
<volume>8</volume>
<elocation-id>91</elocation-id>
<history>
<date date-type="received">
<day>16</day>
<month>12</month>
<year>2013</year>
</date>
<date date-type="accepted">
<day>06</day>
<month>2</month>
<year>2014</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright © 2014 Thurman and Lu.</copyright-statement>
<copyright-year>2014</copyright-year>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/3.0/">
<license-p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</license-p>
</license>
</permissions>
<abstract>
<p>Visual form analysis is fundamental to shape perception and likely plays a central role in perception of more complex dynamic shapes, such as moving objects or biological motion. Two primary form-based cues serve to represent the overall shape of an object: the spatial position and the orientation of locations along the boundary of the object. However, it is unclear how the visual system integrates these two sources of information in dynamic form analysis, and in particular how the brain resolves ambiguities due to sensory uncertainty and/or cue conflict. In the current study, we created animations of sparsely-sampled dynamic objects (human walkers or rotating squares) comprised of oriented Gabor patches in which orientation could either coincide or conflict with information provided by position cues. When the cues were incongruent, we found a characteristic trade-off between position and orientation information whereby position cues increasingly dominated perception as the relative uncertainty of orientation increased and vice versa. Furthermore, we found no evidence for differences in the visual processing of biological and non-biological objects, casting doubt on the claim that biological motion may be specialized in the human brain, at least in specific terms of form analysis. To explain these behavioral results quantitatively, we adopt a probabilistic template-matching model that uses Bayesian inference within local modules to estimate object shape separately from either spatial position or orientation signals. The outputs of the two modules are integrated with weights that reflect individual estimates of subjective cue reliability, and integrated over time to produce a decision about the perceived dynamics of the input data. 
Results of this model provided a close fit to the behavioral data, suggesting a mechanism in the human visual system that approximates rational Bayesian inference to integrate position and orientation signals in dynamic form analysis.</p>
</abstract>
<kwd-group>
<kwd>visual perception</kwd>
<kwd>Bayesian model</kwd>
<kwd>biological motion</kwd>
<kwd>sensory integration</kwd>
<kwd>cue reliability</kwd>
<kwd>form analysis</kwd>
</kwd-group>
<counts>
<fig-count count="16"></fig-count>
<table-count count="2"></table-count>
<equation-count count="0"></equation-count>
<ref-count count="70"></ref-count>
<page-count count="13"></page-count>
<word-count count="10096"></word-count>
</counts>
</article-meta>
</front>
<body>
<sec id="s1">
<title>Introduction</title>
<p>The ability to analyze the shape and character of moving objects in the environment is essential for adaptive behavior in a dynamic visual world. Recognizing objects in most real-world settings poses a significant computational challenge to the human visual system due to factors that include signal fragmentation as a result of clutter and occlusion, uncertainty or conflict in sensory information, internal noise in sensory encoding, and ambiguity in the neural representation of an object's shape and its features. Since the world is by no means stationary, human vision must also deal with the fact that objects can undergo changes in shape, viewpoint, and position over time. These changes add yet more complexity and ambiguity to the problem of dynamic form perception. Despite advances on these notoriously difficult issues in work on computational models and video surveillance systems (Aggarwal and Nandhakumar,
<xref rid="B2" ref-type="bibr">1988</xref>
; Hu et al.,
<xref rid="B29" ref-type="bibr">2004</xref>
), no artificial vision system has approached the inherent capability of human vision in processing and understanding dynamic shapes and images.</p>
<p>In the environment, dynamic form can be broadly categorized as originating from either rigid non-biological shapes with a rigid style of motion (e.g., translating or rotating shapes), or from semi-rigid biological shapes with an articulating style of motion (e.g., human actions or biological motion; see Aggarwal et al.,
<xref rid="B1" ref-type="bibr">1998</xref>
). In the field of biological motion, there are currently two predominant computational approaches to understanding human action perception. One class of models is based on analysis of patterns of local image motion (Webb and Aggarwal,
<xref rid="B3" ref-type="bibr">1982</xref>
; Giese and Poggio,
<xref rid="B22" ref-type="bibr">2003</xref>
; Casile and Giese,
<xref rid="B14" ref-type="bibr">2005</xref>
), while another class of models is based on sequential static form information over time, or dynamic form analysis (Lange and Lappe,
<xref rid="B36" ref-type="bibr">2006</xref>
; Lange et al.,
<xref rid="B35" ref-type="bibr">2006</xref>
; Theusner et al.,
<xref rid="B56" ref-type="bibr">2014</xref>
). This dichotomy is rooted, in part, in the classical distinction between dorsal and ventral stream processing in the primate visual system (Ungerleider and Mishkin,
<xref rid="B63" ref-type="bibr">1982</xref>
; Goodale and Milner,
<xref rid="B23" ref-type="bibr">1992</xref>
). Recent evidence from behavioral (Atkinson et al.,
<xref rid="B6" ref-type="bibr">2007</xref>
; Thurman and Grossman,
<xref rid="B58" ref-type="bibr">2008</xref>
; Thurman et al.,
<xref rid="B57" ref-type="bibr">2010</xref>
; Thurman and Lu,
<xref rid="B59" ref-type="bibr">2013a</xref>
), neurophysiological (Vangeneugden et al.,
<xref rid="B68" ref-type="bibr">2009</xref>
,
<xref rid="B67" ref-type="bibr">2011</xref>
; Singer and Sheinberg,
<xref rid="B54" ref-type="bibr">2010</xref>
), and functional brain imaging studies (Jastorff and Orban,
<xref rid="B31" ref-type="bibr">2008</xref>
,
<xref rid="B30" ref-type="bibr">2009</xref>
; Jastorff et al.,
<xref rid="B32" ref-type="bibr">2012</xref>
; Thompson and Baccus,
<xref rid="B7" ref-type="bibr">2012</xref>
) is converging on the view that several mechanisms may be employed simultaneously, based on analysis and integration of both motion and form-based features, to support robust action recognition under varying conditions of environmental noise and sensory uncertainty (for review see Blake and Shiffrar,
<xref rid="B11" ref-type="bibr">2007</xref>
).</p>
<p>Evidence from neurophysiological (Oram and Perrett,
<xref rid="B46" ref-type="bibr">1994</xref>
; Puce and Perrett,
<xref rid="B50" ref-type="bibr">2003</xref>
; Vangeneugden et al.,
<xref rid="B67" ref-type="bibr">2011</xref>
) and functional brain imaging studies (Grossman et al.,
<xref rid="B24" ref-type="bibr">2000</xref>
,
<xref rid="B25" ref-type="bibr">2010</xref>
; Saygin,
<xref rid="B52" ref-type="bibr">2007</xref>
; Jastorff et al.,
<xref rid="B32" ref-type="bibr">2012</xref>
) further suggest that biological motion may be supported by distinct and specialized neural mechanisms in the human and primate brain. In terms of motion information, behavioral studies suggest the existence of specialized low-level filters for detecting and processing biological actions. For instance, Troje and Westhoff (
<xref rid="B62" ref-type="bibr">2006</xref>
) found evidence for a “life detection” mechanism that is purely motion-based and tuned specifically to characteristic features of terrestrial animals in locomotion (Troje and Westhoff,
<xref rid="B62" ref-type="bibr">2006</xref>
; Chang and Troje,
<xref rid="B15" ref-type="bibr">2010</xref>
). Recently, Thurman and Lu (
<xref rid="B60" ref-type="bibr">2013b</xref>
) also found evidence for a basic mechanism that is sensitive to the congruency between the direction of global body motion and the direction implied by intrinsic limb movements, presumably due to the inherent causal relationship between limb movements and whole body movements. Developmental studies have also shown that newborn chicks (Regolin et al.,
<xref rid="B51" ref-type="bibr">2000</xref>
) and human infants (Fox and McDaniel,
<xref rid="B21" ref-type="bibr">1982</xref>
; Simion et al.,
<xref rid="B53" ref-type="bibr">2008</xref>
) have an innate preference for biological motion, but little sensitivity to biological form information (Vallortigara et al.,
<xref rid="B64" ref-type="bibr">2005</xref>
; Bardi et al.,
<xref rid="B8" ref-type="bibr">2010</xref>
).</p>
<p>Hence, in contrast to motion, it remains unclear whether dynamic form analysis may be specialized for biological actions. From a modeling perspective, the form-based approach to biological motion is computationally analogous on a frame-by-frame basis to generic template-matching schemes employed for rigid shape perception (Liu et al.,
<xref rid="B41" ref-type="bibr">1995</xref>
; Levi et al.,
<xref rid="B39" ref-type="bibr">1999</xref>
). It is highly plausible that this aspect of biological action analysis may actually be supported by a general-purpose system for processing both rigid and non-rigid dynamic objects. Yet, few studies have focused directly on comparing perception of biological to non-biological stimuli (e.g., Neri et al.,
<xref rid="B45" ref-type="bibr">1998</xref>
; Hiris,
<xref rid="B28" ref-type="bibr">2007</xref>
), particularly in the specific context of form analysis.</p>
<p>The current study was designed to address two specific issues related to dynamic form perception. First, we sought to investigate whether perception of biological stimuli, as compared to rigid non-biological stimuli, is supported by general-purpose or specialized computational mechanisms of form-based visual processing. Secondly, we aimed to examine the relative contribution of two principal cues for visual form—spatial position and orientation—to dynamic form analysis. Previous studies have shown that when position and orientation cues provide conflicting information, they can compete to determine the perceptual appearance of static (Day and Loffler,
<xref rid="B18" ref-type="bibr">2009</xref>
) and dynamic objects (Thurman and Lu,
<xref rid="B59" ref-type="bibr">2013a</xref>
). However, the exact nature of the cue integration mechanism remains unclear, as well as the role that sensory uncertainty might play in the cue combination process. In the current study, we created two types of dynamic stimuli (rotating squares and biological motion walkers), and sparsely sampled random positions along the shape of each dynamic stimulus across time (e.g., Beintema and Lappe,
<xref rid="B10" ref-type="bibr">2002</xref>
), using Gabor patches that provided orientation signals that were either congruent or incongruent with the underlying sampled form. By systematically putting position and orientation information into conflict under varying conditions of sensory uncertainty, we sought to determine the principal rules governing form-based visual cue integration, and to test whether these computational principles apply generally to both biological and non-biological stimuli.</p>
<p>To preview the results, we discovered a characteristic trade-off in the dominance of position and orientation depending jointly on carrier spatial frequency, envelope size, and the number of sampled Gabor elements in the display. Specifically, the appearance of dynamic form was consistent with orientation cues when orientation reliability was high and/or position reliability was low, and vice versa. Importantly, we found no significant differences in the pattern of behavioral results between biological and non-biological stimuli, casting doubt on the notion that biological motion may be specialized in the human brain, at least in specific terms of form analysis. In order to explain individual behavioral data quantitatively, we developed a model of dynamic form analysis using the framework of Bayesian statistics and probabilistic sensory cue integration (for a review see Yuille and Bulthoff,
<xref rid="B70" ref-type="bibr">1996</xref>
). Bayesian probability theory offers a principled and rigorous method for optimal decision making under conditions with conflicting or uncertain information, and has found support in many aspects of visual perception, including object recognition (Liu et al.,
<xref rid="B41" ref-type="bibr">1995</xref>
), contour integration (Feldman,
<xref rid="B20" ref-type="bibr">2001</xref>
), motion perception (Weiss and Adelson,
<xref rid="B69" ref-type="bibr">1998</xref>
), depth perception (Landy et al.,
<xref rid="B34" ref-type="bibr">1995</xref>
), and multisensory cue integration (Ernst and Banks,
<xref rid="B19" ref-type="bibr">2002</xref>
; Alais and Burr,
<xref rid="B4" ref-type="bibr">2004</xref>
). Results of our model were highly consistent with individual subject data for both biological and non-biological tasks, supporting the hypothesis of a general-purpose mechanism for dynamic form analysis that integrates orientation and position information in a probabilistic and rational manner according to low-level sensory cue reliability.</p>
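The reliability-weighted combination rule at the heart of this account can be sketched as follows. This is a minimal illustration assuming independent Gaussian likelihoods for the two cues; the function name and interface are hypothetical and are not the authors' implementation.

```python
import numpy as np

def integrate_cues(est_position, sigma_position, est_orientation, sigma_orientation):
    """Combine two cue-based estimates by inverse-variance (reliability) weighting.

    Each cue contributes a point estimate and an uncertainty (sigma); under
    Gaussian assumptions the optimal combination weights each estimate by its
    reliability, defined as 1 / sigma^2, and the combined estimate is more
    reliable than either cue alone.
    """
    r_pos = 1.0 / sigma_position**2      # reliability of position cue
    r_ori = 1.0 / sigma_orientation**2   # reliability of orientation cue
    w_pos = r_pos / (r_pos + r_ori)      # normalized weight for position
    combined = w_pos * est_position + (1.0 - w_pos) * est_orientation
    combined_sigma = np.sqrt(1.0 / (r_pos + r_ori))
    return combined, w_pos, combined_sigma

# When position is twice as reliable (half the sigma), it receives 4x the weight.
est, w, sig = integrate_cues(0.0, 1.0, 10.0, 2.0)
```

This captures the qualitative trade-off reported above: as orientation uncertainty grows, its weight shrinks and position increasingly dominates, and vice versa.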
</sec>
<sec>
<title>Experiment 1</title>
<sec>
<title>Participants</title>
<p>Twenty participants (17 female, mean age = 20.8 ± 3.2 years) were recruited through the Department of Psychology subject pool at the University of California, Los Angeles (UCLA), and were given course credit for participation. All participants had normal or corrected vision, gave informed consent approved by the UCLA Institutional Review Board and were naïve to the purpose and stimuli used in the studies.</p>
</sec>
<sec>
<title>Materials and methods</title>
<p>All stimuli were created using Matlab (MathWorks Inc.) and the Psychophysics Toolbox (Brainard,
<xref rid="B12" ref-type="bibr">1997</xref>
; Pelli,
<xref rid="B47" ref-type="bibr">1997</xref>
) and were displayed on a calibrated monitor with a gray background (60 Hz, background luminance 16.2 cd/m
<sup>2</sup>
) powered by a Dell PC running Windows XP. Experiments were conducted in a dark room with a chin rest to maintain a constant viewing distance (35 cm).</p>
<p>The biological motion pattern of human walking was obtained from the Carnegie Mellon Graphics Lab Motion Capture Database, available free online (
<ext-link ext-link-type="uri" xlink:href="http://mocap.cs.cmu.edu">http://mocap.cs.cmu.edu</ext-link>
). Software developed in our laboratory was used to convert the raw motion capture files to point-light format, with 11 points representing the head, mid-shoulder, elbows, wrists, knees, and feet (van Boxtel and Lu,
<xref rid="B66" ref-type="bibr">2013</xref>
). The horizontal translation component of movement was subtracted so that the animation appeared to walk in place as if on a treadmill, and was trimmed to one walking cycle consisting of 60 frames. The rotating non-biological motion stimulus comprised a sequence of square shape images that were rotated by increments of 6° per frame so that the animation went through one full rotation over the course of 60 frames. Leftward and rightward walkers were created from the same animation sequence by reflecting across the vertical meridian, and clockwise and counter-clockwise rotating square stimuli were created by playing the sequence in either forward or reverse order. Both the biological and non-biological stimuli were presented at a rate of 60 Hz and were equated in vertical size to subtend approximately 9° in height. Stimuli were presented for a duration of 1 s on each trial, so that the biological stimulus completed one full gait cycle (two steps), and the non-biological stimulus completed one full rotation cycle (360°).</p>
<p>In order to discover the mechanisms underlying dynamic form analysis, we limited the influence of local image motion on perceptual discriminations by creating dynamic stimuli using the one-frame limited-lifetime sampling technique (Beintema and Lappe,
<xref rid="B10" ref-type="bibr">2002</xref>
; Beintema et al.,
<xref rid="B9" ref-type="bibr">2006</xref>
). By randomly re-sampling a subset of points on every frame of the sequence, local motion information has been shown to be severely disrupted in this type of display, and visual analysis has been argued to proceed on the basis of sequential global form information (Lange and Lappe,
<xref rid="B36" ref-type="bibr">2006</xref>
). For the biological motion sequence, we first converted the point-light stimuli to a sequence of stick figures by connecting the points according to the anatomy of body structure. The stick figures contained nine limb segments representing the upper and lower arms and legs, as well as the upper torso (shoulders connected to head point). We varied the total number of elements that were randomly sampled per frame depending on stimulus type. For the biological animation sequence, we randomly selected 2, 4, or 6 different limb segments on each frame, and then chose a random position to sample from within the length of each selected limb segment (Figure
<xref ref-type="fig" rid="F1">1A</xref>
). For the non-biological motion sequence, we randomly selected 4, 6, or 8 elements from among the four edge segments comprising the rigid square shape on each frame (Figure
<xref ref-type="fig" rid="F1">1B</xref>
).</p>
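The per-frame sampling procedure can be sketched as follows; this is a minimal illustration of the one-frame limited-lifetime scheme, with hypothetical names, not the authors' code.

```python
import numpy as np

def sample_limb_points(segments, n_samples, rng):
    """One-frame limited-lifetime sampling: pick n_samples segments at random,
    then a uniformly random point along each chosen segment.

    segments: list of (p1, p2) endpoint pairs, each an (x, y) coordinate.
    Returns an (n_samples, 2) array of sampled positions. Repeating this
    independently on every frame disrupts local motion signals, leaving
    sequential global form information.
    """
    chosen = rng.choice(len(segments), size=n_samples, replace=False)
    points = []
    for i in chosen:
        p1 = np.asarray(segments[i][0], dtype=float)
        p2 = np.asarray(segments[i][1], dtype=float)
        t = rng.uniform()              # random position along the segment
        points.append(p1 + t * (p2 - p1))
    return np.array(points)
```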
<fig id="F1" position="float">
<label>Figure 1</label>
<caption>
<p>
<bold>Schematic of stimulus construction for biological (A) and non-biological (B) stimuli</bold>
. The left of each panel shows a single static frame from the animation sequence and an example of 6 random spatial samples denoted by dotted circles. The middle panels show the extraction of orientation from the nearest line segment of the stick figure for a congruent (top) and incongruent (middle) stimulus, as well as a random orientation stimulus (bottom). The right panels show an example of what the stimulus would look like to the observer. Note that the spatial positions of the elements are identical across the three different orientation conditions. Dynamic stimuli in the experiment were created by performing this random sampling procedure independently on each frame of the animation sequence. For video demonstration of incongruent stimuli, see Supplemental Videos
<xref ref-type="supplementary-material" rid="SM1">1</xref>
,
<xref ref-type="supplementary-material" rid="SM2">2</xref>
.</p>
</caption>
<graphic xlink:href="fnhum-08-00091-g0001"></graphic>
</fig>
<p>In addition to the locations of the samples, we kept track of the orientation of the limb or edge segment from which each point was sampled, and also calculated the orientation of the nearest limb segment from the corresponding frame of the stimulus with the opposite movement direction. For instance, if the front lower leg was sampled from a rightward walking stimulus on the first frame, then we would extract the orientation of the back lower leg of the leftward walking stimulus on that same frame (Figure
<xref ref-type="fig" rid="F1">1A</xref>
). Similarly, for a clockwise rotating animation of non-biological motion, we extracted the orientation of the nearest edge segment on a counter-clockwise rotating stimulus (Figure
<xref ref-type="fig" rid="F1">1B</xref>
). Depending on stimulus condition, we manipulated orientation information to be either congruent with the underlying spatially-sampled stimulus, incongruent (i.e., consistent with opposite moving stimulus), or randomized. When randomized, we applied a random offset to the orientation of each element independently, drawn uniformly between 0° and 180°.</p>
<p>In contrast to previous studies that investigated dynamic form analysis in biological motion using broadband positional tokens (e.g., dots; Beintema and Lappe,
<xref rid="B10" ref-type="bibr">2002</xref>
), we used narrowband Gabor disks that were capable of dually representing both the position and orientation of sampled regions along the shape of each stimulus (Figure
<xref ref-type="fig" rid="F1">1</xref>
). Similar stimuli with multiple Gabor elements have been used in previous research to examine global motion perception (Amano et al.,
<xref rid="B5" ref-type="bibr">2009</xref>
; Lee and Lu,
<xref rid="B37" ref-type="bibr">2010</xref>
) and biological motion perception (Lu,
<xref rid="B43" ref-type="bibr">2010</xref>
). Gabor patches are well-suited to study these visual processes because they are well-matched to the band pass filtering properties of early visual cortex in terms of spatial frequency, orientation, and spatial scale. All Gabor disks had a fixed phase (sine) and the same suprathreshold level of local contrast (33%). In Experiment 1, the spatial extent of Gabor disks, represented by the standard deviation of the Gaussian envelope, was set to 0.84° visual angle. The only parameter of the Gabor disks besides orientation that changed from trial to trial was the carrier spatial frequency, which allowed us to manipulate the reliability of orientation information on each trial (Burr and Wijesundra,
<xref rid="B13" ref-type="bibr">1991</xref>
; Day and Loffler,
<xref rid="B18" ref-type="bibr">2009</xref>
). Decreasing carrier spatial frequency within a small fixed-size envelope (e.g., Gabor) causes a concomitant increase in orientation bandwidth, which has the effect of increasing perceptual uncertainty about the true orientation of the grating (Dakin et al.,
<xref rid="B16" ref-type="bibr">1999</xref>
).</p>
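A Gabor element of the kind used here can be rendered as below; this is a generic sine-phase Gabor sketch (the function name is hypothetical), illustrating how carrier spatial frequency and envelope size enter as independent parameters.

```python
import numpy as np

def make_gabor(size_px, sigma_px, sf_cpp, theta_deg, phase=0.0, contrast=0.33):
    """Render a sine-phase Gabor: a sinusoidal carrier windowed by a Gaussian.

    sf_cpp: carrier spatial frequency in cycles per pixel. With a fixed
    envelope (sigma_px), lowering sf_cpp broadens the orientation bandwidth,
    making the carrier orientation less reliable, which is the manipulation
    exploited in Experiment 1.
    """
    half = size_px // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    theta = np.deg2rad(theta_deg)
    # coordinate along the carrier's direction of luminance modulation
    xp = x * np.cos(theta) + y * np.sin(theta)
    carrier = np.sin(2.0 * np.pi * sf_cpp * xp + phase)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma_px**2))
    return contrast * carrier * envelope

g = make_gabor(64, 10.0, 0.1, 45.0)
```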
<p>Participants were assigned to one of two groups that reported either the walking direction (leftward, rightward) of biological stimuli (
<italic>n</italic>
= 10), or the rotation direction (clockwise, counter-clockwise) of non-biological square stimuli (
<italic>n</italic>
= 10). Subjects indicated their responses with the left and right arrow keys on a keyboard. The experiment had a 3 × 3 × 3 within-subjects design with 3 orientation conditions (congruent, incongruent, random), 3 spatial frequencies (0.25, 0.75, 1.25 cyc/°), and 3 numbers of sampled elements per frame (2, 4, or 6 for biological walkers; 4, 6, or 8 for non-biological rotating squares). Stimulus type (biological vs. non-biological) also served as a between-subjects factor. All trial types were balanced and randomly intermixed in two blocks of 162 trials, resulting in 36 trials per condition and lasting less than 1 h in total duration.</p>
</sec>
<sec>
<title>Results</title>
<p>For each experimental condition, we computed the proportion of trials in which observers reported perceiving the stimulus movement direction consistent with position cues. Hence, values closer to zero in the incongruent cues condition would indicate a reversal of appearance away from positional cues and toward the stimulus movement direction defined by orientation cues. Mean group data from the biological and non-biological conditions are displayed in Figure
<xref ref-type="fig" rid="F2">2</xref>
. We performed a 3 × 3 × 3 repeated measures ANOVA, with task (biological vs. non-biological) serving as a between-subjects factor. There was a significant main effect of orientation,
<italic>F</italic>
<sub>(2, 36)</sub>
= 356.2,
<italic>p</italic>
< 0.001, due primarily to the fact that incongruent orientation caused a significant drop in the proportion of responses consistent with positional cues. Overall, randomizing orientation appeared to have only a small impact on discrimination performance, indicating that observers could effectively ignore noisy orientation cues to perceive the dynamic stimulus on the basis of position cues. Importantly, the strength of the perceptual reversal effect in the incongruent cues condition was modulated by other stimulus parameters. Orientation had a stronger influence on perception as spatial frequency increased,
<italic>F</italic>
<sub>(2, 36)</sub>
= 138.5,
<italic>p</italic>
< 0.001, and also had a stronger influence when there were fewer sampled elements in the display,
<italic>F</italic>
<sub>(2, 36)</sub>
= 236.9,
<italic>p</italic>
< 0.001. The effect of spatial frequency was likely due to the fact that orientation was more reliable and apparent as spatial frequency increased (Burr and Wijesundra,
<xref rid="B13" ref-type="bibr">1991</xref>
). The effect of the number of sampled elements suggests that orientation also tended to dominate when there was increased ambiguity and uncertainty about the structure of the underlying stimulus (Day and Loffler,
<xref rid="B18" ref-type="bibr">2009</xref>
).</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption>
<p>
<bold>Mean group data from Experiment 1 for biological (A) and non-biological (B) stimuli</bold>
. The panels from left to right show the data from conditions with low, medium, and high spatial frequency elements, respectively. The x-axis represents the three conditions with varying numbers of sampled points per frame. The y-axis shows the proportion of responses that were consistent with the stimulus movement direction defined by the positions of the spatial samples. Values closer to zero in the incongruent cues condition (open squares) indicate perceptual reversals of stimulus movement direction consistent with orientation cues instead of position cues. Error bars represent SEM.</p>
</caption>
<graphic xlink:href="fnhum-08-00091-g0002"></graphic>
</fig>
<p>Comparing performance in the key incongruent orientation condition between biological and non-biological stimuli, we found that stimulus type was a non-significant between-subjects factor,
<italic>F</italic>
<sub>(1, 18)</sub>
= 3.5,
<italic>p</italic>
= 0.075. This result indicates that the pattern of performance was not significantly different between the biological and non-biological stimulus conditions. This finding supports the hypothesis that a general-purpose mechanism of dynamic form analysis may underlie performance for both dynamic stimulus types, regardless of the complexity of movement or the biological nature of the visual stimulus. In other words, performance is so well-matched between these two disparate stimulus types in terms of spatial frequency, orientation, and the number of sampled elements, that the simplest and most likely explanation is a common computational mechanism that is not specialized for either biological or non-biological motion.</p>
</sec>
</sec>
<sec>
<title>Experiment 2</title>
<p>The results of Experiment 1 show that orientation reliability played a critical role in determining the degree to which orientation influenced the perceptual appearance of dynamic biological and non-biological stimuli. Given that position and orientation cues appear to directly compete with each other in this process, we aimed to examine how changes in the reliability of position cues would interact with changes in orientation reliability to influence global dynamic form perception. Previous research has shown that position discrimination performance is more reliable for small as compared to large elements (Toet and Koenderink,
<xref rid="B61" ref-type="bibr">1988</xref>
), and is independent of carrier spatial frequency (Hess and Holliday,
<xref rid="B27" ref-type="bibr">1992</xref>
; Day and Loffler,
<xref rid="B18" ref-type="bibr">2009</xref>
). Hence in Experiment 2, in addition to manipulating spatial frequency across trials, we manipulated the spatial extent of Gabor elements in our displays by varying the standard deviation of the Gaussian envelope. If orientation and position truly compete based on relative cue reliability, we would expect to find systematic trade-offs in the dominance of position and orientation cues as a function of both spatial frequency and Gabor size.</p>
<p>Furthermore, for each participant we quantified the extent to which orientation reliability changed as a function of spatial frequency, and how position reliability changed as a function of Gabor size, using two types of low-level feature discrimination tasks. In the basic orientation discrimination task, participants judged whether a briefly-flashed Gabor disk was rotated clockwise or counterclockwise from vertical, with varying orientation offsets. In the basic position discrimination task, participants judged whether a briefly-flashed Gabor disk was positioned to the left or right of two flanking Gabor patches located above and below the central test patch, with varying position offsets. We varied spatial frequency and Gabor size across trials and modeled individual subject data with psychometric functions in order to derive empirical estimates of subjective cue reliability.</p>
<sec>
<title>Participants</title>
<p>Seven participants (5 female, mean age = 22.6 ± 2.7 years) were recruited through the UCLA Department of Psychology subject pool and given course credit or payment for participation. All participants had normal or corrected vision, gave informed consent approved by the UCLA Institutional Review Board and were naïve to the purpose and stimuli used in the studies.</p>
</sec>
<sec>
<title>Materials and methods</title>
<p>Stimulus construction and display methods were generally the same as in Experiment 1 with a few notable exceptions. Each subject performed two blocks of trials discriminating the walking direction of biological stimuli and two blocks of trials discriminating the rotation direction of non-biological stimuli. All trials presented in these blocks were of the type with incongruent position and orientation cues, while two independent features of the Gabor elements were manipulated. In contrast to Experiment 1 in which Gabor size was fixed, we varied the spatial extent of the Gabor envelope (0.42° or 0.84° SD). In the condition with 0.84° spatial extent, spatial frequency was low, med-low, med-high, and high (0.2, 0.4, 0.8, or 1.6 cyc/°, respectively). In order to equate orientation bandwidth across the two envelope size conditions, we doubled the levels of spatial frequency in the condition with the half-sized (0.42° SD) Gabor envelope (0.4, 0.8, 1.6, or 3.2 cyc/°, respectively). By equating orientation bandwidth, we maintained the reliability of orientation information across the two envelope size conditions (Daugman,
<xref rid="B17" ref-type="bibr">1985</xref>
).</p>
<p>All Gabor elements had a fixed phase (sine) and a suprathreshold local contrast level of 33%. For each block of trials, envelope size was fixed while spatial frequency varied randomly across trials. The number of elements sampled per frame was also varied randomly across trials depending on stimulus type (2, 4, 6 for walker; 4, 6, 8 for square shape), analogous to Experiment 1. Each block comprised 336 trials, corresponding to 28 trials per condition. The order of completion of conditions was counterbalanced across participants. The experiment had a 4 × 3 × 2 × 2 within-subjects design with 4 spatial frequencies, 3 numbers of sampled elements per frame, 2 Gabor sizes, and 2 stimulus types (biological or non-biological).</p>
<p>In addition, participants performed two types of lower-level feature discrimination tasks. In the orientation discrimination task, two reference Gabor elements with vertical orientation were placed 12° above and below a central fixation cross. Gabor elements had a fixed size of 0.84° standard deviation, and spatial frequency was varied across the same four levels as the dynamic shape discrimination tasks (0.2, 0.4, 0.8, or 1.6 cyc/°). On each trial, the fixation cross disappeared and a test Gabor patch was flashed centrally for 17 ms. The test patch had a vertical orientation plus a random offset. The range of offsets varied across eight levels between −20° and 20° depending on the particular spatial frequency condition, in order to sample the entire range of the psychometric function. Participants reported whether the test patch was perceived to be rotated clockwise or counterclockwise relative to the vertical reference patches using the left and right arrow keys on the keyboard. In total, participants completed 768 trials, resulting in 24 trials per condition (4 spatial frequencies and 8 orientation offsets). Cumulative Gaussian functions were used to fit individual subject data, and 1/slope of the psychometric curves served as estimates of orientation cue reliability for each level of spatial frequency.</p>
<p>In the position discrimination task, we varied the standard deviation of the Gabor envelope (0.42° or 0.84°), but fixed spatial frequency (0.4 or 0.8 cyc/°, respectively), across two blocks of trials. Two reference Gabor elements with vertical orientation were placed above and below a central fixation cross. The distance between each reference Gabor and the central cross varied depending on size condition (6° or 12°, respectively). On each trial, the fixation cross disappeared and a test Gabor patch was flashed centrally for 17 ms with a random horizontal offset in terms of spatial position. The range of offsets varied across eight levels between −0.67° and 0.67° visual angle. Participants reported whether the test patch was perceived to be located to the left or right of the reference patches using the left and right arrow keys on the keyboard. The orientation of the test patch was orthogonal to the vertical reference patches to avoid the potential use of orientation cues, such as Vernier acuity, for the position discrimination task (Hess and Holliday,
<xref rid="B27" ref-type="bibr">1992</xref>
). In total, participants completed 512 trials, resulting in 32 trials per condition (2 Gabor sizes, 8 spatial offsets). Cumulative Gaussian functions were fit to individual subject data, and 1/slope of the psychometric curves served as estimates of positional cue reliability for each level of envelope size.</p>
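The psychometric fitting step can be sketched as follows. This is a dependency-free illustration using a coarse grid search for the maximum-likelihood fit (the paper does not specify the fitting procedure, and in practice a numerical optimizer would be used); the fitted sigma indexes the steepness of the cumulative Gaussian, from which a reliability estimate follows.

```python
import numpy as np
from math import erf

def cum_gauss(x, mu, sigma):
    """Cumulative Gaussian psychometric function."""
    return 0.5 * (1.0 + np.vectorize(erf)((np.asarray(x, float) - mu) / (sigma * np.sqrt(2.0))))

def fit_psychometric(offsets, n_right, n_total):
    """Fit (mu, sigma) by maximizing the binomial log-likelihood over a grid.

    A steeper curve (smaller sigma) means finer discrimination; the slope at
    the point of subjective equality is inversely proportional to sigma, so
    sigma serves as an estimate of the cue's uncertainty.
    """
    offsets = np.asarray(offsets, float)
    best, best_ll = (0.0, 1.0), -np.inf
    for mu in np.linspace(offsets.min(), offsets.max(), 41):
        for sigma in np.linspace(0.05, 3.0, 60):
            p = np.clip(cum_gauss(offsets, mu, sigma), 1e-6, 1 - 1e-6)
            ll = np.sum(n_right * np.log(p) + (n_total - n_right) * np.log(1 - p))
            if ll > best_ll:
                best_ll, best = ll, (mu, sigma)
    return best

# Simulated observer with mu = 0, sigma = 0.5 (in units of stimulus offset)
x = np.linspace(-1.5, 1.5, 8)
n_right = np.round(cum_gauss(x, 0.0, 0.5) * 100)   # idealized response counts
mu_hat, sigma_hat = fit_psychometric(x, n_right, 100)
```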
</sec>
<sec>
<title>Results</title>
<p>Mean group results from Experiment 2 are displayed in Figure
<xref ref-type="fig" rid="F3">3</xref>
. A 4 × 3 × 2 × 2 within subjects ANOVA revealed several significant results. Replicating the results of Experiment 1, there was an increasing influence of orientation as spatial frequency increased,
<italic>F</italic>
<sub>(3, 18)</sub>
= 136.5,
<italic>p</italic>
< 0.001, as well as an increasing influence of orientation as the number of sampled elements decreased,
<italic>F</italic>
<sub>(2, 12)</sub>
= 76.5,
<italic>p</italic>
< 0.001. Also consistent with the results of Experiment 1, stimulus type was a non-significant factor, as performance between the biological and non-biological tasks was very similar,
<italic>F</italic>
<sub>(1, 6)</sub>
= 3.2,
<italic>p</italic>
= 0.12. We also found that Gabor size significantly modulated the relative influence of positional cues,
<italic>F</italic>
<sub>(1, 6)</sub>
= 79.5,
<italic>p</italic>
< 0.001. Specifically, there was an increasing influence of positional cues as Gabor size decreased, while (importantly) orientation bandwidth was held constant across the two size conditions. The systematic pattern of changes in the appearance of biological and non-biological dynamic form as a function of both spatial frequency and Gabor size underscores two key points about the global unit formation process in dynamic form analysis. First, information about element position clearly competes with information about element orientation to determine the perceived global stimulus shape. Second, changes in the relative influence of position and orientation on behavioral performance appear to be driven by changes in the subjective reliability, or uncertainty, of the low-level visual cues.</p>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption>
<p>
<bold>Mean group behavioral data from Experiment 2 for biological (A) and non-biological (B) stimuli</bold>
. The panels from left to right show the data from conditions with larger and smaller elements, respectively, as indicated by the standard deviation of the Gaussian envelope (0.84° or 0.42°). Different spatial frequency conditions are indicated by grayscale squares (see text for SF values in the low, med-low, med-high, and high conditions). The y-axis shows the proportion of responses that were consistent with the stimulus movement direction defined by the positions of the elements. Since all trials were of the type with incongruent orientation and position cues, values closer to zero indicate perceptual reversals of stimulus movement direction consistent with orientation cues. Error bars represent SEM.</p>
</caption>
<graphic xlink:href="fnhum-08-00091-g0003"></graphic>
</fig>
<p>In support of this claim, we measured performance in a basic position discrimination task and a basic orientation discrimination task using the same levels of spatial frequency and Gabor size as in the main experiment and within the same group of subjects. Psychometric curves and mean group estimates of cue reliability (1/slope) are displayed in Figure
<xref ref-type="fig" rid="F4">4</xref>
. Individual slope estimates for each condition are shown in Table
<xref ref-type="table" rid="T1">1</xref>
. There was a clear monotonic increase in the precision with which observers could discriminate the orientation of a single Gabor disk as a function of carrier spatial frequency (Figure
<xref ref-type="fig" rid="F4">4A</xref>
). Similarly, the precision with which observers could discriminate the position of a single Gabor disk increased with decreasing Gabor size (Figure
<xref ref-type="fig" rid="F4">4B</xref>
). While these results provide qualitative evidence to support our hypothesis, they also provide empirically-derived quantitative estimates of cue reliability for each subject. Hence, the next goal of the current study was to develop a model of dynamic form analysis in the framework of Bayesian statistics and probabilistic cue combination in order to explain individual subject data on the basis of low-level cue reliability.</p>
<fig id="F4" position="float">
<label>Figure 4</label>
<caption>
<p>
<bold>Mean group data from Experiment 2 for the low-level orientation (A) and position (B) discrimination tasks</bold>
. The left panels show example stimuli from the experiment. The middle panels show mean group data for each condition of spatial frequency as a function of orientation offset
<bold>(A)</bold>
, or each condition of Gabor size as a function of position offset
<bold>(B)</bold>
. The data are fit with cumulative Gaussian psychometric curves. The right panels show mean group estimates of cue reliability, derived from 1/slope of the psychometric fits. Error bars represent SEM.</p>
</caption>
<graphic xlink:href="fnhum-08-00091-g0004"></graphic>
</fig>
<table-wrap id="T1" position="float">
<label>Table 1</label>
<caption>
<p>
<bold>Overview of slope parameter estimates for each subject from the low-level tasks in Experiment 2</bold>
.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" rowspan="1" colspan="1">
<bold>Observer</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>Low SF</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>Med-low SF</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>Med-high SF</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>High SF</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>Large size</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>Small size</bold>
</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">1</td>
<td align="char" char="." rowspan="1" colspan="1">8.9</td>
<td align="char" char="." rowspan="1" colspan="1">4.45</td>
<td align="char" char="." rowspan="1" colspan="1">2.36</td>
<td align="char" char="." rowspan="1" colspan="1">1.82</td>
<td align="char" char="." rowspan="1" colspan="1">4.41</td>
<td align="char" char="." rowspan="1" colspan="1">3.87</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">2</td>
<td align="char" char="." rowspan="1" colspan="1">12.98</td>
<td align="char" char="." rowspan="1" colspan="1">8.29</td>
<td align="char" char="." rowspan="1" colspan="1">4.95</td>
<td align="char" char="." rowspan="1" colspan="1">3.68</td>
<td align="char" char="." rowspan="1" colspan="1">4.49</td>
<td align="char" char="." rowspan="1" colspan="1">3.84</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">3</td>
<td align="char" char="." rowspan="1" colspan="1">12.22</td>
<td align="char" char="." rowspan="1" colspan="1">6.2</td>
<td align="char" char="." rowspan="1" colspan="1">3.34</td>
<td align="char" char="." rowspan="1" colspan="1">2.27</td>
<td align="char" char="." rowspan="1" colspan="1">8.49</td>
<td align="char" char="." rowspan="1" colspan="1">4.86</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">4</td>
<td align="char" char="." rowspan="1" colspan="1">11.82</td>
<td align="char" char="." rowspan="1" colspan="1">9.59</td>
<td align="char" char="." rowspan="1" colspan="1">4.88</td>
<td align="char" char="." rowspan="1" colspan="1">2.62</td>
<td align="char" char="." rowspan="1" colspan="1">2.45</td>
<td align="char" char="." rowspan="1" colspan="1">2.08</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">5</td>
<td align="char" char="." rowspan="1" colspan="1">7.34</td>
<td align="char" char="." rowspan="1" colspan="1">4.71</td>
<td align="char" char="." rowspan="1" colspan="1">2.2</td>
<td align="char" char="." rowspan="1" colspan="1">1.73</td>
<td align="char" char="." rowspan="1" colspan="1">2.44</td>
<td align="char" char="." rowspan="1" colspan="1">1.63</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">6</td>
<td align="char" char="." rowspan="1" colspan="1">5.59</td>
<td align="char" char="." rowspan="1" colspan="1">4.33</td>
<td align="char" char="." rowspan="1" colspan="1">3.08</td>
<td align="char" char="." rowspan="1" colspan="1">1.75</td>
<td align="char" char="." rowspan="1" colspan="1">3.92</td>
<td align="char" char="." rowspan="1" colspan="1">3.04</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">7</td>
<td align="char" char="." rowspan="1" colspan="1">5.28</td>
<td align="char" char="." rowspan="1" colspan="1">3.57</td>
<td align="char" char="." rowspan="1" colspan="1">2.09</td>
<td align="char" char="." rowspan="1" colspan="1">2.02</td>
<td align="char" char="." rowspan="1" colspan="1">3.73</td>
<td align="char" char="." rowspan="1" colspan="1">2.65</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p>The first four columns on the left show slope (σ<sub>θ</sub>) values associated with orientation discrimination performance for four spatial frequencies (0.2, 0.4, 0.8, 1.6 cyc/°, respectively). The last two columns show slope (σ<sub>p</sub>) values associated with position discrimination performance for two Gabor sizes (0.84°, 0.42° SD, respectively).</p>
</table-wrap-foot>
</table-wrap>
</sec>
</sec>
<sec>
<title>Model</title>
<p>The present findings pose challenges to both types of current computational models of biological motion, due to the lack of computational mechanisms in these models for explicitly processing both position and orientation information. For example, in the form-based model proposed by Lange and Lappe (
<xref rid="B36" ref-type="bibr">2006</xref>
) and recently updated by Theusner et al. (
<xref rid="B56" ref-type="bibr">2014</xref>
), template-matching is based exclusively on the comparison between observed element positions and stored templates, effectively ignoring the local shape and other characteristics of the elements themselves. This limitation arises from the fact that template-matching is computed through a Euclidean distance metric, comparing the location of perceived elements to locations in stored global body posture templates. Since these models lack an explicit mechanism for processing based on element orientations, as well as an essential pooling mechanism for integrating information about position and orientation, the models would likely fail to predict perceptual reversals for incongruent cues. Moreover, they would certainly fail to account for systematic differences in perception as a function of cue reliability (e.g., changes in spatial frequency and Gabor size).</p>
<p>At the same time, the motion pathway of the computational model developed by Giese and Poggio (
<xref rid="B22" ref-type="bibr">2003</xref>
) would also have difficulty processing the stimuli developed in our study, due to the lack of reliable local image motion information resulting from the sparse random sampling procedure. The form pathway in Giese and Poggio's model does incorporate information about element orientation by virtue of having a dense layer of Gabor filters that serve as the first level of spatial analysis. However, their model lacks a second-order mechanism capable of spatial processing that is invariant to element orientation (e.g., “position label” detector), as well as a mechanism for integrating information based on element position and orientation. In contrast to Lange and Lappe's model, the Giese and Poggio model relies too much on orientation, and predicts perceptual reversals on most trials. We have run preliminary simulations for each of these models in our laboratory using stimuli from the current experiments, and we have confirmed their predictions.</p>
<p>Here we present a computational framework of dynamic form analysis that affords several important advances relative to previous models of biological motion recognition, and that has the capacity to generalize to dynamic form analysis for other non-biological stimuli. The framework is built on two modules for processing different visual cues—position and orientation. The position module performs frame-by-frame template-matching based exclusively on the “position labels” of elements in the display irrespective of orientation. Consequently, processing in this module is similar to the model proposed by Lange and Lappe (
<xref rid="B36" ref-type="bibr">2006</xref>
). The orientation module performs frame-by-frame template-matching on the basis of orientation at each element position. As such, the orientation module is sensitive to both the orientation and relative position of sampled elements in the visual display.</p>
<p>We implemented local Bayesian models for each individual module specialized for position and orientation cues, respectively, and developed an integration operator to combine selections from individual modules with consideration of the reliability of each module. We will first review the local Bayesian models for each module, followed by the integration operator for analyzing the biological motion stimulus, and then discuss how to extend the same model to identify rotation direction of the square stimulus.</p>
<p>The model first assumes that the dynamic event sequence is represented as a set of probabilistic shape templates associated with uncertainty. As illustrated in Figure
<xref ref-type="fig" rid="F5">5</xref>
, the position templates follow a normal distribution,
<italic>T</italic>
<sub>
<italic>p</italic>
</sub>
~
<italic>N</italic>
(
<italic>C</italic>
<sub>
<italic>p</italic>
</sub>
, σ
<sup>2</sup>
<sub>Tp</sub>
). The means
<italic>C</italic>
<sub>
<italic>p</italic>
</sub>
were determined by locations in critical frames, including 8 equidistant frames from the leftward and rightward walking sequences, and 8 equidistant frames from the square rotation sequence, which served as stored templates. The variance σ
<sup>2</sup>
<sub>Tp</sub>
was determined by the maximum value of closest distances between two neighboring template frames. We found that σ
<sup>2</sup>
<sub>Tp</sub>
was 8 pixels for walker stimuli, and 13 pixels for rotating square stimuli. The orientation template distributions were defined in a similar way, so that they follow a normal distribution,
<italic>T</italic>
<sub>θ</sub>
~
<italic>N</italic>
(
<italic>C</italic>
<sub>θ</sub>
, σ
<sup>2</sup>
<sub>
<italic>T</italic>
θ</sub>
). The means
<italic>C</italic>
<sub>θ</sub>
indicated the orientation of a given point in the critical template frames, and the variances σ
<sup>2</sup>
<sub>
<italic>T</italic>
θ</sub>
were determined by the maximum value of orientation changes of corresponding limbs between neighboring templates. We found that σ
<sup>2</sup>
<sub>
<italic>T</italic>
θ</sub>
was 11° for walker stimuli, and 12° for rotating square stimuli. These probabilistic distributions of internal templates reflect the distinctiveness of each presumed template frame, determined by the variability of encoding key postures in biological motion or critical frames in object movements. We tested the model using varying numbers of critical frames in templates between 4 and 16 per animation sequence, and found that model results were robust.</p>
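The positional template uncertainty rule described above ("the maximum value of closest distances between two neighboring template frames") can be sketched as follows. This is an illustrative Python reading of that rule, not the authors' implementation; the function name `template_sigma` and the representation of frames as lists of 2-D points are assumptions.

```python
import math

def template_sigma(frames):
    """Positional template uncertainty: for each pair of neighboring
    template frames, find the largest point-to-nearest-point distance,
    then take the maximum over all neighboring pairs. Frames are lists
    of (x, y) tuples (e.g., joint positions or edge samples)."""
    def closest_dist(a, b):
        # for each point in frame a, distance to its nearest point in b;
        # keep the worst (largest) of these nearest-neighbor distances
        return max(min(math.dist(p, q) for q in b) for p in a)
    return max(closest_dist(frames[i], frames[i + 1])
               for i in range(len(frames) - 1))
```

Applied to the 8 key frames of a walking or rotation sequence, this yields a single spread parameter per stimulus type (8 or 13 pixels in the paper's templates).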
<fig id="F5" position="float">
<label>Figure 5</label>
<caption>
<p>
<bold>Schematic depiction of probabilistic shape templates for biological walking stimuli (top), and rotating square stimuli (bottom)</bold>
. The dashed square box highlights the probabilistic nature of the templates in terms of orientation information for each edge or limb segment. The mathematical terms are defined in the text.</p>
</caption>
<graphic xlink:href="fnhum-08-00091-g0005"></graphic>
</fig>
<p>For the position module,
<graphic xlink:href="fnhum-08-00091-i0007.jpg" position="float"></graphic>
<sub>
<italic>p</italic>
</sub>
, the recognition of the stimulus with sparsely sampled elements is based on the posterior probability of walking direction (i.e., L indicating the left walking direction) conditional on perceived locations
<bold>
<italic>x</italic>
</bold>
<sub>
<italic>p</italic>
</sub>
.
<graphic xlink:href="fnhum-08-00091-i0001.jpg" position="float"></graphic>
</p>
<p>To compute the posterior probability, a deterministic matching between the perceived locations and template locations was assumed to follow a Dirac delta distribution,
<italic>P</italic>
(
<bold>
<italic>x</italic>
</bold>
<sub>
<italic>p</italic>
</sub>
|
<italic>L</italic>
,
<italic>T</italic>
<sub>
<italic>p</italic>
</sub>
,
<graphic xlink:href="fnhum-08-00091-i0007.jpg" position="float"></graphic>
<sub>
<italic>p</italic>
</sub>
) = δ (
<bold>
<italic>x</italic>
</bold>
<sub>
<italic>p</italic>
</sub>
<italic>T</italic>
<sub>
<italic>p</italic>
</sub>
), with priors on probabilistic templates following a normal distribution with the mean of leftward walker,
<italic>T</italic>
<sub>
<italic>p</italic>
</sub>
|
<italic>L</italic>
~
<italic>N</italic>
(
<italic>C</italic>
<sup>
<italic>L</italic>
</sup>
<sub>
<italic>p</italic>
</sub>
, σ
<sup>2</sup>
<sub>
<italic>Tp</italic>
</sub>
) and prior probability of leftward walking direction
<italic>P</italic>
(
<italic>L</italic>
) = 0.5. Hence, the probability of determining leftward walking direction by the position module alone can be derived as:
<graphic xlink:href="fnhum-08-00091-i0002.jpg" position="float"></graphic>
</p>
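A minimal numerical sketch of the position module's computation is given below: Gaussian likelihoods of perceived positions against each stored template, marginalized over templates with equal direction priors. For brevity it treats positions as 1-D scalars (the paper's model uses 2-D locations), and the function names are illustrative, not the authors' code.

```python
import math

def gauss_logpdf(x, mu, sigma):
    """Log density of a univariate Gaussian."""
    return (-0.5 * math.log(2.0 * math.pi * sigma ** 2)
            - (x - mu) ** 2 / (2.0 * sigma ** 2))

def posterior_left(points, templates_L, templates_R, sigma_T):
    """Posterior probability of the leftward interpretation given
    perceived element positions, with P(L) = P(R) = 0.5. Each template
    is a list of expected positions, one per sampled element."""
    def log_ev(templates):
        # log of the template-averaged likelihood (log-sum-exp trick)
        logs = [sum(gauss_logpdf(x, t[i], sigma_T)
                    for i, x in enumerate(points)) for t in templates]
        m = max(logs)
        return (m + math.log(sum(math.exp(l - m) for l in logs))
                - math.log(len(templates)))
    eL, eR = log_ev(templates_L), log_ev(templates_R)
    m = max(eL, eR)
    return math.exp(eL - m) / (math.exp(eL - m) + math.exp(eR - m))
```

When the perceived positions match a leftward template closely, the posterior approaches 1; as template uncertainty σ<sub>T</sub> grows, it relaxes toward 0.5.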
<p>A similar computation can be derived for the orientation module,
<graphic xlink:href="fnhum-08-00091-i0007.jpg" position="float"></graphic>
<sub>θ</sub>
, based on Bayes rule:
<graphic xlink:href="fnhum-08-00091-i0003.jpg" position="float"></graphic>
</p>
<p>To combine the decisions from the two individual models for form analysis, Bayesian model averaging is used to take into consideration the uncertainty inherent in processing the sensory information within each module. The integrated decision is based on the weighted sum of posterior probabilities calculated from position and orientation modules:
<graphic xlink:href="fnhum-08-00091-i0004.jpg" position="float"></graphic>
</p>
<p>The weights are determined by the sensory noise inherent in individual modules and prior biases to favor position cues relative to orientation cues. Bayes rule is applied to assess the model evidence for each module:
<graphic xlink:href="fnhum-08-00091-i0005.jpg" position="float"></graphic>
</p>
<p>We assume that the variability in perceiving locations and orientation information (i.e., cue reliability) determines the likelihood terms for the two modules as
<graphic xlink:href="fnhum-08-00091-i0008.jpg" position="float"></graphic>
and
<graphic xlink:href="fnhum-08-00091-i0009.jpg" position="float"></graphic>
. The cue reliability, σ
<sub>
<italic>p</italic>
</sub>
and σ
<sub>θ</sub>
, can be measured using low-level tasks for each individual subject, as demonstrated in Figure
<xref ref-type="fig" rid="F4">4</xref>
and Table
<xref ref-type="table" rid="T1">1</xref>
. The ratio of prior probability of the two modules is the only free parameter in the model simulation, expressed as
<graphic xlink:href="fnhum-08-00091-i0010.jpg" position="float"></graphic>
. Hence the integrated decision from the position and orientation modules can be expressed as:
<graphic xlink:href="fnhum-08-00091-i0006.jpg" position="float"></graphic>
</p>
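The weighting scheme can be sketched numerically as below. This is a simplified reading of the integration rule, under the assumption that each module's evidence scales with its cue reliability (1/σ) and that the free parameter α multiplies the position module's prior weight; it is not the authors' exact likelihood computation.

```python
def integrate(post_pos, post_ori, sigma_p, sigma_theta, alpha):
    """Bayesian model averaging of the two module posteriors.
    post_pos / post_ori: posterior P(L | cues) from each module.
    sigma_p / sigma_theta: empirically measured cue uncertainties.
    alpha: prior bias ratio favoring the position module."""
    wp = alpha / sigma_p       # position-module weight (unnormalized)
    wt = 1.0 / sigma_theta     # orientation-module weight (unnormalized)
    z = wp + wt
    return (wp * post_pos + wt * post_ori) / z
```

With equal reliabilities and α = 1 the modules contribute equally; as α grows (or σ<sub>θ</sub> grows), the decision is pulled toward the position module's posterior, reproducing the qualitative trade-off seen in the behavioral data.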
<p>For individual human observers, we used their psychometric performance in the two low-level tasks to measure cue reliability σ
<sub>
<italic>p</italic>
</sub>
and σ
<sub>θ</sub>
, and then fit the module bias (a single α value) to minimize the discrepancy between model predictions and human performance across 48 experimental conditions.</p>
<p>Since this analysis was performed on a frame-by-frame basis and the stimuli are inherently dynamic, the final stage of the model must integrate the posterior probabilities across time to produce a decision about walking direction. On each stimulus frame we computed the maximum a posteriori (MAP) estimate of the best matching template from among the 8 templates representing each walking direction, and summed the MAP estimates across frames. The decision criterion of the model was to choose the walking direction with the greatest aggregate posterior probability across time.</p>
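The temporal decision rule just described can be sketched as a few lines of illustrative Python (hypothetical names; per-frame inputs are lists of per-template posterior probabilities for each direction):

```python
def decide_direction(post_L_frames, post_R_frames):
    """On each frame, take the maximum a posteriori (MAP) template
    probability within each direction's template set, sum these MAP
    values across frames, and choose the larger aggregate."""
    total_L = sum(max(frame) for frame in post_L_frames)
    total_R = sum(max(frame) for frame in post_R_frames)
    return "left" if total_L > total_R else "right"
```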
<p>Finally, applying the same computations to the square rotation task was straightforward with one modification at the decision stage. Because clockwise/counter-clockwise rotating squares involve the same set of template images, but with different temporal orders, the square rotation direction is defined by a specific sequence of frames. Accordingly, the decision stage must implement a mechanism for sequence selectivity during temporal integration. To achieve sequence selectivity, a temporal weighting operation was introduced as follows. First, using the max operator, the model determined the index of the template with the highest posterior probability across all templates within each set of clockwise and counter-clockwise rotating square templates. Next, a sequential matching score was computed by subtracting the max index of the previous frame from that of the current frame. If the sequence is in the correct order, the sequential matching score should be 1 (or close to 1), while deviations from 1 indicate poor sequential matching. Hence, a Gaussian weighting function was used in the form of
<italic>W</italic>
<sub>
<italic>s</italic>
</sub>
~
<italic>N</italic>
(1, σ
<sup>2</sup>
<sub>
<italic>s</italic>
</sub>
), which was centered at 1 frame to penalize sequences that were out of order by giving lower weight to frame sequences that were not ascending consecutively. The sigma of the Gaussian weighting function determined the specificity of sequence selectivity and was estimated to fit group level data (σ
<sup>2</sup>
<sub>
<italic>s</italic>
</sub>
= 3.6). To produce a final decision for the square rotation task, the maximum posterior probability from within each set of templates was multiplied by the appropriate weight on each frame and then summed across frames. The model chose the rotation direction with greatest aggregate weighted probability across time.</p>
<p>To model behavioral data for each observer, we ran 100 simulated trials for each experimental condition (48 total conditions). The empirically derived measures of cue reliability, σ
<sub>
<italic>p</italic>
</sub>
and σ
<sub>θ</sub>
, varied as a function of spatial frequency and Gabor size, and determined the relative weights associated with the orientation and position modules. To estimate the bias parameter, α, for each subject we used least-squares regression. This was the only parameter that was fit to individual subject data, while the relative performance of the model for each condition was determined solely by individual empirical estimates of cue reliability.</p>
<sec>
<title>Model results</title>
<p>Table
<xref ref-type="table" rid="T2">2</xref>
shows the reliability measure (root mean squared errors, RMS) and fitted module bias for 7 participants. Figure
<xref ref-type="fig" rid="F6">6</xref>
shows the group-averaged model results to compare directly with the human data in Figure
<xref ref-type="fig" rid="F3">3</xref>
. The model provides good fits to the human data, with an overall correlation across all tasks and observers of
<italic>r</italic>
<sub>(335)</sub>
= 0.90,
<italic>p</italic>
< 0.001;
<italic>RMS</italic>
= 0.095.</p>
<table-wrap id="T2" position="float">
<label>Table 2</label>
<caption>
<p>
<bold>Overview of the fitted bias parameter and root mean squared errors (RMS) fit between human and model data for each of 7 subjects</bold>
.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" rowspan="1" colspan="1">
<bold>Observer</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>Bias parameter (α)</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>RMS biological task</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>RMS non-biological task</bold>
</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">1</td>
<td align="char" char="." rowspan="1" colspan="1">5.25</td>
<td align="char" char="." rowspan="1" colspan="1">0.120</td>
<td align="char" char="." rowspan="1" colspan="1">0.126</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">2</td>
<td align="char" char="." rowspan="1" colspan="1">4.25</td>
<td align="char" char="." rowspan="1" colspan="1">0.098</td>
<td align="char" char="." rowspan="1" colspan="1">0.104</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">3</td>
<td align="char" char="." rowspan="1" colspan="1">8.0</td>
<td align="char" char="." rowspan="1" colspan="1">0.087</td>
<td align="char" char="." rowspan="1" colspan="1">0.095</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">4</td>
<td align="char" char="." rowspan="1" colspan="1">8.5</td>
<td align="char" char="." rowspan="1" colspan="1">0.064</td>
<td align="char" char="." rowspan="1" colspan="1">0.082</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">5</td>
<td align="char" char="." rowspan="1" colspan="1">4.25</td>
<td align="char" char="." rowspan="1" colspan="1">0.071</td>
<td align="char" char="." rowspan="1" colspan="1">0.076</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">6</td>
<td align="char" char="." rowspan="1" colspan="1">7.75</td>
<td align="char" char="." rowspan="1" colspan="1">0.094</td>
<td align="char" char="." rowspan="1" colspan="1">0.106</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">7</td>
<td align="char" char="." rowspan="1" colspan="1">6.25</td>
<td align="char" char="." rowspan="1" colspan="1">0.143</td>
<td align="char" char="." rowspan="1" colspan="1">0.072</td>
</tr>
</tbody>
</table>
</table-wrap>
<fig id="F6" position="float">
<label>Figure 6</label>
<caption>
<p>
<bold>Group-averaged results of model simulations for biological (A) and non-biological (B) stimuli</bold>
. The panels from left to right show the data from conditions with larger and smaller elements, respectively, as indicated by the standard deviation of the Gaussian envelope (0.84° or 0.42°). Different spatial frequency conditions are indicated by colored circles (see Experiment 2: Methods for SF values in the low, med-low, med-high, and high conditions). The y-axis shows the proportion of responses that were consistent with the stimulus movement direction defined by the positions of the elements. Error bars represent SEM.</p>
</caption>
<graphic xlink:href="fnhum-08-00091-g0006"></graphic>
</fig>
</sec>
</sec>
<sec sec-type="discussion" id="s2">
<title>Discussion</title>
<p>The current study documents several significant findings related to dynamic form analysis in the human visual system. First, using the limited lifetime sampling technique to weaken the usefulness of local image motion information and to specifically probe dynamic form processing, we created a novel stimulus in which Gabor elements provided orientation cues that were either congruent or incongruent with information provided by spatial position cues. Importantly, when these cues were put into conflict, we discovered a competitive trade-off in the contribution of position and orientation to perception. This effect appeared to depend strongly on the reliability of visual processing specialized for analyzing the low-level cues. For instance, we found that as spatial frequency increased, orientation cues contributed significantly more to perception of dynamic objects, and that as Gabor size decreased position cues contributed significantly more to perception.</p>
<p>Interestingly, in Experiment 1 we found that random orientation cues yielded minimal impact on discrimination performance, indicating that observers could discount noisy orientation cues that were incompatible with position cues to build a coherent global representation of object shape. At first glance, this result appears to contrast with that of a recent study showing that random orientation cues could impair performance in a task discriminating intact walkers from phase-scrambled walkers (Poljac et al.,
<xref rid="B49" ref-type="bibr">2011</xref>
). We attribute the difference in results primarily to the nature of our task, which was designed to directly assess the perceptual quality and appearance of walking direction as a function of changing orientation cues. In this regard, our approach is similar to that of Day and Loffler (
<xref rid="B18" ref-type="bibr">2009</xref>
), who pointed out that mechanisms for determining the perceptual appearance of an object may differ from mechanisms involved in fine discriminations based on object shape (Loffler,
<xref rid="B42" ref-type="bibr">2008</xref>
). The key importance of random orientation in the current study was to show that simply violating the consistency between element position and orientation with respect to the global shape (e.g., collinearity) could not explain the significant decrement in performance for the condition with incongruent orientation cues. Hence, the present findings provide strong evidence that changes in performance due to incongruent orientation were caused by genuine reversals in the perceptual appearance of global dynamic shape.</p>
<p>These results contribute to growing evidence that orientation provides a useful and important cue for retrieving information about global structure (Hess and Hayes,
<xref rid="B26" ref-type="bibr">1994</xref>
; Levi and Klein,
<xref rid="B38" ref-type="bibr">2000</xref>
; for a review see Loffler,
<xref rid="B42" ref-type="bibr">2008</xref>
), and extends these findings to perception of dynamic objects (Poljac et al.,
<xref rid="B49" ref-type="bibr">2011</xref>
; Thurman and Lu,
<xref rid="B59" ref-type="bibr">2013a</xref>
). An important methodological feature of our study was to put position and orientation cues into conflict in order to measure the relative contributions of each type of cue to perception. In fact, Day and Loffler (
<xref rid="B18" ref-type="bibr">2009</xref>
) recently used this approach to study static shape perception by positioning Gabor elements around the edge of a circle and making orientation consistent with the shape of a pentagon. Their results are comparable to ours, showing that orientation was more likely to “capture,” or override, position cues when spatial frequency or Gabor size increased, or when the number of elements in the display decreased. In their discussion, Day and Loffler (
<xref rid="B18" ref-type="bibr">2009</xref>
) reach the same conclusion that we do in the current study, arguing for a global shape processing mechanism that implements weighted cue combination according to sensory cue reliability. Our study goes one step further in developing a Bayesian model of global form analysis and explaining individual subject data using empirical estimates of cue reliability from two low-level discrimination tasks.</p>
<p>Another important aspect of our experimental design was to compare performance between two fundamentally different types of dynamic form. The first task required discrimination of walking direction for biological motion stimuli with semi-rigid form and a complex articulating style of motion. The second task required discrimination of the rotation direction of a rigid square shape with a simpler, rigid style of rotational motion. Despite these differences in the complexity of form and motion information, as well as differences in the biological nature of the stimuli, we found that the pattern of performance in Experiments 1 and 2 was nearly identical between these two tasks and stimulus types. Since the stimuli were designed specifically to exclude local image motion information and to target processes of form analysis, these data suggest that perception of biological and non-biological stimuli on the basis of form cues is likely supported by a common, or generic, computational mechanism. That is, if specialized mechanisms did contribute to biological motion processing, then we would have expected some difference in the pattern of performance across the many variables that we manipulated in the experiments. It is important to note that these findings do not preclude the possibility of specialized mechanisms for biological motion based on characteristic, low-level motion features (Troje and Westhoff,
<xref rid="B62" ref-type="bibr">2006</xref>
; Chang and Troje,
<xref rid="B15" ref-type="bibr">2010</xref>
; Van Boxtel and Lu,
<xref rid="B65" ref-type="bibr">2012</xref>
; Thurman and Lu,
<xref rid="B60" ref-type="bibr">2013b</xref>
), but they do suggest some independence among processes related to form and motion analysis.</p>
<p>To help explain our behavioral data and to formally test the hypothesis that dynamic form analysis relies on integration according to cue reliability, we developed a Bayesian model that could be applied generically to both biological and non-biological stimuli. The computational principles of the model were inspired by form-based, template-matching approaches to biological motion (Lange and Lappe,
<xref rid="B36" ref-type="bibr">2006</xref>
; Theusner et al.,
<xref rid="B56" ref-type="bibr">2014</xref>
), and by Bayesian models of optimal, or rational, sensory cue integration (for review see Yuille and Bulthoff,
<xref rid="B70" ref-type="bibr">1996</xref>
). The model was designed with two assumptions in mind. First, there appear to be two processing pathways for computing global form from displays with sparse local elements. The first pathway assigns position labels to elements in the display and ignores other features, such as element orientation (Levi et al.,
<xref rid="B40" ref-type="bibr">1997</xref>
). Global form perception may be achieved through template matching using internal representations of object shape, on the basis of position cues alone. The second pathway utilizes “snake cues” that are provided by element orientation, and thus considers both orientation and relative position in computing global form (Hess and Hayes,
<xref rid="B26" ref-type="bibr">1994</xref>
; Levi and Klein,
<xref rid="B38" ref-type="bibr">2000</xref>
; Day and Loffler,
<xref rid="B18" ref-type="bibr">2009</xref>
). Second, to integrate information from these pathways, a secondary mechanism likely combines their outputs to produce a single decision about the perceptual appearance of global dynamic form. Our results strongly suggest an integration mechanism that rationally accounts for the relative reliabilities of low-level position and orientation cues; determining exactly how this processing is implemented in the neural system, however, is beyond the scope of the current study. These results extend findings from previous studies demonstrating the Bayesian nature of sensory cue integration in other domains of visual processing (Landy et al.,
<xref rid="B34" ref-type="bibr">1995</xref>
; Liu et al.,
<xref rid="B41" ref-type="bibr">1995</xref>
; Weiss and Adelson,
<xref rid="B69" ref-type="bibr">1998</xref>
; Feldman,
<xref rid="B20" ref-type="bibr">2001</xref>
), and multisensory processing (Ernst and Banks,
<xref rid="B19" ref-type="bibr">2002</xref>
; Alais and Burr,
<xref rid="B4" ref-type="bibr">2004</xref>
).</p>
<p>Form analysis has been recognized as a key component in biological motion perception (Sumi,
<xref rid="B55" ref-type="bibr">1984</xref>
; Pinto and Shiffrar,
<xref rid="B48" ref-type="bibr">1999</xref>
; Beintema and Lappe,
<xref rid="B10" ref-type="bibr">2002</xref>
; Lange and Lappe,
<xref rid="B36" ref-type="bibr">2006</xref>
; Lu and Liu,
<xref rid="B44" ref-type="bibr">2006</xref>
; Lu,
<xref rid="B43" ref-type="bibr">2010</xref>
). Our study provided psychophysical and computational evidence that a generic mechanism of dynamic form analysis can explain perception of both biological and non-biological forms. Within this framework, position and orientation cues each make independent contributions to the template-matching process and compete to determine the global percept when position and orientation provide conflicting information. The outputs of each computational pathway are later integrated by a mechanism that follows rational Bayesian rules of sensory cue integration, taking into account the relative reliabilities of the low-level cues. Importantly, we found that independent estimates of low-level cue reliability accurately predicted individual subject performance in two high-level dynamic form discrimination tasks. The close fit between human behavior and model predictions in the current study provides compelling support for the notion of the “Bayesian brain” (Knill and Pouget,
<xref rid="B33" ref-type="bibr">2004</xref>
), demonstrating that the visual system uses Bayesian inference in the processing of dynamic form information.</p>
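The competition-and-integration scheme just described can be made concrete with a short sketch: two modules score each frame against candidate templates (e.g., leftward vs. rightward motion) from position and orientation cues separately, the per-cue scores are combined with reliability weights, and evidence accumulates over frames before a decision. All names, and the simple squared-error and cosine matching scores, are illustrative assumptions rather than the authors' implementation.

```python
import math

def decide(frames, templates, w_pos, w_ori):
    """Pick the template hypothesis with the highest accumulated evidence.

    Each frame contributes a position-based score and an orientation-based
    score per template; the two are combined with reliability weights
    (w_pos, w_ori) and summed over time.
    """
    totals = {name: 0.0 for name in templates}
    for frame in frames:
        for name, tpl in templates.items():
            # Position module: negative squared distance to template positions.
            ll_pos = -sum((p - q) ** 2 for p, q in zip(frame["pos"], tpl["pos"]))
            # Orientation module: cosine mismatch to template orientations.
            ll_ori = -sum(1 - math.cos(a - b) for a, b in zip(frame["ori"], tpl["ori"]))
            totals[name] += w_pos * ll_pos + w_ori * ll_ori
    return max(totals, key=totals.get)
```

When the cues conflict, the template favored by the more heavily weighted cue wins, reproducing the characteristic dominance trade-off reported in Experiments 1 and 2.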
<sec>
<title>Conflict of interest statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</sec>
</body>
<back>
<ack>
<p>This research was supported by NSF grant BCS-0843880 to Hongjing Lu. We thank Keith Holyoak for comments on an earlier version of the manuscript.</p>
</ack>
<sec sec-type="supplementary material" id="s3">
<title>Supplementary material</title>
<p>The Supplementary Material for this article can be found online at:
<ext-link ext-link-type="uri" xlink:href="http://www.frontiersin.org/journal/10.3389/fnhum.2014.00091/abstract">http://www.frontiersin.org/journal/10.3389/fnhum.2014.00091/abstract</ext-link>
</p>
<supplementary-material content-type="local-data" id="SM1">
<media xlink:href="Movie1.MPG">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="SM2">
<media xlink:href="Movie2.MPG">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
</sec>
<ref-list>
<title>References</title>
<ref id="B1">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Aggarwal</surname>
<given-names>J. K.</given-names>
</name>
<name>
<surname>Cai</surname>
<given-names>Q.</given-names>
</name>
<name>
<surname>Liao</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Sabata</surname>
<given-names>B.</given-names>
</name>
</person-group>
(
<year>1998</year>
).
<article-title>Nonrigid motion analysis: articulated and elastic motion</article-title>
.
<source>Comput. Vis. Image Underst</source>
.
<volume>70</volume>
,
<fpage>142</fpage>
<lpage>156</lpage>
<pub-id pub-id-type="doi">10.1006/cviu.1997.0620</pub-id>
</mixed-citation>
</ref>
<ref id="B2">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Aggarwal</surname>
<given-names>J. K.</given-names>
</name>
<name>
<surname>Nandhakumar</surname>
<given-names>N.</given-names>
</name>
</person-group>
(
<year>1988</year>
).
<article-title>On the computation of motion from sequences of images-a review</article-title>
.
<source>Proc. IEEE</source>
<volume>69</volume>
,
<fpage>917</fpage>
<lpage>934</lpage>
<pub-id pub-id-type="doi">10.1109/5.5965</pub-id>
</mixed-citation>
</ref>
<ref id="B4">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Alais</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Burr</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>The ventriloquist effect results from near-optimal bimodal integration</article-title>
.
<source>Curr. Biol</source>
.
<volume>14</volume>
,
<fpage>257</fpage>
<lpage>262</lpage>
<pub-id pub-id-type="doi">10.1016/j.cub.2004.01.029</pub-id>
<pub-id pub-id-type="pmid">14761661</pub-id>
</mixed-citation>
</ref>
<ref id="B5">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Amano</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Edwards</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Badcock</surname>
<given-names>D. R.</given-names>
</name>
<name>
<surname>Nishida</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Adaptive pooling of visual motion signals by the human visual system revealed with a novel multi-element stimulus</article-title>
.
<source>J. Vis</source>
.
<volume>9</volume>
,
<fpage>4.1</fpage>
<lpage>4.25</lpage>
<pub-id pub-id-type="doi">10.1167/9.3.4</pub-id>
<pub-id pub-id-type="pmid">19757943</pub-id>
</mixed-citation>
</ref>
<ref id="B6">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Atkinson</surname>
<given-names>A. P.</given-names>
</name>
<name>
<surname>Tunstall</surname>
<given-names>M. L.</given-names>
</name>
<name>
<surname>Dittrich</surname>
<given-names>W. H.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Evidence for distinct contributions of form and motion information to the recognition of emotions from body gestures</article-title>
.
<source>Cognition</source>
<volume>104</volume>
,
<fpage>59</fpage>
<lpage>72</lpage>
<pub-id pub-id-type="doi">10.1016/j.cognition.2006.05.005</pub-id>
<pub-id pub-id-type="pmid">16831411</pub-id>
</mixed-citation>
</ref>
<ref id="B8">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bardi</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Regolin</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Simion</surname>
<given-names>F.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Biological motion preference in humans at birth: role of dynamic and configural properties</article-title>
.
<source>Dev. Sci</source>
.
<volume>14</volume>
,
<fpage>353</fpage>
<lpage>359</lpage>
<pub-id pub-id-type="doi">10.1111/j.1467-7687.2010.00985.x</pub-id>
<pub-id pub-id-type="pmid">22213905</pub-id>
</mixed-citation>
</ref>
<ref id="B9">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Beintema</surname>
<given-names>J. A.</given-names>
</name>
<name>
<surname>Georg</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Lappe</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Perception of biological motion from limited-lifetime stimuli</article-title>
.
<source>Percept. Psychophys</source>
.
<volume>68</volume>
,
<fpage>613</fpage>
<lpage>624</lpage>
<pub-id pub-id-type="doi">10.3758/BF03208763</pub-id>
<pub-id pub-id-type="pmid">16933426</pub-id>
</mixed-citation>
</ref>
<ref id="B10">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Beintema</surname>
<given-names>J. A.</given-names>
</name>
<name>
<surname>Lappe</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>Perception of biological motion without local image motion</article-title>
.
<source>Proc. Natl. Acad. Sci. U.S.A</source>
.
<volume>99</volume>
,
<fpage>5661</fpage>
<lpage>5663</lpage>
<pub-id pub-id-type="doi">10.1073/pnas.082483699</pub-id>
<pub-id pub-id-type="pmid">11960019</pub-id>
</mixed-citation>
</ref>
<ref id="B11">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Blake</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Shiffrar</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Perception of human motion</article-title>
.
<source>Annu. Rev. Psychol</source>
.
<volume>58</volume>
,
<fpage>47</fpage>
<lpage>73</lpage>
<pub-id pub-id-type="doi">10.1146/annurev.psych.57.102904.190152</pub-id>
<pub-id pub-id-type="pmid">16903802</pub-id>
</mixed-citation>
</ref>
<ref id="B12">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Brainard</surname>
<given-names>D. H.</given-names>
</name>
</person-group>
(
<year>1997</year>
).
<article-title>The psychophysics toolbox</article-title>
.
<source>Spat. Vis</source>
.
<volume>10</volume>
,
<fpage>433</fpage>
<lpage>436</lpage>
<pub-id pub-id-type="doi">10.1163/156856897X00357</pub-id>
<pub-id pub-id-type="pmid">9176952</pub-id>
</mixed-citation>
</ref>
<ref id="B13">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Burr</surname>
<given-names>D. C.</given-names>
</name>
<name>
<surname>Wijesundra</surname>
<given-names>S. A.</given-names>
</name>
</person-group>
(
<year>1991</year>
).
<article-title>Orientation discrimination depends on spatial frequency</article-title>
.
<source>Vision Res</source>
.
<volume>31</volume>
,
<fpage>1449</fpage>
<lpage>1452</lpage>
<pub-id pub-id-type="doi">10.1016/0042-6989(91)90064-C</pub-id>
<pub-id pub-id-type="pmid">1891831</pub-id>
</mixed-citation>
</ref>
<ref id="B14">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Casile</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Giese</surname>
<given-names>M. A.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Critical features for the recognition of biological motion</article-title>
.
<source>J. Vis</source>
.
<volume>5</volume>
,
<fpage>348</fpage>
<lpage>360</lpage>
<pub-id pub-id-type="doi">10.1167/5.4.6</pub-id>
<pub-id pub-id-type="pmid">15929657</pub-id>
</mixed-citation>
</ref>
<ref id="B15">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chang</surname>
<given-names>D. H. F.</given-names>
</name>
<name>
<surname>Troje</surname>
<given-names>N. F.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>The local inversion effect in biological motion perception is acceleration-based</article-title>
.
<source>J. Vis</source>
.
<volume>8</volume>
,
<fpage>911</fpage>
<pub-id pub-id-type="doi">10.1167/8.6.911</pub-id>
<pub-id pub-id-type="pmid">19271889</pub-id>
</mixed-citation>
</ref>
<ref id="B16">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dakin</surname>
<given-names>S. C.</given-names>
</name>
<name>
<surname>Williams</surname>
<given-names>C. B.</given-names>
</name>
<name>
<surname>Hess</surname>
<given-names>R. F.</given-names>
</name>
</person-group>
(
<year>1999</year>
).
<article-title>The interaction of first- and second-order cues to orientation</article-title>
.
<source>Vision Res</source>
.
<volume>39</volume>
,
<fpage>2867</fpage>
<lpage>2884</lpage>
<pub-id pub-id-type="doi">10.1016/S0042-6989(98)00307-1</pub-id>
<pub-id pub-id-type="pmid">10492816</pub-id>
</mixed-citation>
</ref>
<ref id="B17">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Daugman</surname>
<given-names>J. G.</given-names>
</name>
</person-group>
(
<year>1985</year>
).
<article-title>Uncertainty relation for resolution in space, spatial frequency, and orientation optimized by two-dimensional visual cortical filters</article-title>
.
<source>J. Opt. Soc. Am. A</source>
<volume>2</volume>
,
<fpage>1160</fpage>
<lpage>1169</lpage>
<pub-id pub-id-type="doi">10.1364/JOSAA.2.001160</pub-id>
<pub-id pub-id-type="pmid">4020513</pub-id>
</mixed-citation>
</ref>
<ref id="B18">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Day</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Loffler</surname>
<given-names>G.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>The role of orientation and position in shape perception</article-title>
.
<source>J. Vis</source>
.
<volume>9</volume>
,
<fpage>14.1</fpage>
<lpage>14.17</lpage>
<pub-id pub-id-type="doi">10.1167/9.10.14</pub-id>
<pub-id pub-id-type="pmid">19810795</pub-id>
</mixed-citation>
</ref>
<ref id="B19">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ernst</surname>
<given-names>M. O.</given-names>
</name>
<name>
<surname>Banks</surname>
<given-names>M. S.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>Humans integrate visual and haptic information in a statistically optimal fashion</article-title>
.
<source>Nature</source>
<volume>415</volume>
,
<fpage>429</fpage>
<lpage>433</lpage>
<pub-id pub-id-type="doi">10.1038/415429a</pub-id>
<pub-id pub-id-type="pmid">11807554</pub-id>
</mixed-citation>
</ref>
<ref id="B20">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Feldman</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>Bayesian contour integration</article-title>
.
<source>Percept. Psychophys</source>
.
<volume>63</volume>
,
<fpage>1171</fpage>
<lpage>1182</lpage>
<pub-id pub-id-type="doi">10.3758/BF03194532</pub-id>
<pub-id pub-id-type="pmid">11766942</pub-id>
</mixed-citation>
</ref>
<ref id="B21">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fox</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>McDaniel</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>1982</year>
).
<article-title>The perception of biological motion by human infants</article-title>
.
<source>Science</source>
<volume>218</volume>
,
<fpage>486</fpage>
<lpage>487</lpage>
<pub-id pub-id-type="doi">10.1126/science.7123249</pub-id>
<pub-id pub-id-type="pmid">7123249</pub-id>
</mixed-citation>
</ref>
<ref id="B22">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Giese</surname>
<given-names>M. A.</given-names>
</name>
<name>
<surname>Poggio</surname>
<given-names>T.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Neural mechanisms for the recognition of biological movements</article-title>
.
<source>Nat. Rev. Neurosci</source>
.
<volume>4</volume>
,
<fpage>179</fpage>
<lpage>192</lpage>
<pub-id pub-id-type="doi">10.1038/nrn1057</pub-id>
<pub-id pub-id-type="pmid">12612631</pub-id>
</mixed-citation>
</ref>
<ref id="B23">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Goodale</surname>
<given-names>M. A.</given-names>
</name>
<name>
<surname>Milner</surname>
<given-names>A. D.</given-names>
</name>
</person-group>
(
<year>1992</year>
).
<article-title>Separate visual pathways for perception and action</article-title>
.
<source>Trends Neurosci</source>
.
<volume>15</volume>
,
<fpage>20</fpage>
<lpage>25</lpage>
<pub-id pub-id-type="doi">10.1016/0166-2236(92)90344-8</pub-id>
<pub-id pub-id-type="pmid">1374953</pub-id>
</mixed-citation>
</ref>
<ref id="B24">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Grossman</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Donnelly</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Price</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Pickens</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Morgan</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Neighbor</surname>
<given-names>G.</given-names>
</name>
<etal></etal>
</person-group>
(
<year>2000</year>
).
<article-title>Brain areas involved in perception of biological motion</article-title>
.
<source>J. Cogn. Neurosci</source>
.
<volume>12</volume>
,
<fpage>711</fpage>
<lpage>720</lpage>
<pub-id pub-id-type="doi">10.1162/089892900562417</pub-id>
<pub-id pub-id-type="pmid">11054914</pub-id>
</mixed-citation>
</ref>
<ref id="B25">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Grossman</surname>
<given-names>E. D.</given-names>
</name>
<name>
<surname>Jardine</surname>
<given-names>N. L.</given-names>
</name>
<name>
<surname>Pyles</surname>
<given-names>J. A.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>fMR-adaptation reveals invariant coding of biological motion on the human STS</article-title>
.
<source>Front. Hum. Neurosci</source>
.
<volume>4</volume>
:
<fpage>15</fpage>
<pub-id pub-id-type="doi">10.3389/neuro.09.015.2010</pub-id>
<pub-id pub-id-type="pmid">20431723</pub-id>
</mixed-citation>
</ref>
<ref id="B26">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hess</surname>
<given-names>R. F.</given-names>
</name>
<name>
<surname>Hayes</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>1994</year>
).
<article-title>The coding of spatial position by the human visual system: effects of spatial scale and retinal eccentricity</article-title>
.
<source>Vision Res</source>
.
<volume>34</volume>
,
<fpage>625</fpage>
<lpage>643</lpage>
<pub-id pub-id-type="doi">10.1016/0042-6989(94)90018-3</pub-id>
<pub-id pub-id-type="pmid">8160382</pub-id>
</mixed-citation>
</ref>
<ref id="B27">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hess</surname>
<given-names>R. F.</given-names>
</name>
<name>
<surname>Holliday</surname>
<given-names>I. E.</given-names>
</name>
</person-group>
(
<year>1992</year>
).
<article-title>The coding of spatial position by the human visual system: effects of spatial scale and contrast</article-title>
.
<source>Vision Res</source>
.
<volume>32</volume>
,
<fpage>1085</fpage>
<lpage>1097</lpage>
<pub-id pub-id-type="doi">10.1016/0042-6989(92)90009-8</pub-id>
<pub-id pub-id-type="pmid">1509699</pub-id>
</mixed-citation>
</ref>
<ref id="B28">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hiris</surname>
<given-names>E.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Detection of biological and nonbiological motion</article-title>
.
<source>J. Vis</source>
.
<volume>7</volume>
,
<fpage>1</fpage>
<lpage>16</lpage>
<pub-id pub-id-type="doi">10.1167/7.12.4</pub-id>
<pub-id pub-id-type="pmid">17997646</pub-id>
</mixed-citation>
</ref>
<ref id="B29">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hu</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Tan</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Maybank</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>A survey on visual surveillance of object motion and behaviors</article-title>
.
<source>IEEE Trans. Syst. Man Cybern. C Appl. Rev</source>
.
<volume>34</volume>
,
<fpage>334</fpage>
<lpage>352</lpage>
<pub-id pub-id-type="doi">10.1109/TSMCC.2004.829274</pub-id>
</mixed-citation>
</ref>
<ref id="B31">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jastorff</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Orban</surname>
<given-names>G. A.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>fMRI reveals distinct processing of form and motion features in biological motion displays</article-title>
.
<source>J. Vis</source>
.
<volume>8</volume>
:
<fpage>676</fpage>
<pub-id pub-id-type="doi">10.1167/8.6.676</pub-id>
</mixed-citation>
</ref>
<ref id="B30">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jastorff</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Orban</surname>
<given-names>G. A.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Human functional magnetic resonance imaging reveals separation and integration of shape and motion cues in biological motion processing</article-title>
.
<source>J. Neurosci</source>
.
<volume>29</volume>
,
<fpage>7315</fpage>
<lpage>7329</lpage>
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.4870-08.2009</pub-id>
<pub-id pub-id-type="pmid">19494153</pub-id>
</mixed-citation>
</ref>
<ref id="B32">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jastorff</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Popivanov</surname>
<given-names>I. D.</given-names>
</name>
<name>
<surname>Vogels</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Vanduffel</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Orban</surname>
<given-names>G. A.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Integration of shape and motion cues in biological motion processing in the monkey STS</article-title>
.
<source>Neuroimage</source>
<volume>60</volume>
,
<fpage>911</fpage>
<lpage>921</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuroimage.2011.12.087</pub-id>
<pub-id pub-id-type="pmid">22245356</pub-id>
</mixed-citation>
</ref>
<ref id="B33">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Knill</surname>
<given-names>D. C.</given-names>
</name>
<name>
<surname>Pouget</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>The Bayesian brain: the role of uncertainty in neural coding and computation</article-title>
.
<source>Trends Neurosci</source>
.
<volume>27</volume>
,
<fpage>712</fpage>
<lpage>719</lpage>
<pub-id pub-id-type="doi">10.1016/j.tins.2004.10.007</pub-id>
<pub-id pub-id-type="pmid">15541511</pub-id>
</mixed-citation>
</ref>
<ref id="B34">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Landy</surname>
<given-names>M. S.</given-names>
</name>
<name>
<surname>Maloney</surname>
<given-names>L. T.</given-names>
</name>
<name>
<surname>Johnston</surname>
<given-names>E. B.</given-names>
</name>
<name>
<surname>Young</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>1995</year>
).
<article-title>Measurement and modeling of depth cue combination: in defense of weak fusion</article-title>
.
<source>Vision Res</source>
.
<volume>35</volume>
,
<fpage>389</fpage>
<lpage>412</lpage>
<pub-id pub-id-type="doi">10.1016/0042-6989(94)00176-M</pub-id>
<pub-id pub-id-type="pmid">7892735</pub-id>
</mixed-citation>
</ref>
<ref id="B35">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lange</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Georg</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Lappe</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Visual perception of biological motion by form: a template-matching analysis</article-title>
.
<source>J. Vis</source>
.
<volume>6</volume>
,
<fpage>836</fpage>
<lpage>849</lpage>
<pub-id pub-id-type="doi">10.1167/6.8.6</pub-id>
<pub-id pub-id-type="pmid">16895462</pub-id>
</mixed-citation>
</ref>
<ref id="B36">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lange</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Lappe</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>A model of biological motion perception from configural form cues</article-title>
.
<source>J. Neurosci</source>
.
<volume>26</volume>
,
<fpage>2894</fpage>
<lpage>2906</lpage>
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.4915-05.2006</pub-id>
<pub-id pub-id-type="pmid">16540566</pub-id>
</mixed-citation>
</ref>
<ref id="B37">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lee</surname>
<given-names>A. L.</given-names>
</name>
<name>
<surname>Lu</surname>
<given-names>H.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>A comparison of global motion perception using a multiple-aperture stimulus</article-title>
.
<source>J. Vis</source>
.
<volume>10</volume>
,
<fpage>1</fpage>
<lpage>9</lpage>
<pub-id pub-id-type="doi">10.1167/10.4.9</pub-id>
<pub-id pub-id-type="pmid">20465329</pub-id>
</mixed-citation>
</ref>
<ref id="B38">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Levi</surname>
<given-names>D. M.</given-names>
</name>
<name>
<surname>Klein</surname>
<given-names>S. A.</given-names>
</name>
</person-group>
(
<year>2000</year>
).
<article-title>Seeing circles: what limits shape perception?</article-title>
<source>Vision Res</source>
.
<volume>40</volume>
,
<fpage>2329</fpage>
<lpage>2339</lpage>
<pub-id pub-id-type="doi">10.1016/S0042-6989(00)00092-4</pub-id>
<pub-id pub-id-type="pmid">10927118</pub-id>
</mixed-citation>
</ref>
<ref id="B39">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Levi</surname>
<given-names>D. M.</given-names>
</name>
<name>
<surname>Klein</surname>
<given-names>S. A.</given-names>
</name>
<name>
<surname>Sharma</surname>
<given-names>V.</given-names>
</name>
</person-group>
(
<year>1999</year>
).
<article-title>Position jitter and undersampling in pattern perception</article-title>
.
<source>Vision Res</source>
.
<volume>39</volume>
,
<fpage>445</fpage>
<lpage>465</lpage>
<pub-id pub-id-type="doi">10.1016/S0042-6989(98)00125-4</pub-id>
<pub-id pub-id-type="pmid">10341976</pub-id>
</mixed-citation>
</ref>
<ref id="B40">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Levi</surname>
<given-names>D. M.</given-names>
</name>
<name>
<surname>Sharma</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Klein</surname>
<given-names>S. A.</given-names>
</name>
</person-group>
(
<year>1997</year>
).
<article-title>Feature integration in pattern perception</article-title>
.
<source>Proc. Natl. Acad. Sci. U.S.A</source>
.
<volume>94</volume>
,
<fpage>11742</fpage>
<lpage>11746</lpage>
<pub-id pub-id-type="doi">10.1073/pnas.94.21.11742</pub-id>
<pub-id pub-id-type="pmid">9326681</pub-id>
</mixed-citation>
</ref>
<ref id="B41">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Liu</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Knill</surname>
<given-names>D. C.</given-names>
</name>
<name>
<surname>Kersten</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>1995</year>
).
<article-title>Object classification for human and ideal observers</article-title>
.
<source>Vision Res</source>
.
<volume>35</volume>
,
<fpage>549</fpage>
<lpage>568</lpage>
<pub-id pub-id-type="doi">10.1016/0042-6989(94)00150-K</pub-id>
<pub-id pub-id-type="pmid">7900295</pub-id>
</mixed-citation>
</ref>
<ref id="B42">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Loffler</surname>
<given-names>G.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Perception of contours and shapes: low and intermediate stage mechanisms</article-title>
.
<source>Vision Res</source>
.
<volume>48</volume>
,
<fpage>2106</fpage>
<lpage>2127</lpage>
<pub-id pub-id-type="doi">10.1016/j.visres.2008.03.006</pub-id>
<pub-id pub-id-type="pmid">18502467</pub-id>
</mixed-citation>
</ref>
<ref id="B43">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lu</surname>
<given-names>H.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Structural processing in biological motion perception</article-title>
.
<source>J. Vis</source>
.
<volume>10</volume>
:
<fpage>13</fpage>
<pub-id pub-id-type="doi">10.1167/10.12.13</pub-id>
<pub-id pub-id-type="pmid">21047745</pub-id>
</mixed-citation>
</ref>
<ref id="B44">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lu</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>Z.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Computing dynamic classification images from correlation maps</article-title>
.
<source>J. Vis</source>
.
<volume>6</volume>
,
<fpage>475</fpage>
<lpage>483</lpage>
<pub-id pub-id-type="doi">10.1167/6.4.12</pub-id>
<pub-id pub-id-type="pmid">16889481</pub-id>
</mixed-citation>
</ref>
<ref id="B45">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Neri</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Morrone</surname>
<given-names>M. C.</given-names>
</name>
<name>
<surname>Burr</surname>
<given-names>D. C.</given-names>
</name>
</person-group>
(
<year>1998</year>
).
<article-title>Seeing biological motion</article-title>
.
<source>Nature</source>
<volume>395</volume>
,
<fpage>894</fpage>
<lpage>896</lpage>
<pub-id pub-id-type="doi">10.1038/27661</pub-id>
<pub-id pub-id-type="pmid">9804421</pub-id>
</mixed-citation>
</ref>
<ref id="B46">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Oram</surname>
<given-names>M. W.</given-names>
</name>
<name>
<surname>Perrett</surname>
<given-names>D. I.</given-names>
</name>
</person-group>
(
<year>1994</year>
).
<article-title>Responses of anterior superior temporal polysensory (STPa) neurons to “biological motion” stimuli</article-title>
.
<source>J. Cogn. Neurosci</source>
.
<volume>6</volume>
,
<fpage>99</fpage>
<lpage>116</lpage>
<pub-id pub-id-type="doi">10.1162/jocn.1994.6.2.99</pub-id>
<pub-id pub-id-type="pmid">23962364</pub-id>
</mixed-citation>
</ref>
<ref id="B47">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pelli</surname>
<given-names>D. G.</given-names>
</name>
</person-group>
(
<year>1997</year>
).
<article-title>The VideoToolbox software for visual psychophysics: transforming numbers into movies</article-title>
.
<source>Spat. Vis</source>
.
<volume>10</volume>
,
<fpage>437</fpage>
<lpage>442</lpage>
<pub-id pub-id-type="doi">10.1163/156856897X00366</pub-id>
<pub-id pub-id-type="pmid">9176953</pub-id>
</mixed-citation>
</ref>
<ref id="B48">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pinto</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Shiffrar</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>1999</year>
).
<article-title>Subconfigurations of the human form in the perception of biological motion displays</article-title>
.
<source>Acta Psychol</source>
.
<volume>102</volume>
,
<fpage>293</fpage>
<lpage>318</lpage>
<pub-id pub-id-type="doi">10.1016/S0001-6918(99)00028-1</pub-id>
<pub-id pub-id-type="pmid">10504885</pub-id>
</mixed-citation>
</ref>
<ref id="B49">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Poljac</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Verfaillie</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Wagemans</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Integrating biological motion: the role of grouping in the perception of point-light actions</article-title>
.
<source>PLoS ONE</source>
<volume>6</volume>
:
<fpage>e25867</fpage>
<pub-id pub-id-type="doi">10.1371/journal.pone.0025867</pub-id>
<pub-id pub-id-type="pmid">21991376</pub-id>
</mixed-citation>
</ref>
<ref id="B50">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Puce</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Perrett</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Electrophysiology and brain imaging of biological motion</article-title>
.
<source>Philos. Trans. R. Soc. Lond. B Biol. Sci</source>
.
<volume>358</volume>
,
<fpage>435</fpage>
<lpage>445</lpage>
<pub-id pub-id-type="doi">10.1098/rstb.2002.1221</pub-id>
<pub-id pub-id-type="pmid">12689371</pub-id>
</mixed-citation>
</ref>
<ref id="B51">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Regolin</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Tommasi</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Vallortigara</surname>
<given-names>G.</given-names>
</name>
</person-group>
(
<year>2000</year>
).
<article-title>Visual perception of biological motion in newly hatched chicks as revealed by an imprinting procedure</article-title>
.
<source>Anim. Cogn</source>
.
<volume>3</volume>
,
<fpage>53</fpage>
<lpage>60</lpage>
<pub-id pub-id-type="doi">10.1007/s100710050050</pub-id>
<pub-id pub-id-type="pmid">18941808</pub-id>
</mixed-citation>
</ref>
<ref id="B52">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Saygin</surname>
<given-names>A. P.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Superior temporal and premotor brain areas necessary for biological motion perception</article-title>
.
<source>Brain</source>
<volume>130(Pt 9)</volume>
,
<fpage>2452</fpage>
<lpage>2461</lpage>
<pub-id pub-id-type="doi">10.1093/brain/awm162</pub-id>
<pub-id pub-id-type="pmid">17660183</pub-id>
</mixed-citation>
</ref>
<ref id="B53">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Simion</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Regolin</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Bulf</surname>
<given-names>H.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>A predisposition for biological motion in the newborn baby</article-title>
.
<source>Proc. Natl. Acad. Sci. U.S.A</source>
.
<volume>105</volume>
,
<fpage>809</fpage>
<lpage>813</lpage>
<pub-id pub-id-type="doi">10.1073/pnas.0707021105</pub-id>
<pub-id pub-id-type="pmid">18174333</pub-id>
</mixed-citation>
</ref>
<ref id="B54">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Singer</surname>
<given-names>J. M.</given-names>
</name>
<name>
<surname>Sheinberg</surname>
<given-names>D. L.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Temporal cortex neurons encode articulated actions as slow sequences of integrated poses</article-title>
.
<source>J. Neurosci</source>
.
<volume>30</volume>
,
<fpage>3133</fpage>
<lpage>3145</lpage>
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.3211-09.2010</pub-id>
<pub-id pub-id-type="pmid">20181610</pub-id>
</mixed-citation>
</ref>
<ref id="B55">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sumi</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>1984</year>
).
<article-title>Upside-down presentation of the Johansson moving light-spot pattern</article-title>
.
<source>Perception</source>
<volume>13</volume>
,
<fpage>283</fpage>
<lpage>286</lpage>
<pub-id pub-id-type="doi">10.1068/p130283</pub-id>
<pub-id pub-id-type="pmid">6514513</pub-id>
</mixed-citation>
</ref>
<ref id="B56">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Theusner</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>de Lussanet</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Lappe</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2014</year>
).
<article-title>Action recognition by motion detection in posture space</article-title>
.
<source>J. Neurosci</source>
.
<volume>34</volume>
,
<fpage>909</fpage>
<lpage>921</lpage>
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.2900-13.2014</pub-id>
<pub-id pub-id-type="pmid">24431449</pub-id>
</mixed-citation>
</ref>
<ref id="B7">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Thompson</surname>
<given-names>J. C.</given-names>
</name>
<name>
<surname>Baccus</surname>
<given-names>W.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Form and motion make independent contributions to the response to biological motion in occipitotemporal cortex</article-title>
.
<source>Neuroimage</source>
<volume>59</volume>
,
<fpage>625</fpage>
<lpage>634</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuroimage.2011.07.051</pub-id>
<pub-id pub-id-type="pmid">21839175</pub-id>
</mixed-citation>
</ref>
<ref id="B57">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Thurman</surname>
<given-names>S. M.</given-names>
</name>
<name>
<surname>Giese</surname>
<given-names>M. A.</given-names>
</name>
<name>
<surname>Grossman</surname>
<given-names>E. D.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Perceptual and computational analysis of critical features for biological motion</article-title>
.
<source>J. Vis</source>
.
<volume>10</volume>
:
<fpage>15</fpage>
<pub-id pub-id-type="doi">10.1167/10.12.15</pub-id>
<pub-id pub-id-type="pmid">21047747</pub-id>
</mixed-citation>
</ref>
<ref id="B58">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Thurman</surname>
<given-names>S. M.</given-names>
</name>
<name>
<surname>Grossman</surname>
<given-names>E. D.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Temporal “Bubbles” reveal key features for point-light biological motion perception</article-title>
.
<source>J. Vis</source>
.
<volume>8</volume>
,
<fpage>28.1</fpage>
<lpage>28.11</lpage>
<pub-id pub-id-type="doi">10.1167/8.3.28</pub-id>
<pub-id pub-id-type="pmid">18484834</pub-id>
</mixed-citation>
</ref>
<ref id="B59">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Thurman</surname>
<given-names>S. M.</given-names>
</name>
<name>
<surname>Lu</surname>
<given-names>H.</given-names>
</name>
</person-group>
(
<year>2013a</year>
).
<article-title>Complex interactions between spatial, orientation, and motion cues for biological motion perception across visual space</article-title>
.
<source>J. Vis</source>
.
<volume>13</volume>
:
<fpage>8</fpage>
<pub-id pub-id-type="doi">10.1167/13.2.8</pub-id>
<pub-id pub-id-type="pmid">23390322</pub-id>
</mixed-citation>
</ref>
<ref id="B60">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Thurman</surname>
<given-names>S. M.</given-names>
</name>
<name>
<surname>Lu</surname>
<given-names>H.</given-names>
</name>
</person-group>
(
<year>2013b</year>
).
<article-title>Physical and biological constraints govern perceived animacy of scrambled human forms</article-title>
.
<source>Psychol. Sci</source>
.
<volume>24</volume>
,
<fpage>1133</fpage>
<lpage>1141</lpage>
<pub-id pub-id-type="doi">10.1177/0956797612467212</pub-id>
<pub-id pub-id-type="pmid">23670885</pub-id>
</mixed-citation>
</ref>
<ref id="B61">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Toet</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Koenderink</surname>
<given-names>J. J.</given-names>
</name>
</person-group>
(
<year>1988</year>
).
<article-title>Differential spatial displacement discrimination thresholds for Gabor patches</article-title>
.
<source>Vision Res</source>
.
<volume>28</volume>
,
<fpage>133</fpage>
<lpage>143</lpage>
<pub-id pub-id-type="doi">10.1016/S0042-6989(88)80013-0</pub-id>
<pub-id pub-id-type="pmid">3413990</pub-id>
</mixed-citation>
</ref>
<ref id="B62">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Troje</surname>
<given-names>N. F.</given-names>
</name>
<name>
<surname>Westhoff</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>The inversion effect in biological motion perception: evidence for a “life detector”?</article-title>
<source>Curr. Biol</source>
.
<volume>16</volume>
,
<fpage>821</fpage>
<lpage>824</lpage>
<pub-id pub-id-type="doi">10.1016/j.cub.2006.03.022</pub-id>
<pub-id pub-id-type="pmid">16631591</pub-id>
</mixed-citation>
</ref>
<ref id="B63">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Ungerleider</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Mishkin</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>1982</year>
).
<article-title>Two cortical visual systems</article-title>
, in
<source>Analysis of Visual Behavior</source>
, eds
<person-group person-group-type="editor">
<name>
<surname>Ingle</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Goodale</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Mansfield</surname>
<given-names>R.</given-names>
</name>
</person-group>
(
<publisher-loc>Cambridge, MA</publisher-loc>
:
<publisher-name>MIT Press</publisher-name>
),
<fpage>549</fpage>
<lpage>586</lpage>
</mixed-citation>
</ref>
<ref id="B64">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Vallortigara</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Regolin</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Marconato</surname>
<given-names>F.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Visually inexperienced chicks exhibit spontaneous preference for biological motion patterns</article-title>
.
<source>PLoS Biol</source>
.
<volume>3</volume>
:
<fpage>e208</fpage>
<pub-id pub-id-type="doi">10.1371/journal.pbio.0030208</pub-id>
<pub-id pub-id-type="pmid">15934787</pub-id>
</mixed-citation>
</ref>
<ref id="B65">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>van Boxtel</surname>
<given-names>J. J. A.</given-names>
</name>
<name>
<surname>Lu</surname>
<given-names>H.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Signature movements lead to efficient search for threatening actions</article-title>
.
<source>PLoS ONE</source>
<volume>7</volume>
:
<fpage>e37085</fpage>
<pub-id pub-id-type="doi">10.1371/journal.pone.0037085</pub-id>
<pub-id pub-id-type="pmid">22649510</pub-id>
</mixed-citation>
</ref>
<ref id="B66">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>van Boxtel</surname>
<given-names>J. J.</given-names>
</name>
<name>
<surname>Lu</surname>
<given-names>H.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<article-title>A biological motion toolbox for reading, displaying, and manipulating motion capture data in research settings</article-title>
.
<source>J. Vis</source>
.
<volume>13</volume>
,
<fpage>1</fpage>
<lpage>7</lpage>
<pub-id pub-id-type="doi">10.1167/13.12.7</pub-id>
<pub-id pub-id-type="pmid">24130256</pub-id>
</mixed-citation>
</ref>
<ref id="B67">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Vangeneugden</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>De Mazière</surname>
<given-names>P. A.</given-names>
</name>
<name>
<surname>Van Hulle</surname>
<given-names>M. M.</given-names>
</name>
<name>
<surname>Jaeggli</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Van Gool</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Vogels</surname>
<given-names>R.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Distinct mechanisms for coding of visual actions in macaque temporal cortex</article-title>
.
<source>J. Neurosci</source>
.
<volume>31</volume>
,
<fpage>385</fpage>
<lpage>401</lpage>
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.2703-10.2011</pub-id>
<pub-id pub-id-type="pmid">21228150</pub-id>
</mixed-citation>
</ref>
<ref id="B68">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Vangeneugden</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Pollick</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Vogels</surname>
<given-names>R.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Functional differentiation of macaque visual temporal cortical neurons using a parametric action space</article-title>
.
<source>Cereb. Cortex</source>
<volume>19</volume>
,
<fpage>593</fpage>
<lpage>611</lpage>
<pub-id pub-id-type="doi">10.1093/cercor/bhn109</pub-id>
<pub-id pub-id-type="pmid">18632741</pub-id>
</mixed-citation>
</ref>
<ref id="B3">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Webb</surname>
<given-names>J. A.</given-names>
</name>
<name>
<surname>Aggarwal</surname>
<given-names>J. K.</given-names>
</name>
</person-group>
(
<year>1982</year>
).
<article-title>Structure from motion of rigid and jointed objects</article-title>
.
<source>Artif. Intell</source>
.
<volume>19</volume>
,
<fpage>107</fpage>
<lpage>130</lpage>
<pub-id pub-id-type="doi">10.1016/0004-3702(82)90023-6</pub-id>
</mixed-citation>
</ref>
<ref id="B69">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Weiss</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Adelson</surname>
<given-names>E. H.</given-names>
</name>
</person-group>
(
<year>1998</year>
).
<article-title>Slow and Smooth: a Bayesian theory for the combination of local motion signals in human vision</article-title>
, in
<source>Center for Biological and Computational Learning Paper</source>
,
<volume>Vol. 158</volume>
, A.I. Memo 1624 (
<publisher-loc>Cambridge, MA</publisher-loc>
:
<publisher-name>MIT, Department of Brain and Cognitive Sciences</publisher-name>
),
<fpage>1</fpage>
<lpage>42</lpage>
</mixed-citation>
</ref>
<ref id="B70">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Yuille</surname>
<given-names>A. L.</given-names>
</name>
<name>
<surname>Bülthoff</surname>
<given-names>H. H.</given-names>
</name>
</person-group>
(
<year>1996</year>
).
<article-title>Bayesian theory and psychophysics</article-title>
, in
<source>Perception as Bayesian Inference</source>
, eds
<person-group person-group-type="editor">
<name>
<surname>Knill</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Richards</surname>
<given-names>W.</given-names>
</name>
</person-group>
(
<publisher-loc>Cambridge, UK</publisher-loc>
:
<publisher-name>Cambridge University Press</publisher-name>
),
<fpage>123</fpage>
<lpage>161</lpage>
</mixed-citation>
</ref>
</ref-list>
</back>
</pmc>
</record>

To work with this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Pmc/Curation
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 001D51 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd -nk 001D51 | SxmlIndent | more

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Pmc
   |étape=   Curation
   |type=    RBID
   |clé=     PMC:3932410
   |texte=   Bayesian integration of position and orientation cues in perception of biological and non-biological forms
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Curation/RBID.i   -Sk "pubmed:24605096" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024