Exploration server on haptic devices

Warning: this site is under development!
Warning: this site was generated automatically from raw corpora.
The information it contains has therefore not been validated.

Scene analysis in the natural environment

Internal identifier: 002718 (Pmc/Curation); previous: 002717; next: 002719

Authors: Michael S. Lewicki [United States]; Bruno A. Olshausen [United States]; Annemarie Surlykke [Denmark]; Cynthia F. Moss [United States]

Source:

RBID: PMC:3978336

Abstract

The problem of scene analysis has been studied in a number of different fields over the past decades. These studies have led to important insights into problems of scene analysis, but not all of these insights are widely appreciated, and there remain critical shortcomings in current approaches that hinder further progress. Here we take the view that scene analysis is a universal problem solved by all animals, and that we can gain new insight by studying the problems that animals face in complex natural environments. In particular, the jumping spider, songbird, echolocating bat, and electric fish, all exhibit behaviors that require robust solutions to scene analysis problems encountered in the natural environment. By examining the behaviors of these seemingly disparate animals, we emerge with a framework for studying scene analysis comprising four essential properties: (1) the ability to solve ill-posed problems, (2) the ability to integrate and store information across time and modality, (3) efficient recovery and representation of 3D scene structure, and (4) the use of optimal motor actions for acquiring information to progress toward behavioral goals.


URL:
DOI: 10.3389/fpsyg.2014.00199
PubMed: 24744740
PubMed Central: 3978336
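
The identifiers above all resolve through the standard public URL patterns for DOI, PubMed, and PubMed Central (the PMC pattern matches the URL recorded in the XML below). As a minimal illustrative sketch, in Python:

# Build resolvable links from the record's identifiers.
doi   = "10.3389/fpsyg.2014.00199"
pmid  = "24744740"
pmcid = "3978336"

links = {
    "DOI":            f"https://doi.org/{doi}",
    "PubMed":         f"https://pubmed.ncbi.nlm.nih.gov/{pmid}/",
    "PubMed Central": f"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC{pmcid}/",
}
for label, url in links.items():
    print(f"{label}: {url}")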

Links to previous steps (curation, corpus, ...)


Links to Exploration step

PMC:3978336

The document in XML format
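
The record mixes a TEI header with the original PMC (JATS) metadata and uses undeclared namespace prefixes (nlm:, wicri:), so a strict XML parser will reject it as-is. As a minimal sketch of how the title and author/country pairs could be pulled out, assuming the record has been saved to record.xml (a hypothetical path) and that BeautifulSoup's lenient html.parser is acceptable:

# Extract the title and the author/country pairs from the TEI header.
from bs4 import BeautifulSoup

with open("record.xml", encoding="utf-8") as f:
    soup = BeautifulSoup(f.read(), "html.parser")  # tolerates nlm:/wicri: prefixes

title_stmt = soup.find("titlestmt")  # html.parser lowercases tag names
print(title_stmt.find("title").get_text())

for author in title_stmt.find_all("author"):
    name = author.find("name").get_text()
    # Each affiliation carries a French-language country element.
    country = author.find("country", attrs={"xml:lang": "fr"})
    print(name, "-", country.get_text() if country else "?")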

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Scene analysis in the natural environment</title>
<author>
<name sortKey="Lewicki, Michael S" sort="Lewicki, Michael S" uniqKey="Lewicki M" first="Michael S." last="Lewicki">Michael S. Lewicki</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Department of Electrical Engineering and Computer Science, Case Western Reserve University</institution>
<country>Cleveland, OH, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Olshausen, Bruno A" sort="Olshausen, Bruno A" uniqKey="Olshausen B" first="Bruno A." last="Olshausen">Bruno A. Olshausen</name>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>Helen Wills Neuroscience Institute, School of Optometry, Redwood Center for Theoretical Neuroscience, University of California at Berkeley</institution>
<country>Berkeley, CA, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Surlykke, Annemarie" sort="Surlykke, Annemarie" uniqKey="Surlykke A" first="Annemarie" last="Surlykke">Annemarie Surlykke</name>
<affiliation wicri:level="1">
<nlm:aff id="aff3">
<institution>Department of Biology, University of Southern Denmark</institution>
<country>Odense, Denmark</country>
</nlm:aff>
<country xml:lang="fr">Danemark</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Moss, Cynthia F" sort="Moss, Cynthia F" uniqKey="Moss C" first="Cynthia F." last="Moss">Cynthia F. Moss</name>
<affiliation wicri:level="1">
<nlm:aff id="aff4">
<institution>Department of Psychology and Institute for Systems Research, University of Maryland</institution>
<country>College Park, MD, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">24744740</idno>
<idno type="pmc">3978336</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3978336</idno>
<idno type="RBID">PMC:3978336</idno>
<idno type="doi">10.3389/fpsyg.2014.00199</idno>
<date when="2014">2014</date>
<idno type="wicri:Area/Pmc/Corpus">002718</idno>
<idno type="wicri:Area/Pmc/Curation">002718</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Scene analysis in the natural environment</title>
<author>
<name sortKey="Lewicki, Michael S" sort="Lewicki, Michael S" uniqKey="Lewicki M" first="Michael S." last="Lewicki">Michael S. Lewicki</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Department of Electrical Engineering and Computer Science, Case Western Reserve University</institution>
<country>Cleveland, OH, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Olshausen, Bruno A" sort="Olshausen, Bruno A" uniqKey="Olshausen B" first="Bruno A." last="Olshausen">Bruno A. Olshausen</name>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>Helen Wills Neuroscience Institute, School of Optometry, Redwood Center for Theoretical Neuroscience, University of California at Berkeley</institution>
<country>Berkeley, CA, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Surlykke, Annemarie" sort="Surlykke, Annemarie" uniqKey="Surlykke A" first="Annemarie" last="Surlykke">Annemarie Surlykke</name>
<affiliation wicri:level="1">
<nlm:aff id="aff3">
<institution>Department of Biology, University of Southern Denmark</institution>
<country>Odense, Denmark</country>
</nlm:aff>
<country xml:lang="fr">Danemark</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Moss, Cynthia F" sort="Moss, Cynthia F" uniqKey="Moss C" first="Cynthia F." last="Moss">Cynthia F. Moss</name>
<affiliation wicri:level="1">
<nlm:aff id="aff4">
<institution>Department of Psychology and Institute for Systems Research, University of Maryland</institution>
<country>College Park, MD, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Frontiers in Psychology</title>
<idno type="eISSN">1664-1078</idno>
<imprint>
<date when="2014">2014</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>The problem of scene analysis has been studied in a number of different fields over the past decades. These studies have led to important insights into problems of scene analysis, but not all of these insights are widely appreciated, and there remain critical shortcomings in current approaches that hinder further progress. Here we take the view that scene analysis is a universal problem solved by all animals, and that we can gain new insight by studying the problems that animals face in complex natural environments. In particular, the jumping spider, songbird, echolocating bat, and electric fish, all exhibit behaviors that require robust solutions to scene analysis problems encountered in the natural environment. By examining the behaviors of these seemingly disparate animals, we emerge with a framework for studying scene analysis comprising four essential properties: (1) the ability to solve ill-posed problems, (2) the ability to integrate and store information across time and modality, (3) efficient recovery and representation of 3D scene structure, and (4) the use of optimal motor actions for acquiring information to progress toward behavioral goals.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Amrhein, V" uniqKey="Amrhein V">V. Amrhein</name>
</author>
<author>
<name sortKey="Kunc, H P" uniqKey="Kunc H">H. P. Kunc</name>
</author>
<author>
<name sortKey="Naguib, M" uniqKey="Naguib M">M. Naguib</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Appeltants, D" uniqKey="Appeltants D">D. Appeltants</name>
</author>
<author>
<name sortKey="Gentner, T" uniqKey="Gentner T">T. Gentner</name>
</author>
<author>
<name sortKey="Hulse, S" uniqKey="Hulse S">S. Hulse</name>
</author>
<author>
<name sortKey="Balthazart, J" uniqKey="Balthazart J">J. Balthazart</name>
</author>
<author>
<name sortKey="Ball, G" uniqKey="Ball G">G. Ball</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Aubin, T" uniqKey="Aubin T">T. Aubin</name>
</author>
<author>
<name sortKey="Jouventin, P" uniqKey="Jouventin P">P. Jouventin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bacelo, J" uniqKey="Bacelo J">J. Bacelo</name>
</author>
<author>
<name sortKey="Engelmann, J" uniqKey="Engelmann J">J. Engelmann</name>
</author>
<author>
<name sortKey="Hollmann, M" uniqKey="Hollmann M">M. Hollmann</name>
</author>
<author>
<name sortKey="Von Der Emde, G" uniqKey="Von Der Emde G">G. von der Emde</name>
</author>
<author>
<name sortKey="Grant, K" uniqKey="Grant K">K. Grant</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ball, G" uniqKey="Ball G">G. Ball</name>
</author>
<author>
<name sortKey="Hulse, S" uniqKey="Hulse S">S. Hulse</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ballard, D H" uniqKey="Ballard D">D. H. Ballard</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Barrow, H" uniqKey="Barrow H">H. Barrow</name>
</author>
<author>
<name sortKey="Tenenbaum, J" uniqKey="Tenenbaum J">J. Tenenbaum</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bates, M E" uniqKey="Bates M">M. E. Bates</name>
</author>
<author>
<name sortKey="Simmons, J A" uniqKey="Simmons J">J. A. Simmons</name>
</author>
<author>
<name sortKey="Zorikov, T V" uniqKey="Zorikov T">T. V. Zorikov</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bates, M E" uniqKey="Bates M">M. E. Bates</name>
</author>
<author>
<name sortKey="Stamper, S A" uniqKey="Stamper S">S. A. Stamper</name>
</author>
<author>
<name sortKey="Simmons, J A" uniqKey="Simmons J">J. A. Simmons</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bee, M" uniqKey="Bee M">M. Bee</name>
</author>
<author>
<name sortKey="Klump, G" uniqKey="Klump G">G. Klump</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bee, M A" uniqKey="Bee M">M. A. Bee</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bee, M A" uniqKey="Bee M">M. A. Bee</name>
</author>
<author>
<name sortKey="Micheyl, C" uniqKey="Micheyl C">C. Micheyl</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bell, A" uniqKey="Bell A">A. Bell</name>
</author>
<author>
<name sortKey="Sejnowski, T" uniqKey="Sejnowski T">T. Sejnowski</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Benney, K S" uniqKey="Benney K">K. S. Benney</name>
</author>
<author>
<name sortKey="Braaten, R F" uniqKey="Braaten R">R. F. Braaten</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bergen, J R" uniqKey="Bergen J">J. R. Bergen</name>
</author>
<author>
<name sortKey="Julesz, B" uniqKey="Julesz B">B. Julesz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Biederman, I" uniqKey="Biederman I">I. Biederman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Blanz, V" uniqKey="Blanz V">V. Blanz</name>
</author>
<author>
<name sortKey="Vetter, T" uniqKey="Vetter T">T. Vetter</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Blauert, J" uniqKey="Blauert J">J. Blauert</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Braaten, R F" uniqKey="Braaten R">R. F. Braaten</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Braaten, R F" uniqKey="Braaten R">R. F. Braaten</name>
</author>
<author>
<name sortKey="Leary, J C" uniqKey="Leary J">J. C. Leary</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bregman, A S" uniqKey="Bregman A">A. S. Bregman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bremond, J" uniqKey="Bremond J">J. Brémond</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bremond, J" uniqKey="Bremond J">J. Brémond</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brumm, H" uniqKey="Brumm H">H. Brumm</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brumm, H" uniqKey="Brumm H">H. Brumm</name>
</author>
<author>
<name sortKey="Naguib, M" uniqKey="Naguib M">M. Naguib</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Burgess, N" uniqKey="Burgess N">N. Burgess</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Byrne, P" uniqKey="Byrne P">P. Byrne</name>
</author>
<author>
<name sortKey="Becker, S" uniqKey="Becker S">S. Becker</name>
</author>
<author>
<name sortKey="Burgess, N" uniqKey="Burgess N">N. Burgess</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Carlson, B" uniqKey="Carlson B">B. Carlson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Carlson, B A" uniqKey="Carlson B">B. A. Carlson</name>
</author>
<author>
<name sortKey="Hopkins, C D" uniqKey="Hopkins C">C. D. Hopkins</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cashman, T J" uniqKey="Cashman T">T. J. Cashman</name>
</author>
<author>
<name sortKey="Fitzgibbon, A W" uniqKey="Fitzgibbon A">A. W. Fitzgibbon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Catchpole, C K" uniqKey="Catchpole C">C. K Catchpole</name>
</author>
<author>
<name sortKey="Slater, P J B" uniqKey="Slater P">P. J. B. Slater</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cherry, E C" uniqKey="Cherry E">E. C. Cherry</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chiu, C" uniqKey="Chiu C">C. Chiu</name>
</author>
<author>
<name sortKey="Xian, W" uniqKey="Xian W">W. Xian</name>
</author>
<author>
<name sortKey="Moss, C F" uniqKey="Moss C">C. F. Moss</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Clark, D L" uniqKey="Clark D">D. L. Clark</name>
</author>
<author>
<name sortKey="Uetz, G W" uniqKey="Uetz G">G. W. Uetz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Colby, C" uniqKey="Colby C">C. Colby</name>
</author>
<author>
<name sortKey="Goldberg, M" uniqKey="Goldberg M">M. Goldberg</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cooke, M" uniqKey="Cooke M">M. Cooke</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cooke, M" uniqKey="Cooke M">M. Cooke</name>
</author>
<author>
<name sortKey="Hershey, J R" uniqKey="Hershey J">J. R. Hershey</name>
</author>
<author>
<name sortKey="Rennie, S J" uniqKey="Rennie S">S. J. Rennie</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cutting, J E" uniqKey="Cutting J">J. E. Cutting</name>
</author>
<author>
<name sortKey="Vishton, P M" uniqKey="Vishton P">P. M. Vishton</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Darwin, C J" uniqKey="Darwin C">C. J. Darwin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Davis, K" uniqKey="Davis K">K. Davis</name>
</author>
<author>
<name sortKey="Biddulph, R" uniqKey="Biddulph R">R. Biddulph</name>
</author>
<author>
<name sortKey="Balashek, S" uniqKey="Balashek S">S. Balashek</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dent, M L" uniqKey="Dent M">M. L. Dent</name>
</author>
<author>
<name sortKey="Dooling, R J" uniqKey="Dooling R">R. J. Dooling</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dent, M L" uniqKey="Dent M">M. L. Dent</name>
</author>
<author>
<name sortKey="Mcclaine, E M" uniqKey="Mcclaine E">E. M. McClaine</name>
</author>
<author>
<name sortKey="Best, V" uniqKey="Best V">V. Best</name>
</author>
<author>
<name sortKey="Ozmeral, E" uniqKey="Ozmeral E">E. Ozmeral</name>
</author>
<author>
<name sortKey="Narayan, R" uniqKey="Narayan R">R. Narayan</name>
</author>
<author>
<name sortKey="Gallun, F J" uniqKey="Gallun F">F. J. Gallun</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Drees, O" uniqKey="Drees O">O. Drees</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Edelman, S" uniqKey="Edelman S">S. Edelman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Elder, J H" uniqKey="Elder J">J. H. Elder</name>
</author>
<author>
<name sortKey="Goldberg, R M" uniqKey="Goldberg R">R. M. Goldberg</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Epstein, R A" uniqKey="Epstein R">R. A. Epstein</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Falk, B" uniqKey="Falk B">B. Falk</name>
</author>
<author>
<name sortKey="Williams, T" uniqKey="Williams T">T. Williams</name>
</author>
<author>
<name sortKey="Aytekin, M" uniqKey="Aytekin M">M. Aytekin</name>
</author>
<author>
<name sortKey="Moss, C F" uniqKey="Moss C">C. F. Moss</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Feng, A S" uniqKey="Feng A">A. S. Feng</name>
</author>
<author>
<name sortKey="Schul, J" uniqKey="Schul J">J. Schul</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Field, G D" uniqKey="Field G">G. D. Field</name>
</author>
<author>
<name sortKey="Chichilnisky, E J" uniqKey="Chichilnisky E">E. J. Chichilnisky</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Frisby, J P" uniqKey="Frisby J">J. P. Frisby</name>
</author>
<author>
<name sortKey="Stone, J V" uniqKey="Stone J">J. V. Stone</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gao, L" uniqKey="Gao L">L. Gao</name>
</author>
<author>
<name sortKey="Balakrishnan, S" uniqKey="Balakrishnan S">S. Balakrishnan</name>
</author>
<author>
<name sortKey="He, W" uniqKey="He W">W. He</name>
</author>
<author>
<name sortKey="Yan, Z" uniqKey="Yan Z">Z Yan</name>
</author>
<author>
<name sortKey="Muller, R" uniqKey="Muller R">R. Müller</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Geipel, I" uniqKey="Geipel I">I. Geipel</name>
</author>
<author>
<name sortKey="Jung, K" uniqKey="Jung K">K Jung</name>
</author>
<author>
<name sortKey="Kalko, E K V" uniqKey="Kalko E">E. K. V. Kalko</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Geisler, W S" uniqKey="Geisler W">W. S. Geisler</name>
</author>
<author>
<name sortKey="Perry, J S" uniqKey="Perry J">J. S. Perry</name>
</author>
<author>
<name sortKey="Super, B J" uniqKey="Super B">B. J. Super</name>
</author>
<author>
<name sortKey="Gallogly, D P" uniqKey="Gallogly D">D. P. Gallogly</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gerhardt, H C" uniqKey="Gerhardt H">H. C. Gerhardt</name>
</author>
<author>
<name sortKey="Bee, M A" uniqKey="Bee M">M. A. Bee</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ghose, K" uniqKey="Ghose K">K. Ghose</name>
</author>
<author>
<name sortKey="Moss, C F" uniqKey="Moss C">C. F. Moss</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gibson, J J" uniqKey="Gibson J">J. J. Gibson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gibson, J J" uniqKey="Gibson J">J. J. Gibson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gibson, J J" uniqKey="Gibson J">J. J. Gibson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gibson, J J" uniqKey="Gibson J">J. J. Gibson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gold, B" uniqKey="Gold B">B. Gold</name>
</author>
<author>
<name sortKey="Morgan, N" uniqKey="Morgan N">N. Morgan</name>
</author>
<author>
<name sortKey="Ellis, D" uniqKey="Ellis D">D. Ellis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gollisch, T" uniqKey="Gollisch T">T. Gollisch</name>
</author>
<author>
<name sortKey="Meister, M" uniqKey="Meister M">M. Meister</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gould, S" uniqKey="Gould S">S. Gould</name>
</author>
<author>
<name sortKey="Baumstarck, P" uniqKey="Baumstarck P">P. Baumstarck</name>
</author>
<author>
<name sortKey="Quigley, M" uniqKey="Quigley M">M. Quigley</name>
</author>
<author>
<name sortKey="Ng, A" uniqKey="Ng A">A. Ng</name>
</author>
<author>
<name sortKey="Koller, D" uniqKey="Koller D">D. Koller</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Harris, L R" uniqKey="Harris L">L. R Harris</name>
</author>
<author>
<name sortKey="Jenkin, M R M" uniqKey="Jenkin M">M. R. M. Jenkin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hartley, R" uniqKey="Hartley R">R. Hartley</name>
</author>
<author>
<name sortKey="Zisserman, A" uniqKey="Zisserman A">A. Zisserman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Heiligenberg, W" uniqKey="Heiligenberg W">W. Heiligenberg</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Henderson, J M" uniqKey="Henderson J">J. M. Henderson</name>
</author>
<author>
<name sortKey="Hollingworth, A" uniqKey="Hollingworth A">A. Hollingworth</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hill, D" uniqKey="Hill D">D. Hill</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hiryu, S" uniqKey="Hiryu S">S. Hiryu</name>
</author>
<author>
<name sortKey="Bates, M E" uniqKey="Bates M">M. E. Bates</name>
</author>
<author>
<name sortKey="Simmons, J A" uniqKey="Simmons J">J. A. Simmons</name>
</author>
<author>
<name sortKey="Riquimaroux, H" uniqKey="Riquimaroux H">H. Riquimaroux</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hoiem, D" uniqKey="Hoiem D">D. Hoiem</name>
</author>
<author>
<name sortKey="Savarese, S" uniqKey="Savarese S">S. Savarese</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hulse, S" uniqKey="Hulse S">S. Hulse</name>
</author>
<author>
<name sortKey="Macdougall Shackleton, S" uniqKey="Macdougall Shackleton S">S. MacDougall-Shackleton</name>
</author>
<author>
<name sortKey="Wisniewski, A" uniqKey="Wisniewski A">A. Wisniewski</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hulse, S H" uniqKey="Hulse S">S. H. Hulse</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hyvarinen, A" uniqKey="Hyvarinen A">A. Hyvarinen</name>
</author>
<author>
<name sortKey="Karhunen, J" uniqKey="Karhunen J">J. Karhunen</name>
</author>
<author>
<name sortKey="Oja, E" uniqKey="Oja E">E. Oja</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jackson, R R" uniqKey="Jackson R">R. R. Jackson</name>
</author>
<author>
<name sortKey="Pollard, S D" uniqKey="Pollard S">S. D. Pollard</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jakobsen, L" uniqKey="Jakobsen L">L. Jakobsen</name>
</author>
<author>
<name sortKey="Ratcliffe, J M" uniqKey="Ratcliffe J">J. M. Ratcliffe</name>
</author>
<author>
<name sortKey="Surlykke, A" uniqKey="Surlykke A">A. Surlykke</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jung, K" uniqKey="Jung K">K. Jung</name>
</author>
<author>
<name sortKey="Kalko, E K V" uniqKey="Kalko E">E. K. V Kalko</name>
</author>
<author>
<name sortKey="Von Helversen, O" uniqKey="Von Helversen O">O. von Helversen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Karklin, Y" uniqKey="Karklin Y">Y. Karklin</name>
</author>
<author>
<name sortKey="Lewicki, M S" uniqKey="Lewicki M">M. S. Lewicki</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kersten, D" uniqKey="Kersten D">D. Kersten</name>
</author>
<author>
<name sortKey="Mamassian, P" uniqKey="Mamassian P">P. Mamassian</name>
</author>
<author>
<name sortKey="Yuille, A" uniqKey="Yuille A">A. Yuille</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kersten, D" uniqKey="Kersten D">D. Kersten</name>
</author>
<author>
<name sortKey="Yuille, A" uniqKey="Yuille A">A. Yuille</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Klump, G" uniqKey="Klump G">G. Klump</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Klump, G M" uniqKey="Klump G">G. M. Klump</name>
</author>
<author>
<name sortKey="Larsen, O N" uniqKey="Larsen O">O. N. Larsen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Land, M" uniqKey="Land M">M. Land</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Land, M" uniqKey="Land M">M. Land</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Land, M" uniqKey="Land M">M. Land</name>
</author>
<author>
<name sortKey="Furneaux, S" uniqKey="Furneaux S">S. Furneaux</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Land, M F" uniqKey="Land M">M. F. Land</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Land, M F" uniqKey="Land M">M. F. Land</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Land, M F" uniqKey="Land M">M. F. Land</name>
</author>
<author>
<name sortKey="Hayhoe, M" uniqKey="Hayhoe M">M. Hayhoe</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Land, M F" uniqKey="Land M">M. F. Land</name>
</author>
<author>
<name sortKey="Tatler, B W" uniqKey="Tatler B">B. W. Tatler</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lappe, M" uniqKey="Lappe M">M. Lappe</name>
</author>
<author>
<name sortKey="Bremmer, F" uniqKey="Bremmer F">F Bremmer</name>
</author>
<author>
<name sortKey="Van Den Berg, A V" uniqKey="Van Den Berg A">A. V. van den Berg</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Larsen, O N" uniqKey="Larsen O">O. N. Larsen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lederman, S J" uniqKey="Lederman S">S. J. Lederman</name>
</author>
<author>
<name sortKey="Klatzky, R L" uniqKey="Klatzky R">R. L. Klatzky</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lee, S A" uniqKey="Lee S">S. A. Lee</name>
</author>
<author>
<name sortKey="Spelke, E S" uniqKey="Spelke E">E. S. Spelke</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lissmann, H W" uniqKey="Lissmann H">H. W. Lissmann</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lowe, D G" uniqKey="Lowe D">D. G. Lowe</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lowe, D G" uniqKey="Lowe D">D. G. Lowe</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Marler, P" uniqKey="Marler P">P. Marler</name>
</author>
<author>
<name sortKey="Slabbekoorn, H W" uniqKey="Slabbekoorn H">H. W. Slabbekoorn</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Marr, D" uniqKey="Marr D">D. Marr</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Martin, D" uniqKey="Martin D">D. Martin</name>
</author>
<author>
<name sortKey="Fowlkes, C" uniqKey="Fowlkes C">C. Fowlkes</name>
</author>
<author>
<name sortKey="Malik, J" uniqKey="Malik J">J. Malik</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Masland, R H" uniqKey="Masland R">R. H. Masland</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mcdermott, J H" uniqKey="Mcdermott J">J. H. McDermott</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mcdermott, J H" uniqKey="Mcdermott J">J. H. McDermott</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Melcher, D" uniqKey="Melcher D">D. Melcher</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Melcher, D" uniqKey="Melcher D">D. Melcher</name>
</author>
<author>
<name sortKey="Colby, C L" uniqKey="Colby C">C. L. Colby</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Miller, G A" uniqKey="Miller G">G. A Miller</name>
</author>
<author>
<name sortKey="Licklider, J C R" uniqKey="Licklider J">J. C. R. Licklider</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Miller, L A" uniqKey="Miller L">L. A. Miller</name>
</author>
<author>
<name sortKey="Treat, A E" uniqKey="Treat A">A. E. Treat</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mogdans, J" uniqKey="Mogdans J">J. Mogdans</name>
</author>
<author>
<name sortKey="Ostwald, J" uniqKey="Ostwald J">J. Ostwald</name>
</author>
<author>
<name sortKey="Schnitzler, H U" uniqKey="Schnitzler H">H.-U. Schnitzler</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Morton, E" uniqKey="Morton E">E. Morton</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Morton, E" uniqKey="Morton E">E. Morton</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Morton, E S" uniqKey="Morton E">E. S. Morton</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Morton, E S" uniqKey="Morton E">E. S. Morton</name>
</author>
<author>
<name sortKey="Howlett, J" uniqKey="Howlett J">J. Howlett</name>
</author>
<author>
<name sortKey="Kopysh, N C" uniqKey="Kopysh N">N. C. Kopysh</name>
</author>
<author>
<name sortKey="Chiver, I" uniqKey="Chiver I">I. Chiver</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Moss, C" uniqKey="Moss C">C. Moss</name>
</author>
<author>
<name sortKey="Surlykke, A" uniqKey="Surlykke A">A. Surlykke</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Moss, C F" uniqKey="Moss C">C. F. Moss</name>
</author>
<author>
<name sortKey="Surlykke, A" uniqKey="Surlykke A">A. Surlykke</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Murphy, C G" uniqKey="Murphy C">C. G. Murphy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nagata, T" uniqKey="Nagata T">T. Nagata</name>
</author>
<author>
<name sortKey="Koyanagi, M" uniqKey="Koyanagi M">M. Koyanagi</name>
</author>
<author>
<name sortKey="Tsukamoto, H" uniqKey="Tsukamoto H">H. Tsukamoto</name>
</author>
<author>
<name sortKey="Saeki, S" uniqKey="Saeki S">S. Saeki</name>
</author>
<author>
<name sortKey="Isono, K" uniqKey="Isono K">K. Isono</name>
</author>
<author>
<name sortKey="Shichida, Y" uniqKey="Shichida Y">Y. Shichida</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Naguib, M" uniqKey="Naguib M">M. Naguib</name>
</author>
<author>
<name sortKey="Kunc, H P" uniqKey="Kunc H">H. P. Kunc</name>
</author>
<author>
<name sortKey="Sprau, P" uniqKey="Sprau P">P. Sprau</name>
</author>
<author>
<name sortKey="Roth, T" uniqKey="Roth T">T. Roth</name>
</author>
<author>
<name sortKey="Amrhein, V" uniqKey="Amrhein V">V. Amrhein</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Naguib, M" uniqKey="Naguib M">M. Naguib</name>
</author>
<author>
<name sortKey="Wiley, R H" uniqKey="Wiley R">R. H. Wiley</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Najemnik, J" uniqKey="Najemnik J">J. Najemnik</name>
</author>
<author>
<name sortKey="Geisler, W S" uniqKey="Geisler W">W. S. Geisler</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Najemnik, J" uniqKey="Najemnik J">J. Najemnik</name>
</author>
<author>
<name sortKey="Geisler, W S" uniqKey="Geisler W">W. S. Geisler</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nakayama, K" uniqKey="Nakayama K">K. Nakayama</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nakayama, K" uniqKey="Nakayama K">K. Nakayama</name>
</author>
<author>
<name sortKey="He, Z" uniqKey="He Z">Z. He</name>
</author>
<author>
<name sortKey="Shimojo, S" uniqKey="Shimojo S">S. Shimojo</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nelson, B S" uniqKey="Nelson B">B. S. Nelson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nelson, B S" uniqKey="Nelson B">B. S. Nelson</name>
</author>
<author>
<name sortKey="Stoddard, P K" uniqKey="Stoddard P">P. K. Stoddard</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nelson, B S" uniqKey="Nelson B">B. S. Nelson</name>
</author>
<author>
<name sortKey="Suthers, R A" uniqKey="Suthers R">R. A. Suthers</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nelson, D A" uniqKey="Nelson D">D. A. Nelson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nelson, D A" uniqKey="Nelson D">D. A. Nelson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nelson, M E" uniqKey="Nelson M">M. E. Nelson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Neuhoff, J G" uniqKey="Neuhoff J">J. G. Neuhoff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nilsson, D" uniqKey="Nilsson D">D. Nilsson</name>
</author>
<author>
<name sortKey="Gislen, L" uniqKey="Gislen L">L. Gislen</name>
</author>
<author>
<name sortKey="Coates, M" uniqKey="Coates M">M. Coates</name>
</author>
<author>
<name sortKey="Skogh, C" uniqKey="Skogh C">C. Skogh</name>
</author>
<author>
<name sortKey="Garm, A" uniqKey="Garm A">A. Garm</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="O Onnor, M" uniqKey="O Onnor M">M. O’Connor</name>
</author>
<author>
<name sortKey="Garm, A" uniqKey="Garm A">A. Garm</name>
</author>
<author>
<name sortKey="Nilsson, D E" uniqKey="Nilsson D">D.-E. Nilsson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Oliva, A" uniqKey="Oliva A">A. Oliva</name>
</author>
<author>
<name sortKey="Torralba, A" uniqKey="Torralba A">A. Torralba</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Palmer, S E" uniqKey="Palmer S">S. E. Palmer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pizlo, Z" uniqKey="Pizlo Z">Z. Pizlo</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Poggio, T" uniqKey="Poggio T">T. Poggio</name>
</author>
<author>
<name sortKey="Koch, C" uniqKey="Koch C">C. Koch</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pohl, N U" uniqKey="Pohl N">N. U. Pohl</name>
</author>
<author>
<name sortKey="Slabbekoorn, H" uniqKey="Slabbekoorn H">H. Slabbekoorn</name>
</author>
<author>
<name sortKey="Klump, G M" uniqKey="Klump G">G. M. Klump</name>
</author>
<author>
<name sortKey="Langemann, U" uniqKey="Langemann U">U. Langemann</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rabiner, L R" uniqKey="Rabiner L">L. R. Rabiner</name>
</author>
<author>
<name sortKey="Juang, B H" uniqKey="Juang B">B.-H. Juang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Roberts, L G" uniqKey="Roberts L">L. G. Roberts</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schnitzler, H U" uniqKey="Schnitzler H">H.-U. Schnitzler</name>
</author>
<author>
<name sortKey="Flieger, E" uniqKey="Flieger E">E. Flieger</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Shi, J" uniqKey="Shi J">J. Shi</name>
</author>
<author>
<name sortKey="Malik, J" uniqKey="Malik J">J. Malik</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Shinn Cunningham, B G" uniqKey="Shinn Cunningham B">B. G. Shinn-Cunningham</name>
</author>
<author>
<name sortKey="Lee, A K C" uniqKey="Lee A">A. K. C. Lee</name>
</author>
<author>
<name sortKey="Oxenham, A J" uniqKey="Oxenham A">A. J. Oxenham</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Shy, E" uniqKey="Shy E">E. Shy</name>
</author>
<author>
<name sortKey="Morton, E" uniqKey="Morton E">E. Morton</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Siemers, B M" uniqKey="Siemers B">B. M. Siemers</name>
</author>
<author>
<name sortKey="Schnitzler, H U" uniqKey="Schnitzler H">H.-U. Schnitzler</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sorjonen, J" uniqKey="Sorjonen J">J. Sorjonen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Spitzer, M W" uniqKey="Spitzer M">M. W. Spitzer</name>
</author>
<author>
<name sortKey="Bala, A D S" uniqKey="Bala A">A. D. S. Bala</name>
</author>
<author>
<name sortKey="Takahashi, T T" uniqKey="Takahashi T">T. T. Takahashi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Spitzer, M W" uniqKey="Spitzer M">M. W. Spitzer</name>
</author>
<author>
<name sortKey="Takahashi, T T" uniqKey="Takahashi T">T. T. Takahashi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Surlykke, A" uniqKey="Surlykke A">A. Surlykke</name>
</author>
<author>
<name sortKey="Ghose, K" uniqKey="Ghose K">K. Ghose</name>
</author>
<author>
<name sortKey="Moss, C F" uniqKey="Moss C">C. F. Moss</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tarsitano, M S" uniqKey="Tarsitano M">M. S. Tarsitano</name>
</author>
<author>
<name sortKey="Jackson, R R" uniqKey="Jackson R">R. R. Jackson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tatler, B W" uniqKey="Tatler B">B. W. Tatler</name>
</author>
<author>
<name sortKey="Gilchrist, I D" uniqKey="Gilchrist I">I. D. Gilchrist</name>
</author>
<author>
<name sortKey="Land, M F" uniqKey="Land M">M. F. Land</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tatler, B W" uniqKey="Tatler B">B. W. Tatler</name>
</author>
<author>
<name sortKey="Land, M F" uniqKey="Land M">M. F. Land</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Thrun, S" uniqKey="Thrun S">S. Thrun</name>
</author>
<author>
<name sortKey="Burgard, W" uniqKey="Burgard W">W. Burgard</name>
</author>
<author>
<name sortKey="Fox, D" uniqKey="Fox D">D. Fox</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tommasi, L" uniqKey="Tommasi L">L. Tommasi</name>
</author>
<author>
<name sortKey="Chiandetti, C" uniqKey="Chiandetti C">C. Chiandetti</name>
</author>
<author>
<name sortKey="Pecchia, T" uniqKey="Pecchia T">T. Pecchia</name>
</author>
<author>
<name sortKey="Sovrano, V A" uniqKey="Sovrano V">V. A. Sovrano</name>
</author>
<author>
<name sortKey="Vallortigara, G" uniqKey="Vallortigara G">G. Vallortigara</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tsoar, A" uniqKey="Tsoar A">A. Tsoar</name>
</author>
<author>
<name sortKey="Nathan, R" uniqKey="Nathan R">R. Nathan</name>
</author>
<author>
<name sortKey="Bartan, Y" uniqKey="Bartan Y">Y. Bartan</name>
</author>
<author>
<name sortKey="Vyssotski, A" uniqKey="Vyssotski A">A. Vyssotski</name>
</author>
<author>
<name sortKey="Dell Mo, G" uniqKey="Dell Mo G">G. Dell’Omo</name>
</author>
<author>
<name sortKey="Ulanovsky, N" uniqKey="Ulanovsky N">N. Ulanovsky</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tu, Z" uniqKey="Tu Z">Z. Tu</name>
</author>
<author>
<name sortKey="Zhu, S C" uniqKey="Zhu S">S.-C. Zhu</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ulanovsky, N" uniqKey="Ulanovsky N">N. Ulanovsky</name>
</author>
<author>
<name sortKey="Fenton, M B" uniqKey="Fenton M">M. B. Fenton</name>
</author>
<author>
<name sortKey="Tsoar, A" uniqKey="Tsoar A">A. Tsoar</name>
</author>
<author>
<name sortKey="Korine, C" uniqKey="Korine C">C. Korine</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Von Der Emde, G" uniqKey="Von Der Emde G">G. von der Emde</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Von Der Emde, G" uniqKey="Von Der Emde G">G. von der Emde</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Von Der Emde, G" uniqKey="Von Der Emde G">G. von der Emde</name>
</author>
<author>
<name sortKey="Behr, K" uniqKey="Behr K">K. Behr</name>
</author>
<author>
<name sortKey="Bouton, B" uniqKey="Bouton B">B. Bouton</name>
</author>
<author>
<name sortKey="Engelmann, J" uniqKey="Engelmann J">J. Engelmann</name>
</author>
<author>
<name sortKey="Fetz, S" uniqKey="Fetz S">S. Fetz</name>
</author>
<author>
<name sortKey="Folde, C" uniqKey="Folde C">C. Folde</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Von Der Emde, G" uniqKey="Von Der Emde G">G. von der Emde</name>
</author>
<author>
<name sortKey="Menne, D" uniqKey="Menne D">D. Menne</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Von Der Emde, G" uniqKey="Von Der Emde G">G. von der Emde</name>
</author>
<author>
<name sortKey="Schnitzler, H U" uniqKey="Schnitzler H">H.-U. Schnitzler</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Von Der Emde, G" uniqKey="Von Der Emde G">G. von der Emde</name>
</author>
<author>
<name sortKey="Schnitzler, H U" uniqKey="Schnitzler H">H.-U. Schnitzler</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Von Der Emde, G" uniqKey="Von Der Emde G">G. von der Emde</name>
</author>
<author>
<name sortKey="Schwarz, S" uniqKey="Schwarz S">S. Schwarz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Von Uexkull, J" uniqKey="Von Uexkull J">J. von Uexküll</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wade, N" uniqKey="Wade N">N. Wade</name>
</author>
<author>
<name sortKey="Tatler, B W" uniqKey="Tatler B">B. W. Tatler</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Waltz, D" uniqKey="Waltz D">D. Waltz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wiley, R H" uniqKey="Wiley R">R. H. Wiley</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wiley, R H" uniqKey="Wiley R">R. H. Wiley</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wiley, R H" uniqKey="Wiley R">R. H. Wiley</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wisniewski, A B" uniqKey="Wisniewski A">A. B. Wisniewski</name>
</author>
<author>
<name sortKey="Hulse, S H" uniqKey="Hulse S">S. H. Hulse</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wolbers, T" uniqKey="Wolbers T">T. Wolbers</name>
</author>
<author>
<name sortKey="Hegarty, M" uniqKey="Hegarty M">M. Hegarty</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wolbers, T" uniqKey="Wolbers T">T. Wolbers</name>
</author>
<author>
<name sortKey="Hegarty, M" uniqKey="Hegarty M">M. Hegarty</name>
</author>
<author>
<name sortKey="Buchel, C" uniqKey="Buchel C">C. Büchel</name>
</author>
<author>
<name sortKey="Loomis, J M" uniqKey="Loomis J">J. M. Loomis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wolbers, T" uniqKey="Wolbers T">T. Wolbers</name>
</author>
<author>
<name sortKey="Klatzky, R L" uniqKey="Klatzky R">R. L. Klatzky</name>
</author>
<author>
<name sortKey="Loomis, J M" uniqKey="Loomis J">J. M. Loomis</name>
</author>
<author>
<name sortKey="Wutte, M G" uniqKey="Wutte M">M. G. Wutte</name>
</author>
<author>
<name sortKey="Giudice, N A" uniqKey="Giudice N">N. A. Giudice</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wong, R Y" uniqKey="Wong R">R. Y. Wong</name>
</author>
<author>
<name sortKey="Hopkins, C D" uniqKey="Hopkins C">C. D. Hopkins</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Yuille, A" uniqKey="Yuille A">A. Yuille</name>
</author>
<author>
<name sortKey="Kersten, D" uniqKey="Kersten D">D. Kersten</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zokoll, M A" uniqKey="Zokoll M">M. A. Zokoll</name>
</author>
<author>
<name sortKey="Klump, G M" uniqKey="Klump G">G. M. Klump</name>
</author>
<author>
<name sortKey="Langemann, U" uniqKey="Langemann U">U. Langemann</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zokoll, M A" uniqKey="Zokoll M">M. A. Zokoll</name>
</author>
<author>
<name sortKey="Klump, G M" uniqKey="Klump G">G. M. Klump</name>
</author>
<author>
<name sortKey="Langemann, U" uniqKey="Langemann U">U. Langemann</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="review-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Front Psychol</journal-id>
<journal-id journal-id-type="iso-abbrev">Front Psychol</journal-id>
<journal-id journal-id-type="publisher-id">Front. Psychol.</journal-id>
<journal-title-group>
<journal-title>Frontiers in Psychology</journal-title>
</journal-title-group>
<issn pub-type="epub">1664-1078</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">24744740</article-id>
<article-id pub-id-type="pmc">3978336</article-id>
<article-id pub-id-type="doi">10.3389/fpsyg.2014.00199</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Psychology</subject>
<subj-group>
<subject>Review Article</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Scene analysis in the natural environment</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Lewicki</surname>
<given-names>Michael S.</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="author-notes" rid="fn001">
<sup>*</sup>
</xref>
<uri xlink:type="simple" xlink:href="http://community.frontiersin.org/people/u/32060"></uri>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Olshausen</surname>
<given-names>Bruno A.</given-names>
</name>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<uri xlink:type="simple" xlink:href="http://community.frontiersin.org/people/u/8096"></uri>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Surlykke</surname>
<given-names>Annemarie</given-names>
</name>
<xref ref-type="aff" rid="aff3">
<sup>3</sup>
</xref>
<uri xlink:type="simple" xlink:href="http://community.frontiersin.org/people/u/12232"></uri>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Moss</surname>
<given-names>Cynthia F.</given-names>
</name>
<xref ref-type="aff" rid="aff4">
<sup>4</sup>
</xref>
<uri xlink:type="simple" xlink:href="http://community.frontiersin.org/people/u/12230"></uri>
</contrib>
</contrib-group>
<aff id="aff1">
<sup>1</sup>
<institution>Department of Electrical Engineering and Computer Science, Case Western Reserve University</institution>
<country>Cleveland, OH, USA</country>
</aff>
<aff id="aff2">
<sup>2</sup>
<institution>Helen Wills Neuroscience Institute, School of Optometry, Redwood Center for Theoretical Neuroscience, University of California at Berkeley</institution>
<country>Berkeley, CA, USA</country>
</aff>
<aff id="aff3">
<sup>3</sup>
<institution>Department of Biology, University of Southern Denmark</institution>
<country>Odense, Denmark</country>
</aff>
<aff id="aff4">
<sup>4</sup>
<institution>Department of Psychology and Institute for Systems Research, University of Maryland</institution>
<country>College Park, MD, USA</country>
</aff>
<author-notes>
<fn fn-type="edited-by">
<p>Edited by:
<italic>Laurence T. Maloney, Stanford University, USA</italic>
</p>
</fn>
<fn fn-type="edited-by">
<p>Reviewed by:
<italic>Wolfgang Einhauser, Philipps-Universität Marburg, Germany; Sébastien M. Crouzet, Charité University Medicine, Germany</italic>
</p>
</fn>
<corresp id="fn001">*Correspondence:
<italic>Michael S. Lewicki, Department of Electrical Engineering and Computer Science, Case Western Reserve University, Glennan Building, Room 321, 10900 Euclid Avenue, Cleveland, OH 44106-7071, USA e-mail:
<email xlink:type="simple">michael.lewicki@case.edu</email>
</italic>
</corresp>
<fn fn-type="other" id="fn002">
<p>This article was submitted to Perception Science, a section of the journal Frontiers in Psychology.</p>
</fn>
</author-notes>
<pub-date pub-type="epreprint">
<day>03</day>
<month>1</month>
<year>2014</year>
</pub-date>
<pub-date pub-type="epub">
<day>01</day>
<month>4</month>
<year>2014</year>
</pub-date>
<pub-date pub-type="collection">
<year>2014</year>
</pub-date>
<volume>5</volume>
<elocation-id>199</elocation-id>
<history>
<date date-type="received">
<day>01</day>
<month>11</month>
<year>2013</year>
</date>
<date date-type="accepted">
<day>20</day>
<month>2</month>
<year>2014</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright © 2014 Lewicki, Olshausen, Surlykke and Moss.</copyright-statement>
<copyright-year>2014</copyright-year>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/3.0/">
<license-p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</license-p>
</license>
</permissions>
<abstract>
<p>The problem of scene analysis has been studied in a number of different fields over the past decades. These studies have led to important insights into problems of scene analysis, but not all of these insights are widely appreciated, and there remain critical shortcomings in current approaches that hinder further progress. Here we take the view that scene analysis is a universal problem solved by all animals, and that we can gain new insight by studying the problems that animals face in complex natural environments. In particular, the jumping spider, songbird, echolocating bat, and electric fish, all exhibit behaviors that require robust solutions to scene analysis problems encountered in the natural environment. By examining the behaviors of these seemingly disparate animals, we emerge with a framework for studying scene analysis comprising four essential properties: (1) the ability to solve ill-posed problems, (2) the ability to integrate and store information across time and modality, (3) efficient recovery and representation of 3D scene structure, and (4) the use of optimal motor actions for acquiring information to progress toward behavioral goals.</p>
</abstract>
<kwd-group>
<kwd>active perception</kwd>
<kwd>auditory streaming</kwd>
<kwd>echolocation</kwd>
<kwd>vision</kwd>
<kwd>electroreception</kwd>
<kwd>scene analysis</kwd>
<kwd>top-down processes</kwd>
<kwd>neuroethology</kwd>
</kwd-group>
<counts>
<fig-count count="6"></fig-count>
<table-count count="0"></table-count>
<equation-count count="0"></equation-count>
<ref-count count="173"></ref-count>
<page-count count="21"></page-count>
<word-count count="0"></word-count>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="intro">
<title>INTRODUCTION</title>
<p>In recent decades, research on scene analysis has advanced in many different fields. Perceptual studies have characterized the many cues that contribute to scene analysis capabilities. Computational approaches have made great strides in developing algorithms for processing real-world scenes. Animal behavior and neurobiological studies have investigated animal capabilities and neural representations of stimulus features. In spite of these advances, we believe there remain fundamental limitations in many of the ways scene analysis is defined and studied, and that these will continue to impede research progress until these shortcomings are more widely recognized and new approaches are devised to overcome them. The purpose of this article is to identify these shortcomings and to propose a framework for studying scene analysis that embraces the complex problems that animals need to solve in the natural environment.</p>
<p>A major limitation we see in current approaches is that they do not acknowledge or address the complexity of the problems that need to be solved. Experiments based on simplistic, reflexive models of animal behavior, or with the implicit assumption of simple feature detection schemes, have little chance of providing insight into the mechanisms of scene analysis in complex natural settings. An additional limitation lies with the extensive use of “idealized” stimuli and stripped-down tasks that yield results which are often difficult to generalize to more ecologically relevant stimuli and behaviors. For example, scene analysis experiments designed around auditory tone bursts are of limited value in helping us to understand how complex acoustic patterns such as speech are separated from noisy acoustic backgrounds. Visual grouping experiments using bar-like stimuli are a far cry from the situation a predator faces in detecting and tracking prey in a complex visual environment. At the same time, computational approaches in the engineering and computer science community, although often applied to natural scenes, have provided only limited insight into scene perception in humans and other animals. The disconnect here is due to the fact that tasks are chosen according to certain technological goals that are often motivated by industrial applications (e.g., image search) where the computational goals are different from those in more ecologically relevant settings. In the neuroscience community, studies of animal behavior and physiology have focused largely on specific stimulus features or assumed feedforward processing pipelines that do not address the more complex set of problems required for extraction of these stimulus features in natural scenes.</p>
<p>Here we argue for a view of scene analysis that is broader, more ecological, and which encompasses the diverse set of problems faced by animals. We believe the study of animals is essential for advancing our understanding of scene analysis because, having evolved in complex environments, they have developed a wide range of robust solutions to perceptual problems that make optimal tradeoffs between performance and resource constraints. This premise is important, because it provides a means to develop testable theories and predictive models that do not just describe a set of phenomena, but are based on optimal solutions to well-defined computational goals. Our perspective is similar to that advocated by
<xref rid="B96" ref-type="bibr">Marr (1982)</xref>
: scene analysis is fundamentally an information processing task and encompasses a complex set of interrelated problems. What those problems are, however, remains poorly understood. Therefore, one of the primary research objectives must be to identify the problems that need to be solved.</p>
</sec>
<sec>
<title>CURRENT APPROACHES TO SCENE ANALYSIS</title>
<p>Here, we examine some of the current perceptual and computational approaches to studying scene analysis. We highlight concepts in these areas that are fundamental to scene analysis and that can inform future experimental and computational approaches. We also discuss the limitations that make current concepts inadequate for understanding scene analysis in the natural environment. Our discussion is guided by two fundamental questions: how do these approaches inform us about the problems that humans and other animals actually need to solve? How do they inform us about the computations required for the analysis of natural scenes?</p>
<sec>
<title>PERCEPTUAL STUDIES OF SCENE ANALYSIS</title>
<sec>
<title>Grouping and selective attention</title>
<p>Many perceptual approaches to scene analysis have their roots in the Gestalt studies of visual grouping and related problems of selective attention. These cast scene analysis as a problem of perceptual organization where different elements of the image must be appropriately grouped or “parsed,” e.g., into foreground vs. background or local features belonging to the same contour or surface (
<xref rid="B130" ref-type="bibr">Palmer, 1999</xref>
). Likewise in audition, scene analysis is commonly viewed as the problem of organizing the features in the input into different auditory streams (
<xref rid="B21" ref-type="bibr">Bregman, 1990</xref>
).</p>
<p>While these lines of research have led to the discovery of a wealth of intriguing phenomena, grouping by itself does not yield a representation of objects or 3D scene structure that is adequate to guide purposeful actions. Although it seems reasonable that grouping may make the problem easier, this assumption is rarely tested in practice. The main problem we face in scene analysis is the
<italic>interpretation</italic>
of sensory information. We see the figure in the well-known Dalmatian dog image not by determining which blob goes with which, or by separating foreground from background, but by arriving at an interpretation of the relation among 2D blobs that is consistent with a 3D scene – i.e., it is not just the dog but other aspects of the scene, such as the outlines of the sidewalk and the shadow cast by an adjacent tree, that are all perceived at once (
<xref rid="B171" ref-type="bibr">Yuille and Kersten, 2006</xref>
). Similar arguments apply to visual feature or region segmentation strategies (e.g.,
<xref rid="B137" ref-type="bibr">Shi and Malik, 2000</xref>
). These only give the illusion of solving the problem, because we as observers can look at the result and interpret it. But who is looking at such representations inside the brain? How does the ability to group texture regions or outline the bounding contour of an object within a 2D image translate into directing action toward a point in space? How does it help you navigate a complex 3D scene? It is questionable whether currently studied subproblems of perceptual organization and grouping bring us any closer to understanding the structure of the scene or the mechanisms of its analysis.</p>
<p>Another limitation in many approaches to grouping and attention, and experimental stimuli in general, is that they presume a set of artificial features that do not capture the complexity of natural scenes, e.g., fields of oriented bars or, in audition, sequences of tone pips (
<xref rid="B15" ref-type="bibr">Bergen and Julesz, 1983</xref>
;
<xref rid="B21" ref-type="bibr">Bregman, 1990</xref>
). What most models fail to address is that attending to a feature or grouping a region of the scene is rarely sufficient to solve a perceptual problem, because the desired information (such as object or surface structure) is entangled in complex ways with other structures in the scene, e.g., occluding surfaces, shadows, and the visual (or auditory) background. How does the ability to group tone sequences translate into our ability to perceive speech and other sounds in natural soundscapes? In natural scenes, it is not obvious what the important features are or how to extract them, although more recent work has begun to explore the perceptual organization of contours in natural scenes (
<xref rid="B53" ref-type="bibr">Geisler et al., 2001</xref>
;
<xref rid="B45" ref-type="bibr">Elder and Goldberg, 2002</xref>
;
<xref rid="B99" ref-type="bibr">McDermott, 2004</xref>
), and the role of the context of the natural visual scene itself (
<xref rid="B129" ref-type="bibr">Oliva and Torralba, 2007</xref>
).</p>
<p>Biological systems have evolved in the natural environment, where the structure of sensory signals is highly complex. The use of simplified artificial stimuli and tasks thus tests the system far outside the domain in which it has evolved to operate. The problem with this approach is that the system we seek to characterize is highly non-linear, and unlike linear systems there is no universal way to characterize such systems in terms of a reduced set of functions. The question we need to be asking is: how can we preserve the ecological complexity and relevance of the input and task while still introducing informative experimental manipulations? And for the experimental simplifications and idealizations we choose, can we show that the results generalize to more natural settings?</p>
</sec>
<sec>
<title>Spatial perception</title>
<p>Animals must act in a 3D world, so the recovery of 3D spatial information sufficient to guide behavior is a fundamental problem in scene analysis. The numerous cues that contribute to 3D spatial perception, such as depth, shape, and spatial layout, have been well studied (
<xref rid="B63" ref-type="bibr">Harris and Jenkin, 2011</xref>
); however, most are studied in isolation or in “idealized” settings that bypass more complex processes of scene analysis (see, e.g.,
<xref rid="B38" ref-type="bibr">Cutting and Vishton, 1995</xref>
). In natural images, local cues such as disparity or motion are usually highly ambiguous and difficult to estimate reliably. Moreover, many widely studied models are formulated almost exclusively in terms of artificial features embedded in a flat, 2D scene, and it is not clear how these results inform us about scene analysis in the realm of complex 3D scenes typical of sensory experience. Spatial audition is an equally important aspect of scene analysis for both humans and animals, but we have only a limited understanding of how spatial auditory cues such as timing and intensity differences could be extracted from complex acoustic environments (see, e.g.,
<xref rid="B18" ref-type="bibr">Blauert, 1997</xref>
;
<xref rid="B126" ref-type="bibr">Neuhoff, 2004</xref>
).</p>
<p>The extraction of low-level features such as disparity or inter-aural timing differences is only the beginning of a complex inference process in scene analysis. A more fundamental issue is the question of what types of spatial information animals need to derive from the scene and how these are represented and integrated with other information sources. Visual cues of spatial layout, such as disparity or motion parallax, are retinocentric and cannot directly drive action without accounting for the movements and positions of the eyes, head, and body (
<xref rid="B101" ref-type="bibr">Melcher, 2011</xref>
). It is often assumed – either implicitly or explicitly – that simply having a representation of the depth of each point in a scene provides a sufficient representation of 3D space. But as
<xref rid="B119" ref-type="bibr">Nakayama et al. (1995)</xref>
have noted, this is not necessarily the case:</p>
<disp-quote>
<p>Because we have a two-dimensional retina and because we live in a three-dimensional world, many have seen the problem of space perception as the recovery of the third dimension…. Yet there are reasons to think that [Euclidean geometry] is not the manner in which spatial distance is encoded in the visual system. Perceptual psychologist
<xref rid="B58" ref-type="bibr">Gibson (1966)</xref>
argues that space is not perceived in this way but in terms of the surfaces that fill space. The most important and ecologically relevant surface is the ground plane. In Gibson’s view, Euclidian distances between arbitrary points in three-dimensional space are not biologically relevant (see also
<xref rid="B118" ref-type="bibr">Nakayama, 1994</xref>
). We see our world in terms of surfaces and plan our actions accordingly.</p>
</disp-quote>
<p>Currently, we have a very limited understanding of how surfaces might be computed and represented or to what extent this constitutes an adequate representation of the natural scene.</p>
</sec>
<sec>
<title>Active perception</title>
<p>Scene analysis can also be viewed as an active process that gathers information about the scene (
<xref rid="B6" ref-type="bibr">Ballard, 1991</xref>
), often in a task-driven manner. This is in contrast to the more standard, passive view that overlooks the contribution of goal-directed action. Hypotheses about the system’s goals are essential for gaining insight into underlying computational principles. Active perception models lend themselves to more clearly defined goals, because information is gathered for the task at hand, such as in gaze control, visual search, guidance of limb movement, or locomotion over the immediate terrain (
<xref rid="B57" ref-type="bibr">Gibson, 1958</xref>
;
<xref rid="B66" ref-type="bibr">Henderson and Hollingworth, 1999</xref>
;
<xref rid="B88" ref-type="bibr">Lappe et al., 1999</xref>
;
<xref rid="B87" ref-type="bibr">Land and Tatler, 2009</xref>
;
<xref rid="B90" ref-type="bibr">Lederman and Klatzky, 2009</xref>
;
<xref rid="B167" ref-type="bibr">Wolbers and Hegarty, 2010</xref>
). These studies move closer to ecological relevance, but identifying what drives the acquisition of specific information under natural conditions, and how information is integrated across saccades or probing actions to appropriately guide action, remain open problems.</p>
</sec>
</sec>
<sec>
<title>COMPUTATIONAL APPROACHES TO SCENE ANALYSIS</title>
<p>Computational approaches to problems of scene analysis began decades ago – Gibson first published his ideas on the process of visual scene analysis in the 1950s (
<xref rid="B56" ref-type="bibr">Gibson, 1950</xref>
,
<xref rid="B57" ref-type="bibr">1958</xref>
); the cocktail party problem was also first described and studied around the same time by
<xref rid="B32" ref-type="bibr">Cherry (1953)</xref>
, when the first speech recognition systems were being built at Bell Labs (
<xref rid="B40" ref-type="bibr">Davis et al., 1952</xref>
). Yet, even today, many aspects of scene analysis are still open problems in machine vision and speech processing. Why has it been so hard? Problems can be hard because the right way to approach them is not understood, or because they are computationally complex. Scene analysis is hard for both reasons. Nearly 30 years after these early investigations,
<xref rid="B96" ref-type="bibr">Marr (1982)</xref>
noted that</p>
<disp-quote>
<p>…in the 1960s almost no one realized that machine vision was difficult…the idea that extracting edges and lines from images might be at all difficult simply did not occur to those who had not tried to do it. It turned out to be an elusive problem. Edges that are of critical importance from a three-dimensional point of view often cannot be found at all by looking at the intensity changes in an image. Any kind of textured image gives a multitude of noisy edge segments; variations in reflectance and illumination cause no end of trouble; and even if an edge has a clear existence at one point, it is as likely as not to fade out quite soon, appearing only in patches along its length in the image.</p>
</disp-quote>
<p>Evidently, there is a vast gulf between our introspective notions of how we perceive scenes and what is actually needed to accomplish this. Computational models thus force us to ground our reasoning by testing our assumptions about the types of representations needed for solving a task, and by exposing what works and what does not.</p>
<sec>
<title>Ill-posed problems</title>
<p>A formal mathematical reason scene perception is a difficult computational problem is that it is
<italic>ill-posed</italic>
(
<xref rid="B132" ref-type="bibr">Poggio and Koch, 1985</xref>
;
<xref rid="B131" ref-type="bibr">Pizlo, 2001</xref>
;
<xref rid="B100" ref-type="bibr">McDermott, 2009</xref>
), meaning that there is not enough sensory data available to arrive at a unique solution, and often there are a very large number of possible solutions. Ambiguity in the raw sensory input can only be resolved using
<italic>a priori</italic>
knowledge about scene structure (
<xref rid="B131" ref-type="bibr">Pizlo, 2001</xref>
;
<xref rid="B78" ref-type="bibr">Kersten and Yuille, 2003</xref>
;
<xref rid="B171" ref-type="bibr">Yuille and Kersten, 2006</xref>
).</p>
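<p>To make the role of prior knowledge explicit, this can be cast (in a generic Bayesian formulation, not one specific to any of the models cited above) as maximum <italic>a posteriori</italic> inference: given sensory data <italic>D</italic> that many candidate scene descriptions <italic>S</italic> could have produced, the preferred interpretation is</p>
<disp-formula>
\hat{S} \;=\; \arg\max_{S}\, p(S \mid D) \;=\; \arg\max_{S}\, p(D \mid S)\, p(S),
</disp-formula>
<p>where the likelihood <italic>p</italic>(<italic>D</italic> | <italic>S</italic>) models how scenes generate sensory data, and the prior <italic>p</italic>(<italic>S</italic>) supplies the structural knowledge that selects among otherwise indistinguishable solutions.</p>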
<p>An early paradigm for demonstrating the importance of structural knowledge in visual scene analysis was the “blocks world” where the objective is to parse or segment the scene by grouping local 2D edges and junctions into separate (3D) structures (
<xref rid="B135" ref-type="bibr">Roberts, 1965</xref>
;
<xref rid="B162" ref-type="bibr">Waltz, 1975</xref>
; for a more recent perspective, see
<xref rid="B50" ref-type="bibr">Frisby and Stone, 2010</xref>
). Because the role of a local feature is ambiguous and there are a combinatorial number of possible groupings, the problem is not trivial. Although this is a highly reduced problem (and therefore artificial), one of the key insights from this research was that the ambiguity in “bottom-up” information can be resolved by using “top-down” knowledge of structural relationships and optimization algorithms to find the best solutions. More sophisticated approaches can, for example, recover object shapes or rectangular room layouts from photographs (
<xref rid="B69" ref-type="bibr">Hoiem and Savarese, 2011</xref>
).</p>
<p>Some of the most successful approaches to recovering 3D structure use prior knowledge regarding the geometry of corresponding points in the left and right images (or more generally multiple images;
<xref rid="B64" ref-type="bibr">Hartley and Zisserman, 2004</xref>
). These methods mainly recover the Euclidean coordinates of points in the scene, which is of limited relevance to biology, but the underlying mathematics provides a fundamental statement of what information is required, such as inference of the observer’s position and motion in addition to the scene structure. Recovering more complex shapes and surfaces remains an area of active research, but some recent model-based approaches can accurately infer complex 3D shapes such as faces or animals from real images (
<xref rid="B17" ref-type="bibr">Blanz and Vetter, 2003</xref>
;
<xref rid="B30" ref-type="bibr">Cashman and Fitzgibbon, 2013</xref>
).</p>
<p>In the auditory domain, successful scene analysis approaches also make extensive use of statistical inference methods to solve ill-posed problems (
<xref rid="B134" ref-type="bibr">Rabiner and Juang, 1993</xref>
;
<xref rid="B60" ref-type="bibr">Gold et al., 2011</xref>
). The early history of speech recognition was focused largely on feature detection, which is ill-posed due to both the complexity of speech and the presence of other interfering sounds and background noise. The use of hidden Markov models, which integrate temporal context, and statistical learning and inference methods allowed for much more accurate recognition even though the low-level feature representations remained crude and ambiguous. The best systems have proved to be those with the best prior models (
<xref rid="B39" ref-type="bibr">Darwin, 2008</xref>
). Recent speech systems became the first to surpass human performance in a specialized recognition task involving simultaneous masking speech (
<xref rid="B37" ref-type="bibr">Cooke et al., 2010</xref>
). Inference in these models is hierarchical: each level of features, phonemes, words, and sentences tries to deduce the most probable sequence of uncertain elements using both
<italic>a priori</italic>
knowledge (such as the voice, vocabulary, and grammar) and ongoing contextual information.</p>
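<p>As a concrete illustration of this kind of inference, the minimal sketch below (ours, not drawn from any of the systems cited above) implements the standard Viterbi recursion that underlies HMM-based recognizers: at each time step, ambiguous local evidence (emission probabilities) is combined with prior sequential knowledge (transition probabilities) to recover the most probable hidden sequence, e.g., of phonemes.</p>
<preformat>
import numpy as np

def viterbi(obs, log_A, log_B, log_pi):
    """Most probable hidden-state sequence of an HMM (toy sketch).

    obs    : sequence of observation indices
    log_A  : log transition matrix, log_A[i, j] = log p(state j | state i)
    log_B  : log emission matrix,   log_B[i, k] = log p(obs k | state i)
    log_pi : log initial-state distribution
    """
    n_states, T = log_A.shape[0], len(obs)
    delta = np.zeros((T, n_states))           # best log-score ending in each state
    psi = np.zeros((T, n_states), dtype=int)  # back-pointer to best predecessor
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A      # predecessor-to-state scores
        psi[t] = np.argmax(scores, axis=0)
        delta[t] = scores[psi[t], np.arange(n_states)] + log_B[:, obs[t]]
    path = [int(np.argmax(delta[-1]))]              # trace back the best path
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return path[::-1]
</preformat>
<p>The same principle, propagating prior structural knowledge through time to disambiguate noisy local features, operates at every level of the hierarchy, from phonemes to words to sentences.</p>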
<p>Ill-posed inference problems can also be approached sequentially where information is actively gathered, as in active perception. In robotics, a well-studied problem domain is simultaneous localization and mapping (SLAM;
<xref rid="B148" ref-type="bibr">Thrun et al., 2005</xref>
). Here, a robot must use its sensors (typically distance sensors like sonar or laser scanners) to both determine its location in the environment and map out the environment itself. The problem is ill-posed because both the initial position of the robot and the structure of the scene are unknown, and (due to noise) neither the sensors nor the actuators provide precise information about distance or movements. In spite of these challenges, probabilistic approaches have been successful in real-world domains by using statistical inference techniques to build up an accurate representation of the scene from multiple samples and intelligently probe the scene to resolve ambiguity.</p>
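<p>The core of such probabilistic approaches is a recursive alternation of noisy measurement and noisy motion updates. The sketch below is a toy histogram filter over a hypothetical one-dimensional corridor with a known map, so it illustrates only the localization half of SLAM, and the sensor and motion probabilities are invented for illustration; it shows how an initially uniform belief over position sharpens as evidence accumulates.</p>
<preformat>
import numpy as np

world = np.array([1, 0, 0, 1, 0, 0, 0, 1, 0, 0])  # 1 = door, 0 = wall (known map)
belief = np.ones(len(world)) / len(world)          # uniform prior over position

def sense(belief, measurement, p_hit=0.8, p_miss=0.2):
    """Bayes update: reweight positions consistent with the door/wall reading."""
    likelihood = np.where(world == measurement, p_hit, p_miss)
    posterior = belief * likelihood
    return posterior / posterior.sum()

def move(belief, p_exact=0.8, p_slip=0.1):
    """Noisy motion model: intended one step right, with under/overshoot."""
    return (p_exact * np.roll(belief, 1)      # moved exactly one cell
            + p_slip * np.roll(belief, 2)     # overshot by one cell
            + p_slip * belief)                # failed to move

for measurement in [1, 0, 0]:                 # sense a door, then two walls
    belief = sense(belief, measurement)
    belief = move(belief)
# belief now concentrates on positions consistent with the measurement history
</preformat>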
<p>Computational approaches to scene analysis inference problems have been successful when they have good prior models of how scenes are generated, which allows accurate interpretation of what would otherwise be highly ambiguous local features. Could they provide insight into biological systems? So far, we would argue they have not. One problem is that these algorithms have focused only on the computational problem and have not been formulated in a way that makes it possible to draw correspondences with neural systems. Another problem is ecological relevance: the choice of problems was not motivated by trying to understand animal behavior, but rather by specific technological goals. Often these problems are defined in narrow settings and are highly simplified to make them tractable (e.g., the blocks world), whereas biology must employ solutions that work in the full complexity of the natural environment. This type of robustness has remained elusive for the majority of computational approaches.</p>
</sec>
<sec>
<title>Segmentation and grouping</title>
<p>Computationally, grouping can be viewed either as a problem of grouping the correct features or of finding the correct segmentation. For sounds, the problem is to separate mixed sound sources or group features of a single source, as in auditory streaming. This is a difficult problem because in general there are a combinatorial number of groupings. Nevertheless there have been significant advances in developing computational algorithms to find an optimal partitioning of a complex scene from low-level features (
<xref rid="B137" ref-type="bibr">Shi and Malik, 2000</xref>
;
<xref rid="B151" ref-type="bibr">Tu and Zhu, 2002</xref>
;
<xref rid="B97" ref-type="bibr">Martin et al., 2004</xref>
). Most approaches, however, yield a segmentation in terms of the 2D image, which does not reliably provide information about the scene
<italic>per se</italic>
. For speech and audio, computational auditory scene analysis models based on auditory grouping cues have improved recognition (
<xref rid="B37" ref-type="bibr">Cooke et al., 2010</xref>
), although doing so for natural sounds in real acoustic environments remains a challenge. Techniques such as blind source separation (
<xref rid="B13" ref-type="bibr">Bell and Sejnowski, 1995</xref>
;
<xref rid="B72" ref-type="bibr">Hyvarinen et al., 2004</xref>
) can de-mix arbitrary sounds but only under very restrictive assumptions. None of these approaches, however, match the robustness of biological systems.</p>
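<p>The restrictiveness of these assumptions is easy to see in a minimal example. The sketch below (a toy demonstration using the FastICA implementation in scikit-learn; the signals and mixing matrix are invented) de-mixes two synthetic sources recorded by two sensors, but it succeeds only because the mixture is instantaneous, linear, noise-free, and has as many sensors as sources, conditions rarely met in natural acoustic scenes.</p>
<preformat>
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 1, 8000)
sources = np.c_[np.sin(2 * np.pi * 5 * t),            # source 1: sinusoid
                np.sign(np.sin(2 * np.pi * 3 * t))]   # source 2: square wave
mixing = np.array([[1.0, 0.5],
                   [0.4, 1.0]])
observed = sources @ mixing.T                          # two "microphone" channels

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(observed)  # sources, up to permutation and scaling
</preformat>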
<p>Segmentation is often considered to be a prerequisite for recognition (discussed in more detail below), but that need not be the case. An alternative approach, popular in the machine vision community, bypasses explicit segmentation by identifying a sparse set of “keypoints” – features that are both informative and invariant under different scenes or views of the object (
<xref rid="B93" ref-type="bibr">Lowe, 1999</xref>
,
<xref rid="B94" ref-type="bibr">2004</xref>
). With a good set of keypoint features, it is possible to match them against a database to do recognition that is robust to changes in scale, rotation, and background. An analogous approach in speech and audio recognition is “glimpsing” (
<xref rid="B103" ref-type="bibr">Miller and Licklider, 1950</xref>
;
<xref rid="B36" ref-type="bibr">Cooke, 2006</xref>
). Here, instead of attempting to separate the source from the background by auditory stream segmentation, one attempts to identify spectro-temporal regions where the target source is unaffected by the background sounds. This can be effective when both signals are sparse, such as in mixed speech, and when only a subset of features is necessary for recognition.</p>
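<p>In its idealized form, glimpsing can be expressed as a local signal-to-noise criterion over a spectro-temporal representation, essentially the “ideal binary mask” used in missing-data recognition research. The sketch below is our illustration of that idea (the function name and the 3 dB margin are arbitrary choices); unlike a real listener, it assumes the target and background spectrograms are known separately.</p>
<preformat>
import numpy as np

def glimpse_mask(target_spec, background_spec, margin_db=3.0):
    """Binary mask marking spectro-temporal 'glimpses' of the target.

    target_spec, background_spec : magnitude spectrograms (freq x time),
    available separately here for illustration only. A cell counts as a
    glimpse when the target exceeds the background by margin_db.
    """
    eps = 1e-12
    local_snr_db = 20 * np.log10((target_spec + eps) / (background_spec + eps))
    return local_snr_db > margin_db

# Recognition would then use only the glimpsed cells of the mixture
# spectrogram, with missing-data techniques handling the rest.
</preformat>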
</sec>
<sec>
<title>Object recognition</title>
<p>Any given scene contains a multitude of objects, so the process of scene analysis is often interrelated with object recognition. In computer vision, object recognition is usually treated as a labeling problem in which each object within a scene is assigned a semantic label and a bounding box that specifies its 2D location in the image. Object recognition is sometimes generalized to “scene understanding” in the field of computer vision, where the task is to segment and label all the different parts of the scene, i.e., all the objects and background, often in a hierarchical manner. The standard approach to solving recognition problems is based on extracting 2D image features and feeding them to a classifier, which outputs the object category. Despite some degree of success over the past decade, these methods have not provided much insight into object recognition and scene analysis as it occurs in animals. One problem is that the task of recognition has been defined too narrowly – primarily as one of mapping pixels to object labels. A label, however, is not sufficient to drive behavior. Many behaviors require knowing an object’s 3D location, pose, how it is situated within the scene, its geometric structure (shape), and other properties needed for interacting with the object.</p>
<p>Another problem with casting recognition as a labeling or categorization problem is that the issue of representation is rarely addressed. Animals, including humans, likely recognize objects using representations that encode the 3D structure of objects in some type of viewpoint invariant form (
<xref rid="B16" ref-type="bibr">Biederman, 1987</xref>
;
<xref rid="B44" ref-type="bibr">Edelman, 1999</xref>
;
<xref rid="B77" ref-type="bibr">Kersten et al., 2004</xref>
). Recent research in the computer vision community has begun to form 3D representations directly using laser scanners (
<xref rid="B62" ref-type="bibr">Gould et al., 2008</xref>
). This improves recognition rates and makes segmentation easier but falls far short of biological relevance, because the representation of 3D structure is still based on point clouds in Euclidean space. Biological representations are likely to be adapted to the structure of natural shapes but currently we do not have models of such representations. Such models could provide testable hypotheses and valuable insights into the nature of the structural representations in biological systems.</p>
</sec>
</sec>
<sec>
<title>SUMMARY</title>
<p>What emerges from the discussion above is that scene analysis in natural settings encompasses several types of computational problems. While one cannot give a precise definition, it is possible to identify some common principles, which we will explore and develop further below. The first is the importance of solving ill-posed problems, which is very different from the paradigm of (simple) feature extraction underlying many current approaches. Extracting specific information from a complex scene is inherently ambiguous and is only soluble with strong prior information. Another is the importance of grounding representations in 3D space – as opposed to the intrinsic coordinate system of the sensory array – in a manner that drives behavior. Scene analysis must also actively integrate information over time, e.g., by directing the eyes in order to gather specific information for the task at hand, as studied by
<xref rid="B87" ref-type="bibr">Land and Tatler (2009)</xref>
. Below we explore these issues further in the context of the scene analysis behaviors of a diverse range of animals.</p>
</sec>
</sec>
<sec>
<title>SCENE ANALYSIS IN ANIMALS</title>
<p>Against this backdrop of current perceptual and computational approaches to scene analysis, let us now examine the actual problems faced by animals in the natural environment. We examine four animals in particular that highlight the scene analysis problems solved in different modalities: vision, audition, echolocation, and electrolocation. Because these animals must survive in complex environments, they must have developed robust solutions to problems in scene analysis (
<xref rid="B12" ref-type="bibr">Bee and Micheyl, 2008</xref>
). Thus,
<italic>studying these and other animals provides a means to learn what is required for scene analysis in the natural environment</italic>
. What problems do animals need to solve to carry out their natural behaviors? What are their limits and capabilities, the strategies they use, and the underlying neural circuits and representations?</p>
<p>While the field of neuroethology has long studied a wide range of animal systems, issues relevant to scene analysis have received much less attention. A basic premise of neuroethology is to study a system that is specialized in a behavior of interest, e.g., sound localization in barn owls, animals that forage by listening to the sounds generated by prey. A rich history of research has revealed much about basic perceptual cues and their underlying neural correlates (e.g., timing and intensity differences in sound localization), but, as with many systems, issues in scene analysis have only begun to be addressed. Most studies are designed to manipulate perceptual cues in isolation, under conditions that do not require scene analysis, and it has yet to be determined whether such results generalize to more complex natural settings. The history of computational approaches discussed above would suggest that they do not, because the introduction of scene analysis opens a whole new class of problems. Some behaviors can be guided by a few simple cues, e.g., instinctive “key” stimuli eliciting specific responses; others are more complex and require sophisticated analysis. The animals we have chosen provide concrete examples of systems that have to solve difficult problems in scene analysis. We do not yet know the extent to which animals are able to perform scene analysis, because key investigations have yet to be conducted.</p>
<p>For each animal model described below, we consider the range of problems they need to solve, and the extent to which current findings inform us about these animals’ perception of natural scenes and general issues in scene analysis. The point we pursue is a neuroethological one, namely that commonalities in diverse systems can often provide insight into fundamental problems that are of broad relevance.</p>
<sec>
<title>VISUAL SCENE ANALYSIS IN THE JUMPING SPIDER</title>
<p>The jumping spider (
<bold>Figure
<xref ref-type="fig" rid="F1">1A</xref>
</bold>
) exhibits a wide variety of visually mediated behaviors that exemplify many of the key problems of scene analysis. In contrast to other spiders, which use a web to extend their sensory space, the jumping spider relies mainly upon its highly elaborate visual system to scan the environment and localize prey, to recognize mates, and to navigate complex 3D terrain. In fact, it exhibits many of the same attentional behaviors as predatory mammals (
<xref rid="B85" ref-type="bibr">Land, 1972</xref>
;
<xref rid="B73" ref-type="bibr">Jackson and Pollard, 1996</xref>
).</p>
<fig id="F1" position="float">
<label>FIGURE 1</label>
<caption>
<p>
<bold> (A)</bold>
Jumping spider (
<italic>Habronattus</italic>
),
<bold>(B)</bold>
jumping spider visual system, showing antero-median, antero-lateral, and posterior-lateral eyes (A,B from Tree of Life, Copyright 1994 Wayne Maddison, used with permission).
<bold>(C,D)</bold>
Orienting behavior of a 1-day-old jumping spider stalking a fruit fly. Adapted from video taken by Bruno Olshausen and Wyeth Bair.</p>
</caption>
<graphic xlink:href="fpsyg-05-00199-g001"></graphic>
</fig>
<p>The visual system consists of four pairs of eyes (
<bold>Figure
<xref ref-type="fig" rid="F1">1B</xref>
</bold>
): one pair of front-facing principal eyes (antero-median eyes) provides a high-resolution image over a narrow field of view, while the other three pairs provide lower-resolution images over a wider field of view and are mounted on different parts of the head so as to provide 360° coverage of the entire visual field (
<xref rid="B81" ref-type="bibr">Land, 1985</xref>
). Interestingly, the retinae of the antero-median eyes are highly elongated in the vertical direction so as to form a 1D image array. These retinae move from side to side within the head in a smooth (approximately 1 Hz) scanning motion that sweeps the 1D image arrays across the object of interest (
<xref rid="B84" ref-type="bibr">Land, 1969</xref>
). The jumping spider uses its wide-field, low resolution system to detect moving targets or objects of interest, and then orients its body to focus the high resolution antero-median eyes for more detailed spatial analysis via scanning. In mammals these dual aspects of visual function are built into a single pair of eyes in which resolution falls off with eccentricity, whereas in the jumping spider they are subdivided among different eyes. Note that such multiple eye arrangements are not unique to jumping spiders, but can be found in other animals such as the box jellyfish, which has a total of 24 eyes surveying different parts of the scene (
<xref rid="B127" ref-type="bibr">Nilsson et al., 2005</xref>
;
<xref rid="B128" ref-type="bibr">O’Connor et al., 2009</xref>
). The fact that invertebrates with limited brain capacity “transfer” the scene coverage to eyes with different optical properties is strong evidence of the importance of sensor fusion. Scene analysis is not simply a matter of looking at
<italic>the</italic>
<italic>image</italic>
that falls upon the retina; rather, the brain must assemble disparate bits and pieces of sensory information (in this case, from different eyes) into a coherent representation of the external environment that is sufficient to drive actions.</p>
<sec>
<title>Active perception, segmentation, and tracking during prey capture</title>
<p>Jumping spiders feed mainly upon flies and other insects or spiders. Hunting behavior consists of three steps: (1) a moving object is detected and elicits an orienting response (see
<bold>Figures
<xref ref-type="fig" rid="F1">1C,D</xref>
</bold>
). (2) The target object is then analyzed by the high-resolution, antero-median eyes by scanning. If the object moves during this period the antero-median eyes will also move to track it in a smooth pursuit motion. (3) If the target object is determined to be potential prey, the jumping spider will then engage in a stalking behavior in which it slowly advances forward, crouched to the ground, presumably to avoid detection, prior to raising its front legs and pouncing on the object. Thus, the jumping spider has different behavioral states that dramatically alter its perceptual processing and consequent actions.</p>
<p>These behaviors illustrate an active process of scene analysis, whereby the high resolution eyes – which have narrow field of view – are steered toward a salient item detected by the other eyes. As the object moves, the head and eyes move as appropriate to track the item and keep it in the field of view for high resolution analysis. The scanning motion of the antero-median retinae is used to determine what the object is (see below) and elicit the appropriate action. For a prey item, the spider must estimate the distance for pouncing (possibly using the antero-lateral eyes which have binocular overlap). For all of these tasks,
<italic>the prey item must be appropriately separated from the background which is likely to be highly cluttered and contain other moving objects</italic>
. The use of multiple eyes to mediate one coherent set of actions also illustrates an integrative process of scene analysis, whereby information from different sources (in this case, different eyes) is combined toward a common goal. How this is accomplished is not known, but the neuroanatomy shows that while each eye has its own ganglion for initial processing, the signals from these different ganglia eventually converge within the jumping spider’s central brain.</p>
</sec>
<sec>
<title>Object recognition in mate selection</title>
<p>Jumping spiders exhibit a stereotypical courtship behavior in which the male performs a “dance” – consisting of waving the legs or moving from side to side in a specific manner – to attract the attention of a female and gain acceptance prior to copulation. It has been shown that purely visual cues are sufficient to induce these dances.
<xref rid="B43" ref-type="bibr">Drees (1952)</xref>
used a collection of line drawings to find the nominal visual cues necessary to induce courtship behavior and found that the most effective stimulus was in fact a drawing that depicts the most salient features of a spider – a body with legs. It has also been shown that the video image of a conspecific female is sufficient to trigger courtship behavior, and the video image of a conspecific male performing a dance is sufficient to elicit receptivity behavior in the female (
<xref rid="B34" ref-type="bibr">Clark and Uetz, 1990</xref>
).</p>
<p>These studies demonstrate that the male jumping spider performs complex visual pattern recognition in order to detect the presence and assess the suitability of the female. Females must be capable of at least as much sophistication as they must also perceive and recognize the dance movements of the male. Each party must also maintain its attention during this interaction by holding its gaze (antero-median eyes) on the other. Importantly, since the image of each spider subtends a 2D area, the scanning motion of the 1D retinae is crucial to recognition. It has also been observed that the range of scanning is matched to the visual extent of the target (
<xref rid="B84" ref-type="bibr">Land, 1969</xref>
). This scanning strategy again exemplifies an active process of scene analysis, whereby the representations necessary for recognition are built up by moving the sensor array across the scene. It also exemplifies an integrative process, whereby the time-varying photoreceptor activities that result from scanning are accumulated into a stable representation of the object of interest. The image of potential mates must also be separated or disentangled from the background to be seen. Many jumping spiders have colorful markings, presumably for this purpose, but under what conditions of background clutter or occlusion, or under what variations of distance and pose, spiders can be successfully recognized has not been investigated.</p>
</sec>
<sec>
<title>3D scene analysis during spatial navigation</title>
<p>Jumping spiders hunt in complex 3D environments in which there may not be a straightforward path for reaching a targeted prey item. When hunting within foliage, for example, the spider may find and localize prey on another branch or object that is not within direct jumping reach, but which requires taking a detour. This detouring behavior has been studied extensively in the species
<italic>Portia fimbriata</italic>
. It appears that these spiders are capable of analyzing complex 3D layouts in their environment that allow them to plan and execute the proper route to reach prey, even when it requires initially moving away from the target.
<xref rid="B145" ref-type="bibr">Tarsitano and Jackson (1994)</xref>
studied this behavior by placing a prey item in one of two trays that are reachable only by traversing a bent metal rod rising from the ground plane (e.g., as if perched on the leaf of a plant which can only be reached by climbing up the stalk of the plant). The spider is then placed on a pedestal in the center of the arena and begins scanning its environment from this position by repeatedly fixating its principal eyes on objects in its environment. It then commences movement down from its pedestal to the ground plane, and then toward the rod that leads to the prey item irrespective of whether it is in the opposite direction or on the opposite side of the arena. The spider continues its pursuit even though the prey item is no longer visible once the spider moves to the ground. It has also been shown that while en route to a prey item, a jumping spider will occasionally re-orient toward the item by turning its head, and that the angle of these re-orienting turns matches the correct, updated position of the prey item given the distance traveled (
<xref rid="B67" ref-type="bibr">Hill, 1979</xref>
).</p>
<p>These behaviors illustrate another important process of scene analysis, which is the ability to form persistent representations of the 3D layout of a scene appropriate for path planning and navigation. The jumping spider must identify not only the target and its direction, but also the ground plane and traversable objects that lead to the target. Planning and executing a movement to the target requires spatial memory and dynamic updating in order to stay on an appropriate path, even when the goal is not in sight and no longer in the original direction seen.</p>
<p>It is impressive – perhaps even surprising – that such cognitive abilities can be found in a “simple” animal. However, when one considers the complexity of the dynamic natural environment and what is required for robust behavior, these scene analysis capabilities are clearly essential. Importantly, these abilities lie far beyond what can be achieved by modern computer vision or robotic systems, especially in terms of robustness, which is a testament to the complexity of the computational problems that must be solved.</p>
</sec>
</sec>
<sec>
<title>AUDITORY SCENE ANALYSIS IN SONGBIRDS</title>
<p>Birdsong serves multiple functions, including mate attraction, mate and species recognition, and territorial defense (
<xref rid="B5" ref-type="bibr">Ball and Hulse, 1998</xref>
;
<xref rid="B164" ref-type="bibr">Wiley, 2000</xref>
;
<xref rid="B95" ref-type="bibr">Marler and Slabbekoorn, 2004</xref>
;
<xref rid="B31" ref-type="bibr">Catchpole and Slater, 2008</xref>
), all of which require the analysis of complex acoustic scenes. Songbirds communicate over long distances (50–200 m) in noisy environments, and although the acoustic structure of birdsong is adapted to stand out from the background (
<xref rid="B141" ref-type="bibr">Sorjonen, 1986</xref>
;
<xref rid="B123" ref-type="bibr">Nelson, 1988</xref>
,
<xref rid="B124" ref-type="bibr">1989</xref>
;
<xref rid="B133" ref-type="bibr">Pohl et al., 2009</xref>
), the acoustic scene is often cluttered with many types of animal vocalizations (
<xref rid="B165" ref-type="bibr">Wiley, 2009</xref>
) – in some rain forests, there can be as many as 400 species of birds in a square kilometer (
<xref rid="B31" ref-type="bibr">Catchpole and Slater, 2008</xref>
). Similar auditory scene analysis problems are solved by other animals: king penguin chicks,
<italic>Aptenodytes patagonicus</italic>
, use vocal cues to recognize and locate their parents in dense colonies (
<xref rid="B3" ref-type="bibr">Aubin and Jouventin, 1998</xref>
); female frogs face similar acoustic challenges during mate selection (
<xref rid="B48" ref-type="bibr">Feng and Schul, 2006</xref>
;
<xref rid="B54" ref-type="bibr">Gerhardt and Bee, 2006</xref>
). These and other examples in animal communication have many parallels to the classic cocktail party problem, i.e., recognizing speech in complex acoustic environments (
<xref rid="B32" ref-type="bibr">Cherry, 1953</xref>
;
<xref rid="B23" ref-type="bibr">Brémond, 1978</xref>
;
<xref rid="B70" ref-type="bibr">Hulse et al., 1997</xref>
;
<xref rid="B10" ref-type="bibr">Bee and Klump, 2004</xref>
;
<xref rid="B12" ref-type="bibr">Bee and Micheyl, 2008</xref>
).</p>
<sec>
<title>Scene analysis in territorial defense</title>
<p>Auditory scene analysis is crucial for songbirds in one of their primary behaviors: territorial defense. Songs are used as acoustic markers that serve as warning signals to neighboring rivals. From acoustic cues alone, songbirds must keep track of both the identities and positions of their territorial neighbors to recognize when a rival has trespassed (
<bold>Figure
<xref ref-type="fig" rid="F2">2</xref>
</bold>
). Localization accuracy in both direction and distance is important: a bird that fails to fight off an invader risks losing ground, while an excess of false alarms or poor estimates wastes time and energy.</p>
<fig id="F2" position="float">
<label>FIGURE 2</label>
<caption>
<p>
<bold>Territorial prospecting.</bold>
Songbirds use song as acoustic territorial markers to serve as a warning to potential invaders and rely on sound for locating other birds in complex acoustic scenes and natural environments. The black dots indicate the positions of established singing territorial males, most of which would be audible from any one position, in addition to numerous other sounds. The black line shows the prospecting path of a translocated and radio-tagged male nightingale. Hatched areas indicate reed, bushes, or woods separated by fields and meadows. Figure from
<xref rid="B114" ref-type="bibr">Naguib et al. (2011)</xref>
which is based on data from
<xref rid="B1" ref-type="bibr">Amrhein et al. (2004)</xref>
.</p>
</caption>
<graphic xlink:href="fpsyg-05-00199-g002"></graphic>
</fig>
<p>The accuracy and robustness of species-specific song recognition and localization can benefit from higher-level knowledge of song structure. Songbirds can localize acoustic sources in natural habitats accurately even in noisy environments (
<xref rid="B79" ref-type="bibr">Klump, 2000</xref>
). The precise acoustic cues songbirds use remain unclear. Their small head size provides minimal timing and intensity differences normally used for lateralization judgments, and the reliability of these cues does not predict their level of accuracy in natural settings (
<xref rid="B121" ref-type="bibr">Nelson and Stoddard, 1998</xref>
). One explanation is that the birds can make use of higher-level knowledge to help disambiguate the acoustic cues (
<xref rid="B121" ref-type="bibr">Nelson and Stoddard, 1998</xref>
;
<xref rid="B120" ref-type="bibr">Nelson, 2002</xref>
;
<xref rid="B122" ref-type="bibr">Nelson and Suthers, 2004</xref>
). It is also likely that songbirds make use of an interaural canal to localize sound sources (
<xref rid="B80" ref-type="bibr">Klump and Larsen, 1992</xref>
;
<xref rid="B89" ref-type="bibr">Larsen, 2004</xref>
). Songbirds (and other birds) also exhibit the precedence effect, which may serve to minimize interference from echoes and reverberation (
<xref rid="B41" ref-type="bibr">Dent and Dooling, 2004</xref>
;
<xref rid="B142" ref-type="bibr">Spitzer et al., 2004</xref>
;
<xref rid="B143" ref-type="bibr">Spitzer and Takahashi, 2006</xref>
). Sound localization in songbirds may employ all of these mechanisms and requires some degree of scene analysis, because the acoustic cues for localization must be separated from the clutter of other sounds and the background noise.</p>
<p>Estimating the distance of singing birds is difficult because it is not clear if this information can be derived from generic acoustic cues (
<xref rid="B108" ref-type="bibr">Morton, 1986</xref>
;
<xref rid="B115" ref-type="bibr">Naguib and Wiley, 2001</xref>
;
<xref rid="B25" ref-type="bibr">Brumm and Naguib, 2009</xref>
). To judge the distance of a conspecific, a bird must assess the level of degradation in a song (in terms of frequency-dependent attenuation and reverberation) after it has propagated through the environment. This suggests that songbirds make use of higher-level knowledge, and there is evidence that songbirds are more accurate in ranging familiar songs (
<xref rid="B108" ref-type="bibr">Morton, 1986</xref>
,
<xref rid="B106" ref-type="bibr">1998a</xref>
,
<xref rid="B107" ref-type="bibr">b</xref>
;
<xref rid="B139" ref-type="bibr">Shy and Morton, 1986</xref>
;
<xref rid="B163" ref-type="bibr">Wiley, 1998</xref>
;
<xref rid="B109" ref-type="bibr">Morton et al., 2006</xref>
).</p>
<p>A representation of the territorial space is necessary for locating and tracking the positions of other songbirds, and it must also properly account for the bird’s own orientation and movement within the territory. Songbirds can remember the spatial location of an intruder and can accurately estimate its range even after playback has ended (
<xref rid="B109" ref-type="bibr">Morton et al., 2006</xref>
). This representation is also likely to be integrative because conditions are noisy, and it is not always possible to localize accurately from a single instance of song. Any form of triangulation from multiple instances of a rival song from different locations would also require an integrative representation. There is evidence that songbirds experience spatial unmasking when overlapping sounds arrive from different directions (
<xref rid="B42" ref-type="bibr">Dent et al., 2009</xref>
), suggesting that they can perform scene analysis using both acoustic features and spatial location of sound sources.</p>
</sec>
<sec>
<title>Song recognition and auditory source separation</title>
<p>Songbirds recognize the songs of their territorial neighbors, responding more aggressively to unfamiliar strangers (
<xref rid="B11" ref-type="bibr">Bee, 2006</xref>
), and they retain this ability in the presence of acoustic clutter such as the songs of other birds (
<xref rid="B70" ref-type="bibr">Hulse et al., 1997</xref>
;
<xref rid="B166" ref-type="bibr">Wisniewski and Hulse, 1997</xref>
;
<xref rid="B2" ref-type="bibr">Appeltants et al., 2005</xref>
). This also requires a representation of song structure, which is likely to be learned, as European starlings show persistent memory for both tonal signals and amplitude modulations (
<xref rid="B172" ref-type="bibr">Zokoll et al., 2007</xref>
). Starlings also show long-term memory for individuals (
<xref rid="B19" ref-type="bibr">Braaten, 2000</xref>
), suggesting that their representation of acoustic targets (i.e., other songs) is highly adaptive.</p>
<p>A number of studies have investigated the extent to which songbirds can process song in the presence of background noise and interfering acoustic clutter.
<xref rid="B23" ref-type="bibr">Brémond (1978)</xref>
used speakers to broadcast conspecific songs of wrens within their territorial boundaries. The normal territorial response was largely unaffected when the songs were masked with a variety of equally loud stimuli, including heterospecific songs, a stimulus composed of randomized 50 ms fragments of conspecific song, or a mixture of eight natural wren songs. None of the maskers presented in isolation elicited a response, suggesting that the wrens were adept at identifying an intruding song even in the presence of a significant amount of acoustic clutter. These abilities do have limits, however. An earlier study of species recognition by
<xref rid="B22" ref-type="bibr">Brémond (1976)</xref>
found that Bonelli’s warblers could not identify their own song when it was masked with inverted elements from the same song.</p>
<p>A series of experiments by Hulse and colleagues (
<xref rid="B70" ref-type="bibr">Hulse et al., 1997</xref>
;
<xref rid="B71" ref-type="bibr">Hulse, 2002</xref>
) showed that European starlings could accurately recognize conspecific song (demonstrated by key pecking) in the presence of a variety of other songs and a noisy dawn chorus. Importantly, the birds were trained so they never heard the target song type in isolation, i.e., the starling songs were always mixed with a song of another species. The birds then had to compare this pair to another mixed song pair of two different species. Not only could the birds accurately classify the song pairs that contained starling song, but they also generalized with no additional training to song pairs with novel songs and to songs presented in isolation. Further studies (
<xref rid="B166" ref-type="bibr">Wisniewski and Hulse, 1997</xref>
) showed that the starlings were also capable of accurately discriminating song segments from two individual starlings, even when each was masked with song segments from up to four other starlings (
<bold>Figure
<xref ref-type="fig" rid="F3">3</xref>
</bold>
). These studies suggest that the birds were not perceiving the song pairs as fused auditory objects and learning the feature mixtures, but recognized the target song by segregating it from other acoustic stimuli with very similar structure. Similar results have been observed in zebra finches (
<xref rid="B14" ref-type="bibr">Benney and Braaten, 2000</xref>
).</p>
<fig id="F3" position="float">
<label>FIGURE 3</label>
<caption>
<p>
<bold>Auditory source separation by starlings.</bold>
The top panel shows a spectrogram from a 10-s segment of typical starling song (
<italic>Sturnus vulgaris</italic>
). In an experiment by
<xref rid="B166" ref-type="bibr">Wisniewski and Hulse (1997)</xref>
, starlings were trained to discriminate one of 10 song segments produced by Starling A from 10 song segments produced by Starling B. The birds maintained discrimination when the target song was mixed with a novel distractor song from Starling C (middle) and in the presence of four novel conspecific songs (bottom), a feat which human listeners could not do, even after training. Figure after
<xref rid="B166" ref-type="bibr">Wisniewski and Hulse (1997)</xref>
using songs from the Macaulay Library of the Cornell Lab of Ornithology.</p>
</caption>
<graphic xlink:href="fpsyg-05-00199-g003"></graphic>
</fig>
</sec>
<sec>
<title>Active perceptual behaviors</title>
<p>Songbirds also take a variety of actions to facilitate scene analysis (
<xref rid="B25" ref-type="bibr">Brumm and Naguib, 2009</xref>
). They perch in locations and positions that maximize the range of the acoustic signal (while still avoiding predation); they sing more repetitions when there is higher background noise; they also avoid overlapping their songs (
<xref rid="B24" ref-type="bibr">Brumm, 2006</xref>
). These compensatory behaviors are not unique to songbirds, but are also shared by other animals that rely on acoustic communication such as penguins and frogs (
<xref rid="B3" ref-type="bibr">Aubin and Jouventin, 1998</xref>
;
<xref rid="B112" ref-type="bibr">Murphy, 2008</xref>
).</p>
<p>The broad range of songbird behavior carried out in complex acoustic environments strongly suggests that songbirds successfully perform several aspects of scene analysis. There also remains the possibility that songbirds achieve these feats via simpler means. For example, songbirds might recognize song by key features or “glimpsing” (
<xref rid="B103" ref-type="bibr">Miller and Licklider, 1950</xref>
;
<xref rid="B36" ref-type="bibr">Cooke, 2006</xref>
), where it is only necessary to get an occasional unmasked “glimpse” of some part of the song in order to recognize it. This could be tested by controlling the masking songs in a systematic way, but doing so requires a detailed model of the song recognition process, which has remained elusive. It seems more likely that for recognition (and mate selection by females), the acoustic elements of song need to be not only separated from the clutter of other sounds and the noise of the background but also grouped correctly over time. Supporting this is the observation that songbirds exhibit temporal induction of missing song segments (
<xref rid="B20" ref-type="bibr">Braaten and Leary, 1999</xref>
). The intricate structure of birdsong and the growing evidence that songbirds use higher-level structural knowledge suggest that songbirds perform auditory scene analysis and spatial auditory perception in a manner that is analogous to contextual inference and auditory grouping in speech recognition. Furthermore, spatial memory, path planning, and active perception are essential aspects of territorial defense and mate selection. Together these span many general aspects of scene analysis discussed above and highlight new ways songbirds could be studied to gain general insights into scene analysis.</p>
</sec>
</sec>
<sec>
<title>ACTIVE AUDITORY SCENE ANALYSIS IN BATS</title>
<p>Studies of echolocating bats can shed light on general problems of scene analysis that lie far outside the realm of human experience. These animals employ a high-resolution, active sensing system to extract information from the natural environment based on the echoes from their own emitted calls. The features of a bat’s echolocation calls shape the acoustic information it receives to build its auditory scene; therefore, detailed study of sonar call parameters provides direct access to the signals used by an animal to perceive its 3D environment. Importantly, the bat’s active motor adjustments reveal how the animal deals with ill-posed perceptual problems, where echo snapshots carry incomplete and ambiguous information about the natural scene.</p>
<p>The high spatial resolution and active components of bat echolocation offer a special opportunity to (1) identify general principles of scene analysis that bridge hearing and vision, and (2) analyze the very acoustic signals used by bats to represent the natural scene, which can, in turn, inform principles of scene analysis in auditory generalists, including humans. We elaborate below on these key features of echolocation as we summarize empirical findings from the literature.</p>
<p>The bat’s sonar scene consists of echoes reflecting from targets (flying insects, stationary fruit, or other food items) and clutter (vegetation and other objects) and background (the ground). Oftentimes the echoes the bat encounters from a complex scene contain incomplete or ambiguous information about object features and location: (1) cascades of overlapping echoes from targets and clutter may be difficult to assign to corresponding sonar objects, (2) very short duration echolocation calls return information about a dynamic environment within only a restricted slice in time, (3) directional sonar calls return information from a restricted region in space, (4) rapid attenuation of ultrasound limits the operating range of echolocation, and (5) signals produced by nearby bats and other animals can interfere with the processing of echoes, but perhaps also be exploited for extra information. Nonetheless, echolocating bats overcome these challenges to successfully navigate and forage using biological sonar.</p>
<sec>
<title>Spatial scene perception</title>
<p>To successfully track a selected prey item and avoid collision with other objects, the bat must localize and organize complex 3D acoustic information and coordinate this representation with motor planning on very rapid time scales. For example, when a bat is seeking insect prey in the vicinity of vegetation, each sonar call returns echoes from the target of interest, along with echoes from branches, leaves, and other objects in the vicinity (
<bold>Figure
<xref ref-type="fig" rid="F4">4</xref>
</bold>
). The resulting echo streams carry information about the changing distance and direction of objects in space. By integrating information over time, the bat can sort and track target echoes in the midst of clutter. This scene analysis task is aided by the bat’s active control over sonar call design, along with head and pinna position. Some bats, for example,
<italic>Myotis septentrionalis</italic>
(
<xref rid="B104" ref-type="bibr">Miller and Treat, 1993</xref>
) or
<italic>Myotis nattereri</italic>
(
<xref rid="B140" ref-type="bibr">Siemers and Schnitzler, 2004</xref>
) use very short and often extremely broadband frequency-modulated (FM) calls to sharpen the representation of closely spaced objects, which enables them to distinguish prey from clutter. The nasal-emitting bat
<italic>Micronycteris microtis</italic>
(Phyllostomidae) can detect and seize completely motionless prey sitting on leaves; this is among the most difficult auditory segregation tasks in echolocation and may rely on learning and top-down processing (
<xref rid="B52" ref-type="bibr">Geipel et al., 2013</xref>
). In addition, recent psychophysical studies suggest that bats using FM calls may experience acoustic blur from off-axis echoes, due to frequency-dependent directionality of sonar signals and the dependence of auditory response latencies on echo amplitude. This off-axis “blur” could serve to minimize clutter interference in the natural environment (
<xref rid="B8" ref-type="bibr">Bates et al., 2011</xref>
).</p>
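<p>The echo streams depicted in <bold>Figure <xref ref-type="fig" rid="F4">4</xref></bold> rest on two simple relations worth making explicit: an echo arriving with delay τ corresponds to a reflector at range cτ/2 (two-way travel of sound), and a call of duration T overlaps in time with echoes from any reflector closer than cT/2, which is one reason bats shorten their calls as range decreases. A minimal sketch of this arithmetic (assuming a sound speed in air of roughly 343 m/s; the numbers are illustrative, not taken from the cited studies):</p>
<preformat>
SPEED_OF_SOUND = 343.0  # m/s in air, approximately

def range_from_delay(delay_s):
    """Echo delay -> target range via two-way travel time: R = c*tau/2."""
    return SPEED_OF_SOUND * delay_s / 2.0

def min_clear_range(call_duration_s):
    """Closest range whose echo returns only after the call has ended.
    Echoes from anything nearer overlap the outgoing vocalization."""
    return SPEED_OF_SOUND * call_duration_s / 2.0

# An insect at 1 m returns an echo after ~5.8 ms; a 10 ms search call
# would still be sounding, so the bat must shorten its calls on approach.
print(2.0 * 1.0 / SPEED_OF_SOUND)   # delay for a 1 m target, ~0.0058 s
print(min_clear_range(0.010))       # ~1.7 m for a 10 ms call
print(min_clear_range(0.001))       # ~0.17 m for a 1 ms buzz call
</preformat>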
<fig id="F4" position="float">
<label>FIGURE 4</label>
<caption>
<p>
<bold>Bat scene analysis.</bold>
Schematic illustrating how echoes from different objects in the path of the bat’s sonar beam form acoustic streams with changing delays over time.
<italic>Upper panel</italic>
: The cartoon shows a bat pursuing an insect in the vicinity of three trees at different distances. The numbers indicate the positions of the bat and insect at corresponding points in time. Color-coded arcs illustrate an echo from the red tree at position 1 (A) and echoes from the blue and red trees at position 3 (B and C).
<italic>Lower panel</italic>
: Echoes from the insect (thick gray lines) and each of the trees (red, blue, and green) arrive at changing delays (
<italic>x</italic>
-axis) over time (right
<italic>y</italic>
-axis) as the bat flies in pursuit of its prey. Each sonar vocalization (not shown) results in a cascade of echoes from objects in the path of the sound beam, which arrive at different delays relative to vocalization onset. Time to capture proceeds from top to bottom. At time 0, the bat captures the insect. The numbers 1–4 to the left of the
<italic>y</italic>
-axis indicate the times of the corresponding bat and insect positions in the cartoon. The thin horizontal gray lines display the echo returns from successive vocalizations, which change in duration as the bat moves from search to approach to terminal buzz phases (left
<italic>y</italic>
-axis). Echoes are displayed as color-coded open rectangles to illustrate the relative arrival times from the insect and each of the trees. The letters A, B, and C link selected echoes to the arcs in the cartoon above. The duration of echoes, indicated by the width of the rectangles, changes proportionately with the duration of the sonar calls; echoes appear as narrow ridges when call duration is very short during the approach and buzz phases. Note that the delays of echoes from the red tree and blue tree initially decrease over time, and later increase after the bat flies past them. Adapted from
<xref rid="B111" ref-type="bibr">Moss and Surlykke (2010)</xref>
.</p>
</caption>
<graphic xlink:href="fpsyg-05-00199-g004"></graphic>
</fig>
<p>Bats that produce long constant frequency signals combined with short FM sweeps (CF–FM), such as the greater horseshoe bat,
<italic>Rhinolophus ferrumequinum</italic>
, solve the problem of finding prey in dense vegetation by listening for the Doppler frequency and amplitude modulations in echo returns that are introduced by the fluttering wings of insects (
<xref rid="B136" ref-type="bibr">Schnitzler and Flieger, 1983</xref>
;
<xref rid="B157" ref-type="bibr">von der Emde and Schnitzler, 1986</xref>
;
<xref rid="B156" ref-type="bibr">von der Emde and Menne, 1989</xref>
). They can also use this acoustic information to recognize insect prey (
<xref rid="B158" ref-type="bibr">von der Emde and Schnitzler, 1990</xref>
), suggesting that auditory scene analysis by echolocation builds on prior knowledge.</p>
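<p>The flutter cue these CF–FM bats exploit can be captured in a one-line relation: sound reflected off a surface moving with radial velocity v returns at frequency f(c + v)/(c − v), approximately f(1 + 2v/c) for v ≪ c. A minimal sketch of the resulting echo modulation (the call frequency and wingbeat parameters below are illustrative assumptions, not measurements from the cited studies):</p>
<preformat>
import numpy as np

C = 343.0  # speed of sound in air, m/s

def echo_frequency(f_emit, v_radial):
    """Two-way Doppler shift for an echo off a target whose radial
    velocity is v_radial (positive = moving toward the bat)."""
    return f_emit * (C + v_radial) / (C - v_radial)

f_cf = 83e3                                 # CF call near 83 kHz
t = np.linspace(0.0, 0.05, 500)             # 50 ms of echo
wing_velocity = 3.0 * np.sin(2 * np.pi * 50 * t)  # 50 Hz wingbeat, +/-3 m/s
shift = echo_frequency(f_cf, wing_velocity) - f_cf

# The fluttering wing imprints a periodic ~1.5 kHz frequency excursion
# on an otherwise constant-frequency echo -- a cue that distinguishes
# an insect from stationary vegetation.
print(f"peak Doppler excursion: {shift.max():.0f} Hz")
</preformat>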
</sec>
<sec>
<title>Active perception in sonar scene analysis</title>
<p>The fast, maneuverable flight of bats requires not only a detailed representation of the natural scene to discriminate between foreground (prey, conspecifics, and clutter) and background (landscape, ground, large structures like trees, houses, rocks), but also very rapid updates to take into account their own movements, as well as those of the prey and conspecifics. How does the bat accomplish this daunting auditory scene analysis task with ambiguous echo information and on a millisecond time scale? Part of the answer to this question lies in this animal’s adaptive vocal-motor control of sonar gaze and frequency.</p>
</sec>
<sec>
<title>Scene analysis through gaze control</title>
<p>Detailed analyses of the big brown bat’s echolocation behavior have revealed that this animal sequentially scans auditory objects in different directions by moving the axis of its sonar beam, and inspects objects at different distances by making range-dependent adjustments in the duration of its calls (
<xref rid="B55" ref-type="bibr">Ghose and Moss, 2003</xref>
;
<xref rid="B144" ref-type="bibr">Surlykke et al., 2009</xref>
;
<xref rid="B47" ref-type="bibr">Falk et al., 2011</xref>
). Bats also adjust the width of the sonar beam to the situation, using a broader “acoustic field of view” close to clutter and a narrower “long range” beam out in the open, where it is advantageous (
<xref rid="B74" ref-type="bibr">Jakobsen et al., 2013</xref>
). The bat’s active adjustments in the direction and distance of its sonar “gaze” help it resolve perceptual ambiguities in the sonar scene by sampling different regions in space. Sonar beam aim also indicates where in space the bat is attending, and suggests parallels with eye movements and visual gaze (
<xref rid="B86" ref-type="bibr">Land and Hayhoe, 2001</xref>
). This observation also suggests that the bat uses working and short-term memory to assemble a spatial representation of the environment from a series of echo snapshots from different locations (
<xref rid="B110" ref-type="bibr">Moss and Surlykke, 2001</xref>
;
<xref rid="B144" ref-type="bibr">Surlykke et al., 2009</xref>
). It has also been demonstrated that pinna movements of echolocating bats that use CF sonar signals serve to enhance echo information for spatial localization (
<xref rid="B105" ref-type="bibr">Mogdans et al., 1988</xref>
;
<xref rid="B51" ref-type="bibr">Gao et al., 2011</xref>
). Together, the bat’s active control over sonar call features, head direction, and pinna position contributes to solving the computational problem of sorting sounds arriving from different directions and distances.</p>
</sec>
<sec>
<title>Scene analysis through sound frequency control</title>
<p>When bats forage together in groups, they face a “cocktail party” challenge of sorting echoes generated by their own sonar calls from the signals and echoes of neighboring bats. A recent laboratory study investigated this problem by recording the acoustic behavior of pairs of echolocating big brown bats (
<italic>Eptesicus fuscus</italic>
) competing for a single prey item (
<xref rid="B33" ref-type="bibr">Chiu et al., 2009</xref>
). The results of this study show that the bat adjusts the spectral characteristics of its FM calls when flying with conspecifics. Importantly, the magnitude of these adjustments depends on the baseline similarity of the calls produced by the individual bats when flying alone: bats that produce sonar calls with similar spectra (or frequency structure) make larger adjustments than bats whose baseline call designs were already dissimilar. This suggests that simple frequency cues may be sufficient to reduce perceptual ambiguities, and that the separation of frequency features of sonar calls produced by different bats aids each individual in segregating the echoes of its own sonar vocalizations from the acoustic signals of neighboring bats (see
<xref rid="B152" ref-type="bibr">Ulanovsky et al., 2004</xref>
;
<xref rid="B9" ref-type="bibr">Bates et al., 2008</xref>
).</p>
<p>
<xref rid="B68" ref-type="bibr">Hiryu et al. (2010)</xref>
report that big brown bats flying through an array of echo-reflecting obstacles make frequency adjustments between alternating sonar calls to tag time-dispersed echoes from a given sonar call with spectral information. Many other bats normally alternate between frequencies from call to call (
<xref rid="B75" ref-type="bibr">Jung et al., 2007</xref>
). These findings suggest that bats may treat the cascade of echoes following each sonar vocalization as one complete view of the auditory scene. If the integrity of one view is compromised by overlap of one echo cascade with the next, the bat changes its call frequencies to create the conditions for segregating the echoes associated with a given vocalization. This provides additional evidence that the bat actively adjusts signal frequency to resolve ambiguity in assigning echoes from objects at different distances.</p>
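<p>The logic of this frequency tagging can be sketched in a few lines: if successive calls alternate between distinct frequency tags, each returning echo can be attributed to the call whose tag it carries, even when the echo cascades of consecutive calls interleave in time. A toy illustration of the disambiguation step (not a model of bat audition; all numbers are invented for the example):</p>
<preformat>
def assign_echoes(calls, echo_freqs):
    """Attribute each echo to the emitted call with the nearest
    frequency tag. `calls` maps call id -> tag frequency in Hz."""
    assignments = {cid: [] for cid in calls}
    for f in echo_freqs:
        best = min(calls, key=lambda cid: abs(calls[cid] - f))
        assignments[best].append(f)
    return assignments

# Two alternating calls tagged ~2 kHz apart; their echo cascades
# arrive interleaved, but the tags keep the two "views" separate.
calls = {"call_1": 25_000.0, "call_2": 27_000.0}
echoes = [25_050.0, 26_900.0, 24_980.0, 27_100.0]
print(assign_echoes(calls, echoes))
# {'call_1': [25050.0, 24980.0], 'call_2': [26900.0, 27100.0]}
</preformat>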
</sec>
<sec>
<title>Broader perspectives on scene analysis offered by echolocation</title>
<p>Echo-reflecting objects are, in effect, sound sources whose acoustic characteristics are shaped by the bat’s active control over its sonar signals. The bat’s active sensing allows us to directly measure the signals used by an animal to resolve the perceptual ambiguities that arise in scene analysis problems. Furthermore, the bat’s adaptive adjustments in sonar call direction, intensity, duration, timing, and frequency emphasize the importance of these acoustic parameters to specific scene analysis tasks, and suggest parallel processes for the cues used in natural scene perception by other animals, including humans.</p>
</sec>
</sec>
<sec>
<title>SCENE ANALYSIS IN THE ELECTRIC FISH</title>
<p>As with the echolocating bat, the manner in which electric fish perceive their surrounding environment provides a clear example of scene analysis principles at work, divorced from human introspection. The sensory world of the electric fish consists largely of distortions of its self-generated electric field, in addition to the electric fields generated by other fish (
<xref rid="B125" ref-type="bibr">Nelson, 2011</xref>
). Although these animals are still equipped with a visual system, electroreception has been shown to be the dominant sense for foraging, orientation, and communication. The electrical environment contains targets such as prey items or other fish, which must be detected against complex backgrounds, and the fish must also navigate through complex terrain (see
<bold>Figure
<xref ref-type="fig" rid="F5">5</xref>
</bold>
).</p>
<fig id="F5" position="float">
<label>FIGURE 5</label>
<caption>
<p>
<bold>Scene analysis in electroreception.</bold>
The “electric image” of the external environment is determined by the conductive properties of surrounding objects. The electric field emanates from the electric organ in the tail region (gray rectangle) and is sensed by the electroreceptive skin areas, using two electric “foveas” to actively search and inspect objects. Shown are the field distortions created by two different types of objects: a plant that conducts better than water (green, above) and a non-conducting stone (gray, below). (Redrawn from
<xref rid="B65" ref-type="bibr">Heiligenberg, 1977</xref>
).</p>
</caption>
<graphic xlink:href="fpsyg-05-00199-g005"></graphic>
</fig>
<p>In mormyrids, the electric field is generated by the electric organ residing in the caudal peduncle (tail region), which generates a relatively uniform electric field over the anterior body surface where most electroreceptors are located (
<xref rid="B154" ref-type="bibr">von der Emde, 2006</xref>
). An “electric image” of the external environment is formed on the electroreceptor array according to how physical objects distort the electric field, as shown in
<bold>Figure
<xref ref-type="fig" rid="F5">5</xref>
</bold>
. An object that is a good conductor relative to water will cause electric field lines to bunch up, creating a positive difference in the electric field on the corresponding portion of the electroreceptor array. Conversely, a poor conductor relative to water will cause electric field lines to disperse, creating a negative difference in the electric field pattern on the electroreceptor array.</p>
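<p>These opposite-signed images follow from elementary electrostatics. In the standard small-object approximation, a sphere of radius a in a locally uniform field E0 acts as a dipole whose strength is scaled by the contrast factor χ = (σ_obj − σ_water)/(σ_obj + 2σ_water), positive for good conductors and negative for poor ones; the steep falloff of the image with distance also previews the short operating range noted below. A minimal sketch under this approximation (the conductivities and geometry are illustrative, not drawn from the cited studies):</p>
<preformat>
def electric_image_perturbation(sigma_obj, sigma_water, radius, distance, E0=1.0):
    """Axial field perturbation from a small sphere in a locally
    uniform field E0 (dipole approximation): dE = 2*chi*a^3*E0/r^3.
    chi > 0 for objects more conductive than water (field lines
    bunch up); chi < 0 for insulators (field lines disperse)."""
    chi = (sigma_obj - sigma_water) / (sigma_obj + 2.0 * sigma_water)
    return 2.0 * chi * radius**3 * E0 / distance**3

# A water plant (better conductor than water) vs. a stone (insulator),
# both 1 cm in radius at 3 cm; conductivities in S/m are illustrative.
print(electric_image_perturbation(0.05, 0.01, 0.01, 0.03))   # positive
print(electric_image_perturbation(1e-9, 0.01, 0.01, 0.03))   # negative
</preformat>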
<p>In addition to conductance, the capacitive properties of an object may also be ascertained by how it changes the waveform of the electric organ discharge (EOD). The EOD itself is composed of a series of pulses, each of which has a characteristic waveform, typically less than 1 ms in duration. In mormyrids, a copy of the EOD signal is sent to electrosensory areas of the brain. Thus, it is possible for the animal to directly compare the sensed signal with the one actually generated. An object with low or no capacitance, such as a non-living object, will leave the waveform shape unaffected. Most living objects, however, such as insect larvae, other fish, and plants, possess complex impedances and so will significantly alter the waveform shape, which behavioral studies show is detectable by the animal (
<xref rid="B154" ref-type="bibr">von der Emde, 2006</xref>
).</p>
<p>Due to the high conductivity of water, the range over which the electric fish can sense objects is only a few centimeters. Nevertheless, electroreception mediates a wide range of scene analysis behaviors important to the animal’s survival, which we describe below.</p>
<sec>
<title>Object recognition in electric scenes</title>
<p>The mormyrid’s object recognition and discrimination abilities have been explored through behavioral studies (
<xref rid="B159" ref-type="bibr">von der Emde and Schwarz, 2002</xref>
;
<xref rid="B153" ref-type="bibr">von der Emde, 2004</xref>
;
<xref rid="B155" ref-type="bibr">von der Emde et al., 2010</xref>
). By assessing performance on simple association tasks, it has been shown that electric fish are capable of discriminating the shape of objects (e.g., cube vs. pyramid), even against complex and variable backgrounds. Doing so is non-trivial because the electric fields from multiple objects superimpose, creating a complex electric image on the electroreceptor array. Thus, the animal must solve a figure-ground problem similar to that in vision or audition, in which the sensory contributions of background or clutter must be discounted in order to properly discern an object. Perhaps even more impressive is the fact that the animal can generalize to recognize different shapes independent of their material properties (metal or plastic) or distance, and can discriminate small from large objects irrespective of distance. Thus, the animal is capable of extracting invariances in the environment from the complex electroreceptor activities – i.e., despite variations due to material properties or distance, it can nevertheless make correct judgments about the shape and size of objects.</p>
</sec>
<sec>
<title>Active perception during foraging</title>
<p>When foraging for food, mormyrids utilize their two electric “foveas” in an active manner to search and inspect objects. Each fovea is a high-density region of electroreceptors, one located on the nasal region and the other on the so-called
<italic>Schnauzenorgan</italic>
(
<xref rid="B4" ref-type="bibr">Bacelo et al., 2008</xref>
). Unknown objects are first approached and inspected by the ventral nasal organ, and then more finely inspected by the Schnauzenorgan (
<xref rid="B154" ref-type="bibr">von der Emde, 2006</xref>
). When foraging, the animal engages in a stereotypical behavior in which it bends its head down at 28° such that the nasal fovea points forward or slightly upward, and scans the Schnauzenorgan from side to side across the surface to search for prey. When a prey item is detected (presumably from its capacitive properties), it is inspected by the Schnauzenorgan before the fish sucks it in. Thus, the animal must correctly interpret the highly dynamic patterns of activity on the sensory surface in accordance with this scanning movement in order to properly detect and localize prey. This is an example of an active process demanding the coordination of perception and action.</p>
</sec>
<sec>
<title>Spatial navigation</title>
<p>Mormyrids are frequently seen swimming backward, and they avoid obstacles with ease, finding their way through crevices in rocks (
<xref rid="B92" ref-type="bibr">Lissmann, 1958</xref>
). Presumably these abilities are mediated by the electric sense, since the eyes, which are poorly developed, are at the front of the animal. Mormyrids are also known to navigate at night in complete darkness (
<xref rid="B153" ref-type="bibr">von der Emde, 2004</xref>
). Thus, it would appear that electric fish can obtain a sufficient representation of 3D scene layout from the electric field in order to plan and execute maneuvers around objects. How accurate and what form this representation takes is not known, but it has been shown through behavioral studies that they can judge the distance to an object from the spatial pattern across the electroreceptor array (
<xref rid="B153" ref-type="bibr">von der Emde, 2004</xref>
). One hypothesized mechanism is to calculate the slope-to-amplitude ratio, i.e., the maximal rate of change of the electric image across the skin surface divided by its peak amplitude.</p>
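<p>A minimal sketch of this slope-to-amplitude computation on a simulated one-dimensional electric image (the image model and distances are illustrative; the key point is that a farther object casts a wider, flatter image, so the ratio falls with distance roughly independently of object size or strength):</p>
<preformat>
import numpy as np

def slope_to_amplitude_ratio(image, spacing):
    """Maximal spatial slope of the electric image divided by its
    peak amplitude -- the hypothesized distance cue."""
    slope = np.max(np.abs(np.gradient(image, spacing)))
    return slope / np.max(np.abs(image))

# Toy images of the same object at 2 cm and 4 cm from the skin:
# amplitude falls and width grows with distance d as ~(x^2 + d^2)^-1.5.
x = np.linspace(-0.1, 0.1, 401)            # position along the skin (m)
near = (x**2 + 0.02**2) ** -1.5
far = (x**2 + 0.04**2) ** -1.5
dx = x[1] - x[0]
print(slope_to_amplitude_ratio(near, dx))  # larger ratio (closer object)
print(slope_to_amplitude_ratio(far, dx))   # smaller ratio (farther object)
</preformat>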
</sec>
<sec>
<title>Communication in electric scenes</title>
<p>In addition to sensing distortions in the electric field caused by other objects, electric fish also detect the electric fields generated by other fish. In mormyrids, the waveform of the EOD is used for communicating species, sex, and social status, while sequences of pulse intervals (SPIs) are used for communicating rapidly changing behavioral states and motivation (
<xref rid="B28" ref-type="bibr">Carlson, 2002</xref>
;
<xref rid="B29" ref-type="bibr">Carlson and Hopkins, 2004</xref>
;
<xref rid="B170" ref-type="bibr">Wong and Hopkins, 2007</xref>
). During the breeding season, males of many species have a longer EOD than females and often have a sex-specific waveform. During courtship, they may produce high-frequency bursts termed “rasps,” while during overt aggression they may produce “pulse pairs.” Conditioning experiments demonstrate that they are also able to distinguish individual differences in the EOD, supporting a potential role in individual recognition. Thus, a rich array of structural information regarding the identity and intentions of other animals is available in the EOD, and this structure must be properly extracted and analyzed in order to make appropriate behavioral decisions. Importantly, these signals must be properly separated from the background variations in the electric “scene” used for detecting prey and navigation, in addition to the contributions of multiple surrounding fish.</p>
</sec>
</sec>
</sec>
<sec>
<title>COMMON PRINCIPLES IN NATURAL SCENE ANALYSIS</title>
<p>The animals discussed above exhibit a wide variety of scene analysis capabilities that enable robust behavior in complex environments. What lessons can we draw from these examples to guide our study of scene analysis? One, which we shall expand upon below, is that the ability to extract information from complex, natural scenes is paramount, yet far beyond what is commonly addressed in laboratory studies that simplify the stimulus or task. Another is that all of these abilities still lie far beyond current computational algorithms, which means that we lack essential conceptual frameworks for studying them. Just as the principles of optics guide our study of eyes, principles of information processing –
<italic>most of which have yet to be discovered</italic>
– will be needed to study how scene analysis is carried out in animals.</p>
<p>To distill what we have learned both from the review of current approaches and the discussion of animal capabilities above, we develop a framework around a set of common properties that enable scene analysis in the natural environment:</p>
<list list-type="simple">
<list-item>
<label>1.</label>
<p>The ability to solve
<italic>ill-posed problems</italic>
inherent in extracting scene properties from raw sensory inputs</p>
</list-item>
<list-item>
<label>2.</label>
<p>The ability to optimally
<italic>integrate</italic>
and
<italic>store</italic>
information across time and modality</p>
</list-item>
<list-item>
<label>3.</label>
<p>Efficient recovery and representation of
<italic>3D scene structure</italic>
</p>
</list-item>
<list-item>
<label>4.</label>
<p>Optimal
<italic>motor actions</italic>
that guide the acquisition of information to progress toward behavioral goals</p>
</list-item>
</list>
<p>Our understanding of how each of these is accomplished remains incomplete, but we conjecture that each is an essential aspect of the larger problem of scene analysis. These points are further elaborated below.</p>
<sec>
<title>COMPONENTS OF A NATURAL SCENE</title>
<p>Before delving into the properties of scene analysis, it is useful to first spend some time considering the different components of the scene itself (
<bold>Figure
<xref ref-type="fig" rid="F6">6</xref>
</bold>
). We will define these generically so that they apply across modality and to a wide range of animals and tasks. The
<italic>target</italic>
(black blob) represents an object (or information) in the scene to which the animal’s attention or behavior is directed. This could be a fly for the jumping spider or the song of a rival songbird. It is important to distinguish between the target in the natural scene and what is often dubbed the “sensory stimulus” in the laboratory setting. In a natural setting, the stimulus is presented in the context of the entire sensory scene. The target can also be defined more generally to represent information the animal needs to acquire, e.g., location of the neighboring bird or its fitness. Thus, the target is rarely extracted directly from the sensory input – the necessary information must be
<italic>inferred</italic>
.
<italic>Clutter</italic>
(gray blobs) generically represents false targets or other components of the scene that could be confused with the target, such as other flying insects while the bat is pursuing a moth, or the songs of other birds. This interference vastly increases the complexity of processing because many problems become ill-posed; this is why camouflage is an effective adaptation. In the extreme case of high clutter and sparse stationary targets, animals face a complex search task – a natural scene version of a “Where’s Waldo?” game.</p>
<fig id="F6" position="float">
<label>FIGURE 6</label>
<caption>
<p>
<bold>A schematic framework for scene analysis.</bold>
The arc labeled sensory input represents all the sensory information available to the system, possibly from different modalities. To the left of this arc is an abstract representation of the different components of the external scene: target, clutter, terrain, and background, each of which is processed in different ways depending on the particular scene analysis task. These are depicted as being spatially distinct, but they need not be. To the right of the sensory input arc is the general set of processing components (or stages) underlying biological scene analysis. Each node represents a different level of abstraction and is hypothesized to play a distinct role in the overall system, but need not correspond to distinct brain areas. Not all animals use every component because animals have a range of perceptual capabilities and specializations. Arrows represent the flow of information between components, with a double arrow indicating that information can flow in both directions. The arrow going to the sensory input arc represents the “action” of the system and the output of the largest dynamic loop. The animal’s motor actions advance it toward its behavioral goals, but also change the sensory input in order to gain more information about the scene.
</caption>
<graphic xlink:href="fpsyg-05-00199-g006"></graphic>
</fig>
<p>For most animals, the ability to approach or pursue a target is as important as locating it. For example, the jumping spider must determine how to traverse the local terrain and navigate around obstacles to get within jumping distance of prey. We refer to these scene components generically as
<italic>terrain</italic>
(large blobs). The processing and inference of terrain information is very different from that of the target, which is often confined to a small region of the scene. Terrain, in contrast, tends to be more extended. Successful locomotion depends on extracting sufficient information about the terrain shape for accurate foot placement, or sufficient information about the size and location of obstacles for accurate path and movement planning. In vision, terrain is typically the ground, which is usually stationary, but obstacles could be either fixed or moving, such as swaying branches and foliage during echolocation.</p>
<p>The
<italic>background</italic>
of a scene refers to the remaining structure that is not processed for terrain or for locating the target. While this aspect of the scene might provide useful sensory information for other behavioral functions, from the viewpoint of processing in scene analysis we generally consider the background to be a source of noise that generically degrades the information about the target or terrain, although it can also provide important contextual information that influences perception (
<xref rid="B129" ref-type="bibr">Oliva and Torralba, 2007</xref>
).</p>
</sec>
<sec>
<title>COMPONENTS OF NATURAL SCENE ANALYSIS</title>
<p>We decompose the scene analysis process into a set of general components shown to the right of the sensory arc in
<bold>Figure
<xref ref-type="fig" rid="F6">6</xref>
</bold>
). The diagram takes the form of the standard perception–action cycle (
<xref rid="B160" ref-type="bibr">von Uexküll, 1926</xref>
;
<xref rid="B59" ref-type="bibr">Gibson, 1979</xref>
) with some elaborations for scene analysis. Nodes represent different components of analysis, but these need not correspond to distinct neural substrates.</p>
<p>The first level of processing converts the physical energy arising from the scene into a neural code. At this level, it is not clear to what extent the computations are specific to scene analysis, but there are animals with sensory organs that are adapted to facilitate scene analysis at the periphery, e.g., the estimation of depth from defocus in the jumping spider using a multi-layer retina (
<xref rid="B113" ref-type="bibr">Nagata et al., 2012</xref>
) or the specialization of cell types in the retina (
<xref rid="B49" ref-type="bibr">Field and Chichilnisky, 2007</xref>
;
<xref rid="B61" ref-type="bibr">Gollisch and Meister, 2010</xref>
;
<xref rid="B98" ref-type="bibr">Masland, 2012</xref>
). We use a single arrow out of the signal coding level to indicate that the information flow is largely in one direction, although it is conceivable that in some cases, feedback to the periphery could play a role in scene analysis by efferent control of early sensory processing or transduction (e.g., olivocochlear feedback in the auditory system). Importantly, coding is just an initial step in acquiring data about the environment – it does not make explicit the properties of interest in the scene. For that, further processing is needed.</p>
<sec>
<title>Inference and prior knowledge</title>
<p>The recovery of information about the scene is an ill-posed inference problem that requires some degree of prior knowledge of scene structure. The level of
<italic>intermediate features</italic>
is the first stage where scene components and parameters are disentangled. Properties such as the contours of targets that blend in with their backgrounds, the slant and shape of terrain, or the parts of objects missing due to occlusion are not made explicit at the level of signal coding; they must be
<italic>inferred</italic>
.</p>
<p>Although low-level features such as oriented edge detectors can signal boundaries of surfaces having different luminance values, they do not reliably signal boundaries of complex surface textures, such as the boundary between a tree trunk and background elements in a scene (
<xref rid="B76" ref-type="bibr">Karklin and Lewicki, 2009</xref>
). Similarly, retinal disparity is an unreliable cue for depth because computing the binocular correspondence for complex surfaces is often confounded by false matches or missing correspondences, and this is further compounded by multiple objects, complex surface shapes, and scenes with multiple occlusions. Analogous challenges arise in determining the spatial location of sound sources. Interaural time and intensity differences or the time of arrival of echoes can provide information about the direction or distance of an isolated sound source, but these cues are compromised in more complex acoustic environments, which may have multiple sources, reverberation, and significant levels of background noise.</p>
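<p>The contrast between the clean single-source case and the cluttered case is easy to appreciate with the binaural timing cue. For one source in an anechoic field, Woodworth’s spherical-head approximation maps azimuth to an interaural time difference almost uniquely; with multiple sources and reverberation, the measured delays become mixtures and the inverse problem is no longer well-posed. A sketch of the idealized forward computation (the head radius is an assumed, roughly human value):</p>
<preformat>
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s
HEAD_RADIUS = 0.0875     # m, roughly an adult human head (assumption)

def itd_woodworth(azimuth_rad, r=HEAD_RADIUS, c=SPEED_OF_SOUND):
    """Woodworth approximation of the interaural time difference
    for a distant source at the given azimuth (0 = straight ahead)."""
    return (r / c) * (azimuth_rad + np.sin(azimuth_rad))

for deg in (0, 30, 60, 90):
    itd = itd_woodworth(np.radians(deg))
    print(f"{deg:3d} deg -> ITD {1e6 * itd:6.0f} us")
# 90 deg yields ~655 us, close to the commonly cited human maximum;
# the mapping is invertible only because a single source is assumed.
</preformat>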
<p>As discussed above, solving these types of ill-posed inference problems ultimately depends on higher-level knowledge, but intermediate-level features provide a staging ground at which perceptual units can be organized, e.g., features originating from different sound sources or the formation of distinct representations of visual surfaces (
<xref rid="B7" ref-type="bibr">Barrow and Tenenbaum, 1978</xref>
). In vision, this stage partly corresponds to what is loosely referred to as perceptual organization, segmentation, or grouping. These terms, however, are more often used to describe the perceptual processing of simple 2D or “idealized” stimuli and rarely get at the problem of how elements of the 3D scene, such as objects and surfaces, are extracted from sensory inputs. The complexity of processing at this stage is closer to the kinds of problems that have been investigated in computer vision, such as shape-from-shading or structure-from-motion, which estimate the 3D structure of a scene or objects. To date, however, these problems have no general solutions for natural scenes.</p>
<p>For simpler aspects of scene analysis, it might be possible to go directly from an intermediate-level representation to an action or response (as depicted by the arrow). For more complex tasks, however, feedback in the form of higher-level knowledge is required because the separation of components and the inference of scene parameters at this stage is a highly ill-posed problem in which a multitude of interpretations could be consistent with the sensory features. For example, in the computation of binocular disparity, high-level knowledge of typical 3D structures in the scene makes it possible to arrive at an unambiguous interpretation of local depth cues (
<bold>Figure
<xref ref-type="fig" rid="F6">6</xref>
</bold>
, double arrows).</p>
<p>Higher-level representations can come in many forms, but for our purposes here we single out two general types:
<italic>object memory</italic>
and
<italic>spatial memory</italic>
. Note that we use the term “memory” here in a broader sense to mean implicit knowledge or a representation of object structure, which could be innate or acquired through experience. It also encompasses the computational inference and feedback discussed in the previous section. Object memory includes information such as object shape, e.g., the shape of a conspecific jumping spider, or acoustic characteristics like song structure or the echo signature of a moth. Spatial memory combines input from multiple modalities (or different sources within a modality) to form a more accurate and robust representation of the scene layout, potential target locations, terrain, and obstacles, for example, the daily path taken by a bat to forage (
<xref rid="B150" ref-type="bibr">Tsoar et al., 2011</xref>
) or spatial structure at more proximal scales (
<xref rid="B149" ref-type="bibr">Tommasi et al., 2012</xref>
). The arrow between object and spatial memories indicates that these processes are not necessarily independent and may be mutually informative, e.g., certain targets occur only in certain locations. Note that the level of detail in these representations need only be sufficient to meet the behavioral requirements of the system. Furthermore, they need not bear an isomorphic relationship to object category or 3D spatial structure, but could encode information extracted from the scene in a much reduced dimensionality.</p>
</sec>
<sec>
<title>Integrative representations of scene structure</title>
<p>An essential aspect of the higher-level target and spatial representations is that they are persistent and integrate information over time and across multiple actions. Note that this is not a literal “visual integrative buffer” or low-level visual memory (see
<xref rid="B161" ref-type="bibr">Wade and Tatler, 2005</xref>
for a review and critique of this idea). The integration of information is in the Bayesian sense of combining multiple sources of information to infer underlying structure. It is performed at a higher level and pertains to the basic problem of the inference of scene properties. Without integration and consolidation, the system could only react to whatever sensory information is available at a given instant in time, which is often too ambiguous to drive action. By integrating sensory information over time, the system can build up a representation of the external environment that allows it to more reliably identify objects, more quickly locate targets, or more accurately estimate other aspects of the scene. The integration acts at multiple time scales, from relatively short – e.g., building up continuously from the movements of the sensors and the dynamics of motor actions – to relatively long – e.g., building up a synthesized scene representation from sensory information acquired at different locations (
<xref rid="B83" ref-type="bibr">Land and Furneaux, 1997</xref>
;
<xref rid="B82" ref-type="bibr">Land, 1999</xref>
;
<xref rid="B146" ref-type="bibr">Tatler et al., 2005</xref>
;
<xref rid="B27" ref-type="bibr">Byrne et al., 2007</xref>
;
<xref rid="B46" ref-type="bibr">Epstein, 2008</xref>
;
<xref rid="B168" ref-type="bibr">Wolbers et al., 2008</xref>
,
<xref rid="B169" ref-type="bibr">2011</xref>
).</p>
<p>An example of such integration occurs when locating objects in cluttered environments, which is a basic problem in scene analysis that animals face when pursuing prey or finding mates. The scene usually contains many potential targets, each of which may be weak or ambiguous, so there is strong selective pressure to perform this task efficiently. Doing so requires at least two levels of non-trivial perceptual inference. The first is to accurately estimate the likelihood of the target location so that little time is spent on false targets. The second is to accurately integrate information over time and across actions, so that the new information obtained during prolonged vigilance and after a head or eye movement is combined with the old. This integration of information, past, present, and possibly across modality, contributes to an internal representation of target location in the larger scene. A computational model of such an integrative, inferential memory was developed by
<xref rid="B116" ref-type="bibr">Najemnik and Geisler (2005</xref>
,
<xref rid="B117" ref-type="bibr">2009</xref>
) for optimal visual search, in which a foveated visual system is used to search for a target in noise. Uncertainty about target location increases with eccentricity due to the decrease in ganglion cell density, but each successive saccade provides additional information, which the model integrates to compute a likelihood map of target location. In a real biological system, this type of map would not be 2D, but would represent the 3D scene and factor in feedback from eye, head, and body movements, and potentially information obtained through other modalities such as the auditory soundscape and the movement of the target.</p>
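<p>The core of such a model is Bayesian evidence accumulation: each fixation yields a noisy observation at every candidate location, with noise growing with eccentricity, and the log posterior over target location is the running sum of the log-likelihood ratios. The toy sketch below captures only this integration step, with a simple “fixate the current best guess” rule rather than the ideal-searcher fixation selection of Najemnik and Geisler; all parameters are illustrative:</p>
<preformat>
import numpy as np

rng = np.random.default_rng(0)
n_loc, true_loc = 64, 17
locations = np.arange(n_loc)
log_post = np.zeros(n_loc)              # flat prior over locations

def noise_sd(fixation):
    """Observation noise grows with eccentricity from the fixated
    location (a stand-in for declining ganglion cell density)."""
    return 0.5 + 0.05 * np.abs(locations - fixation)

for _ in range(8):                      # eight fixations
    fix = int(np.argmax(log_post))      # fixate the current MAP guess
    sd = noise_sd(fix)
    obs = (locations == true_loc) + rng.normal(0.0, sd)
    # Gaussian log-likelihood ratio of "target here" (mean 1)
    # vs. "target absent" (mean 0): (obs - 0.5) / sd^2
    log_post += (obs - 0.5) / sd**2

# With enough fixations the posterior typically peaks at the target.
print("MAP location:", int(np.argmax(log_post)), "| true:", true_loc)
</preformat>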
</sec>
<sec>
<title>Inference of 3D scene structure</title>
<p>Animals act in a 3D world, and the representations needed to guide actions such as navigation or visual search must encode many aspects of 3D scene and target structure. As discussed above, psychophysical research has shown that early in perceptual processing the representations used by humans take into account the 3D scene relationships preferentially over 2D patterns (
<xref rid="B119" ref-type="bibr">Nakayama et al., 1995</xref>
;
<xref rid="B91" ref-type="bibr">Lee and Spelke, 2010</xref>
). This has important implications for internal representations, because it implies that information about the scene is represented and processed in a format that encodes the 3D structure of the environment. This representation need not be exhaustive or detailed, and it need not form a “map” of the 3D scene. We would expect such representations to have complex properties, because they must appropriately factor in movements of the eyes, head, ears, and body, as well as motions of other objects in the environment (
<xref rid="B35" ref-type="bibr">Colby and Goldberg, 1999</xref>
;
<xref rid="B101" ref-type="bibr">Melcher, 2011</xref>
;
<xref rid="B147" ref-type="bibr">Tatler and Land, 2011</xref>
). These representations must also integrate information across multiple sensory modalities to form a common representation of the 3D environment that serves multiple behavioral goals such as foraging, pursuit of prey, communication, locomotion, and navigation.</p>
<p>A particularly challenging aspect of spatial representation is that it must remain coherent despite changes in the raw sensory information that occur due to self-motion or the motion of other components of the scene (
<xref rid="B27" ref-type="bibr">Byrne et al., 2007</xref>
;
<xref rid="B102" ref-type="bibr">Melcher and Colby, 2008</xref>
;
<xref rid="B101" ref-type="bibr">Melcher, 2011</xref>
). The problem the animal faces is that although many components of the scene are stable, such as the shape of the terrain or the positions of obstacles, the sensory input rarely is, because the animal and its sensory organs move. Thus, the animal’s actions and behavior must be based on the properties of the scene and not on the fluctuating sensory information. The problem is further complicated for dynamic aspects of the scene, such as moving targets, because the representation must also predict trajectories, although motion can also provide information that makes the target stand out from the background.</p>
<p>The nature of the representation of spatial structure (e.g., whether it is referenced to the scene, the body, or past motor actions) remains an active area of research (
<xref rid="B26" ref-type="bibr">Burgess, 2008</xref>
;
<xref rid="B147" ref-type="bibr">Tatler and Land, 2011</xref>
), and it is not clear how many distinct forms of spatial structure are necessary to subserve scene analysis tasks such as target search, pursuit, locomotion, or path planning. However, the general function of higher-level representations is to transform and integrate the lower-level information and feedback from sensorimotor action representations to form a consistent and cumulative representation of the external scene, which can then drive behavior. Even in tasks such as auditory scene analysis, the spatial locations of the sound sources can play a role in helping to identify and extract them (
<xref rid="B138" ref-type="bibr">Shinn-Cunningham et al., 2007</xref>
;
<xref rid="B33" ref-type="bibr">Chiu et al., 2009</xref>
). In this sense, the object and spatial memories can be mutually informative – as representations of one type become more fully formed, they help inform the other (as indicated by the double arrow).</p>
</sec>
<sec>
<title>Actively driven perception</title>
<p>Animals actively probe their environment and take actions based both on their current sensory input and on the accumulated information acquired from past actions. This means that the sensory input can change dramatically from instant to instant with the animal’s actions. Perceptual continuity or coherence relies on the integration of new sensory information with the internal representation maintained by the system. Actions could be as simple as a head turn to disambiguate the location of a sound or as complex as a sequence of eye movements during visual search. Each action must be carefully selected to rapidly and reliably acquire scene information and progress toward the behavioral goal.</p>
<p>The
<italic>behavioral state</italic>
node occupies the deepest level in the scene analysis framework (
<bold>Figure
<xref ref-type="fig" rid="F6">6</xref>
</bold>
), and sits intermediate between the sensory and motor areas. On the sensory side, this node coordinates the perceptual processing necessary to achieve specific behavioral goals. There are more direct sensory-motor interactions at lower levels, but a higher-level representation of the behavioral state is needed because the current behavioral goal affects how the sensory input is processed, the appropriate action to take in response, and the sensory inputs that follow. For example, the information that must be extracted from the scene during foraging is very different from that used in mate selection. We use the term state in a broad sense to represent an animal’s current mode of behavior. This could be directed toward a specific goal (e.g., foraging for food, pursuit of a target, mate selection, etc.), and it could also represent intermediate states of the system while it progresses toward a goal, such as the speed of locomotion or the planning of target interception. The behavioral state must also represent information related to progress toward the goal: in target pursuit, the relative spatial position or predicted path of the target; in foraging, the values associated with potential food targets; in mate selection, the wide range of fitness signals that must be integrated to drive courtship behavior.</p>
<p>Top-down feedback influences or controls the types of signals extracted from the scene in both the target and spatial memories. During visual search, information about the target’s likely form and spatial location is transmitted to lower areas and helps locate it more efficiently in a scene. In auditory scene analysis, whether a subject attends to a voice, the music in the background, or the sound of something moving across the floor depends on the current task, which determines what kinds of acoustic information are extracted from the auditory input. Mate selection is another example, where highly specific information, often across multiple modalities, needs to be derived from the scene. These examples imply either that there are multiple parallel circuits in the system specialized for specific tasks, or that the neural circuits are more generic but highly reconfigurable, so that they adapt to a wide range of tasks.</p>
<p>In addition to coordinating the sensory side, the behavioral state also drives action. The action of the system is an integral part of scene analysis behavior, and analyzing the resultant motor actions has proven crucial – for example, in echolocating bats – for understanding the sensory representations and computations. In the framework, this is depicted by the path and motor planning node and an additional lower-level node for specific motor actions, responses, and locomotion. As with sensory processing, the behavioral state also influences the nature of the sensory-motor interaction at lower levels, and these interactions may have distinct neural substrates in the form of parallel circuits or a more general circuit with top-down input.</p>
<p>There is a broad range of motor actions that can aid scene analysis. On the shortest time scale,
<italic>compensatory actions</italic>
facilitate scene analysis by stabilizing the projection of the scene onto the sensor, as with smooth pursuit eye movements or the head-bobbing reflex in pigeons.
<italic>Tracking actions</italic>
are movements involved in pursuit or locomotion and are coordinated dynamically with the ongoing sensory input. These actions are driven directly by representations at lower levels, and can be further guided or modulated using information from the behavioral state, feedback, or efference copy.
<italic>Probing actions</italic>
are the most interesting from the viewpoint of scene analysis because they play a crucial role in solving otherwise insoluble problems. Accurately directed head or eye movements during visual search, already discussed above, actively probe the scene to efficiently locate the target. Other examples include head and pinna movements used to disambiguate sound source location, or haptic probing with hands or whiskers. Animals that rely on active sensing, e.g., echolocating bats and cetaceans, as well as electrolocating fish, adjust the signals they produce as they probe the environment. Probing actions are also used to aid object discrimination and identification. An actively studied question is to what extent probing actions are ideal in the sense of providing the most information about an object, and whether information gathered across multiple movements is integrated to form more accurate spatial representations or increased resolution. More generically, actions are initiated in response to an inference or decision, such as whether an animal is close enough to strike at a target, and advance the animal toward its behavioral goals.</p>
</sec>
</sec>
</sec>
<sec sec-type="conclusions">
<title>CONCLUSION</title>
<p>We have presented a framework that attempts to encompass the set of scene analysis problems that are relevant to a wide range of animals, including humans. While most of our classical notions of scene analysis come from studying aspects of human behavior, such as auditory scene segmentation and streaming (
<xref rid="B21" ref-type="bibr">Bregman, 1990</xref>
) or perceptual organization in vision (
<xref rid="B130" ref-type="bibr">Palmer, 1999</xref>
), it is clear from the perspectives presented above that scene analysis covers a much broader range of problems. Furthermore, it forces us to go beyond the laboratory setting and grapple with the issue of how animals and humans process the wide variety of complex, natural stimuli in their natural habitats. The diversity of animal systems and their natural environments provides a wealth of examples from which the most appropriate models can be selected to address specific issues in natural scene analysis.</p>
<p>We selected four animal examples that highlight these different aspects of scene analysis, but there are many other animals and behaviors that also illustrate these principles. For each of the proposed properties discussed above, one can ask to what extent a given animal requires that property for scene analysis. For example, do electric fish use higher-level structural knowledge to recognize objects? To what extent do songbirds integrate sounds across time into auditory streams? Answering questions like these will require the development of more sophisticated computational models, a better characterization of the sensory signals in natural scenes, and more detailed studies of animal perception and action in their ecological niches.</p>
<p>A given animal’s perceptual strategy will lie at some point along a continuum between simple signal detection and general purpose scene analysis, and understanding where this point is requires characterizing the limits of an animal’s abilities under a range of task difficulties. For example, a jumping spider detecting a fly against a uniform or blurred background is solving a simpler problem than detecting it against a complex surface. For a particular task, we might expect that an animal has evolved solutions that approach those of an ideal observer, given the physical constraints and task demands of the system. At some point, however, the difficulty of the task will exceed the limits of the system, e.g., how accurately does the songbird recognize song with an increasing number of competing songs? Knowing these limits will inform us about the extent to which an animal performs scene analysis and could provide important insights into how it is carried out. Fundamentally, perceptual performance is constrained by the underlying computations. One of the goals of this paper is to promote computationally guided experimental investigations that will help reveal the underlying scene analysis processes used in different animal systems.</p>
<p>For most animals, and especially human observers, we do not have good computational models for solving scene analysis tasks. We do not know, for example, how to identify objects against a complex background or under occlusion. We have an incomplete understanding of the computations required for auditory scene segregation and recognition in complex acoustic environments. Echolocation and electroreception are even more mysterious. These are not just mysteries about specializations in biology; they highlight questions about the computational principles that enable scene analysis in any system, biological or machine. Although there continues to be progress and even success in restricted domains, there are still many unsolved problems. The difficulties increase when we consider scene analysis problems beyond pattern recognition. For example, what information about the 3D environment is needed to guide locomotion? How is this extracted from the raw sensory signals, and what are efficient ways of doing this? What are the principles that govern the perception–action loop? Research on these questions is still in its early stages, and the models that come out of these efforts will be important for advancing our understanding of the computational problems in scene analysis.</p>
<p>This underscores perhaps the most important point of this article: studies of animal systems, their behavior, environment, and limitations shed light on
<italic>what scene analysis problems need to be solved</italic>
. Animals have evolved sensors and information processing systems that are optimized to carry out a repertoire of scene analysis tasks. We cannot directly observe how information is processed by the system because subserving any observable behavior is a myriad of sub-tasks working in concert. Models of those tasks constitute hypotheses about how information is processed in the system, and so the merit of a model is determined by the extent to which it explains and predicts aspects of animal behavior. Thus, uncovering the essential computations in scene analysis is a
<italic>scientific</italic>
process. This stands in contrast to engineering approaches where algorithm development is guided by performance on tasks that are well-defined but that often fail to capture the robustness and adaptability of animal systems. Furthermore, even robust and well-defined computational algorithms do not necessarily have ecological relevance. For example, auditory stream segregation is often defined with the goal of recovering the individual waveforms of the different sound sources, but this is not necessarily the problem animals need to solve. Comparisons between the computational models and biological systems are necessary to guide further development and provide a means to identify models that are the most relevant.</p>
<p>Our goal in this article is to expand the concept of scene analysis to consider how both humans and animals perceive and interact with their natural environment. In contrast to psychophysical approaches that focus on humans and carefully controlled stimuli, we emphasize the need to study how a wide range of animals deal with the complex sensory signals that arise from natural behavior in the real world. In contrast to engineering approaches to specific scene analysis problems such as object recognition or speech recognition, here we have emphasized the need for models that have potential ecological relevance and can guide experiments and inform the interpretation of data. Behavioral and physiological studies can only go so far without detailed computational models of the information processing. Recording an animal’s sensory environment and its actions is not sufficient to gain insight into the computations underlying its behavior because the range of environmental variables and behavioral repertoires is too large to be measured exhaustively. Models of the information processing guide us in how to pare down or prioritize the essential dimensions of this space. Our goal here has been to better define what information processing is needed to solve the scene analysis problems faced by both humans and animals. Viewing scene analysis from this broader perspective, we believe, holds the greatest promise for elucidating how it is solved throughout the animal kingdom.</p>
</sec>
<sec>
<title>AUTHOR CONTRIBUTIONS</title>
<p>All authors contributed equally to this work.</p>
</sec>
<sec>
<title>Conflict of Interest Statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</body>
<back>
<ack>
<p>The germination of this paper was made possible by the generous support of the Wissenschaftskolleg zu Berlin, which hosted all four authors during their sabbatical. We also thank Christopher DiMattina, Eizaburo Doi, Lakshmi Krishnan, Kevin O’Connor, John Rattcliffe, Jonathan Simon, and Christian Stilp for valuable feedback on the manuscript. This work was supported by NSF grants IIS-0705677, IIS-1111654, and ONR MURI N000140710747 (Michael S. Lewicki); NIH grant EY019965 and NSF grant IIS-0705939 (Bruno A. Olshausen); The Danish Council for Independent Research, Natural Sciences grant 272-08-0386 and grant 09-062407 FNU (Annemarie Surlykke); NIH grants MH56366 and EB004750, and NSF grant 1010193 (Cynthia F. Moss).</p>
</ack>
<ref-list>
<title>REFERENCES</title>
<ref id="B1">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Amrhein</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Kunc</surname>
<given-names>H. P.</given-names>
</name>
<name>
<surname>Naguib</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Non-territorial nightingales prospect territories during the dawn chorus.</article-title>
<source>
<italic>Proc. R. Soc. Lond. B Biol. Sci.</italic>
</source>
<volume>271(Suppl. 4)</volume>
<fpage>S167</fpage>
<lpage>S169</lpage>
<pub-id pub-id-type="doi">10.1098/rsbl.2003.0133</pub-id>
</mixed-citation>
</ref>
<ref id="B2">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Appeltants</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Gentner</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Hulse</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Balthazart</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Ball</surname>
<given-names>G.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>The effect of auditory distractors on song discrimination in male canaries (
<italic>Serinus canaria</italic>
).</article-title>
<source>
<italic>Behav. Process.</italic>
</source>
<volume>69</volume>
<fpage>331</fpage>
<lpage>341</lpage>
<pub-id pub-id-type="doi">10.1016/j.beproc.2005.01.010</pub-id>
</mixed-citation>
</ref>
<ref id="B3">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Aubin</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Jouventin</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<year>1998</year>
).
<article-title>Cocktail-party effect in king penguin colonies.</article-title>
<source>
<italic>Proc. R. Soc. Lond. B Biol. Sci.</italic>
</source>
<volume>265</volume>
<fpage>1665</fpage>
<lpage>1673</lpage>
<pub-id pub-id-type="doi">10.1098/rspb.1998.0486</pub-id>
</mixed-citation>
</ref>
<ref id="B4">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bacelo</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Engelmann</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Hollmann</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>von der Emde</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Grant</surname>
<given-names>K.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Functional foveae in an electrosensory system.</article-title>
<source>
<italic>J. Comp. Neurol.</italic>
</source>
<volume>511</volume>
<fpage>342</fpage>
<lpage>359</lpage>
<pub-id pub-id-type="doi">10.1002/cne.21843</pub-id>
<pub-id pub-id-type="pmid">18803238</pub-id>
</mixed-citation>
</ref>
<ref id="B5">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ball</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Hulse</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>1998</year>
).
<article-title>Birdsong.</article-title>
<source>
<italic>Am. Psychol.</italic>
</source>
<volume>53</volume>
<fpage>37</fpage>
<lpage>58</lpage>
<pub-id pub-id-type="doi">10.1037/0003-066X.53.1.37</pub-id>
<pub-id pub-id-type="pmid">9442582</pub-id>
</mixed-citation>
</ref>
<ref id="B6">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ballard</surname>
<given-names>D. H.</given-names>
</name>
</person-group>
(
<year>1991</year>
).
<article-title>Animate vision.</article-title>
<source>
<italic>Artif. Intell.</italic>
</source>
<volume>48</volume>
<fpage>57</fpage>
<lpage>86</lpage>
<pub-id pub-id-type="doi">10.1016/0004-3702(91)90080-4</pub-id>
</mixed-citation>
</ref>
<ref id="B7">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Barrow</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Tenenbaum</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>1978</year>
).
<article-title>Recovering intrinsic scene characteristics from images.</article-title>
<source>
<italic>Comput. Vis. Syst.</italic>
</source>
<fpage>3</fpage>
<lpage>26</lpage>
</mixed-citation>
</ref>
<ref id="B8">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bates</surname>
<given-names>M. E.</given-names>
</name>
<name>
<surname>Simmons</surname>
<given-names>J. A.</given-names>
</name>
<name>
<surname>Zorikov</surname>
<given-names>T. V.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Bats use echo harmonic structure to distinguish their targets from background clutter.</article-title>
<source>
<italic>Science</italic>
</source>
<volume>333</volume>
<fpage>627</fpage>
<lpage>630</lpage>
<pub-id pub-id-type="doi">10.1126/science.1202065</pub-id>
<pub-id pub-id-type="pmid">21798949</pub-id>
</mixed-citation>
</ref>
<ref id="B9">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bates</surname>
<given-names>M. E.</given-names>
</name>
<name>
<surname>Stamper</surname>
<given-names>S. A.</given-names>
</name>
<name>
<surname>Simmons</surname>
<given-names>J. A.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Jamming avoidance response of big brown bats in target detection.</article-title>
<source>
<italic>J. Exp. Biol.</italic>
</source>
<volume>211</volume>
<fpage>106</fpage>
<lpage>113</lpage>
<pub-id pub-id-type="doi">10.1242/jeb.009688</pub-id>
<pub-id pub-id-type="pmid">18083738</pub-id>
</mixed-citation>
</ref>
<ref id="B10">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bee</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Klump</surname>
<given-names>G.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Primitive auditory stream segregation: a neurophysiological study in the songbird forebrain.</article-title>
<source>
<italic>J. Neurophysiol.</italic>
</source>
<volume>92</volume>
<fpage>1088</fpage>
<lpage>1104</lpage>
<pub-id pub-id-type="doi">10.1152/jn.00884.2003</pub-id>
<pub-id pub-id-type="pmid">15044521</pub-id>
</mixed-citation>
</ref>
<ref id="B11">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Bee</surname>
<given-names>M. A.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>“Individual recognition in animal species,” in</article-title>
<source>
<italic>The Encyclopedia of Language and Linguistics</italic>
, Vol. 2</source>
<role>ed.</role>
<person-group person-group-type="editor">
<name>
<surname>Naguib</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<publisher-loc>London</publisher-loc>
:
<publisher-name>Elsevier</publisher-name>
)
<fpage>617</fpage>
<lpage>626</lpage>
</mixed-citation>
</ref>
<ref id="B12">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bee</surname>
<given-names>M. A.</given-names>
</name>
<name>
<surname>Micheyl</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>The cocktail party problem: what is it? How can it be solved? And why should animal behaviorists study it?</article-title>
<source>
<italic>J. Comp. Psychol.</italic>
</source>
<volume>122</volume>
<fpage>235</fpage>
<lpage>251</lpage>
<pub-id pub-id-type="doi">10.1037/0735-7036.122.3.235</pub-id>
<pub-id pub-id-type="pmid">18729652</pub-id>
</mixed-citation>
</ref>
<ref id="B13">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bell</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Sejnowski</surname>
<given-names>T.</given-names>
</name>
</person-group>
(
<year>1995</year>
).
<article-title>An information-maximization approach to blind separation and blind deconvolution.</article-title>
<source>
<italic>Neural Comput.</italic>
</source>
<volume>7</volume>
<fpage>1129</fpage>
<lpage>1159</lpage>
<pub-id pub-id-type="doi">10.1162/neco.1995.7.6.1129</pub-id>
<pub-id pub-id-type="pmid">7584893</pub-id>
</mixed-citation>
</ref>
<ref id="B14">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Benney</surname>
<given-names>K. S.</given-names>
</name>
<name>
<surname>Braaten</surname>
<given-names>R. F.</given-names>
</name>
</person-group>
(
<year>2000</year>
).
<article-title>Auditory scene analysis in estrildid finches (
<italic>Taeniopygia guttata</italic>
and
<italic>Lonchura striata domestica</italic>
): a species advantage for detection of conspecific song.</article-title>
<source>
<italic>J. Comp. Psychol.</italic>
</source>
<volume>114</volume>
<fpage>174</fpage>
<lpage>182</lpage>
<pub-id pub-id-type="doi">10.1037/0735-7036.114.2.174</pub-id>
<pub-id pub-id-type="pmid">10890589</pub-id>
</mixed-citation>
</ref>
<ref id="B15">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bergen</surname>
<given-names>J. R.</given-names>
</name>
<name>
<surname>Julesz</surname>
<given-names>B.</given-names>
</name>
</person-group>
(
<year>1983</year>
).
<article-title>Parallel versus serial processing in rapid pattern discrimination.</article-title>
<source>
<italic>Nature</italic>
</source>
<volume>303</volume>
<fpage>696</fpage>
<lpage>698</lpage>
<pub-id pub-id-type="doi">10.1038/303696a0</pub-id>
<pub-id pub-id-type="pmid">6855915</pub-id>
</mixed-citation>
</ref>
<ref id="B16">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Biederman</surname>
<given-names>I.</given-names>
</name>
</person-group>
(
<year>1987</year>
).
<article-title>Recognition-by-components: a theory of human image understanding.</article-title>
<source>
<italic>Psychol. Rev.</italic>
</source>
<volume>94</volume>
<fpage>115</fpage>
<lpage>147</lpage>
<pub-id pub-id-type="doi">10.1037/0033-295X.94.2.115</pub-id>
<pub-id pub-id-type="pmid">3575582</pub-id>
</mixed-citation>
</ref>
<ref id="B17">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Blanz</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Vetter</surname>
<given-names>T.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Face recognition based on fitting a 3D morphable model.</article-title>
<source>
<italic>IEEE Trans. Pattern Anal. Mach. Intell.</italic>
</source>
<volume>25</volume>
<fpage>1063</fpage>
<lpage>1074</lpage>
<pub-id pub-id-type="doi">10.1109/TPAMI.2003.1227983</pub-id>
</mixed-citation>
</ref>
<ref id="B18">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Blauert</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>1997</year>
).
<article-title>
<italic>Spatial Hearing</italic>
.</article-title>
<publisher-loc>Cambridge, MA</publisher-loc>
:
<publisher-name>MIT Press</publisher-name>
</mixed-citation>
</ref>
<ref id="B19">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Braaten</surname>
<given-names>R. F.</given-names>
</name>
</person-group>
(
<year>2000</year>
).
<article-title>Multiple levels of representation of song by European starlings (
<italic>Sturnus vulgaris</italic>
): open-ended categorization of starling song types and differential forgetting of song categories and exemplars.</article-title>
<source>
<italic>J. Comp. Psychol.</italic>
</source>
<volume>114</volume>
<fpage>61</fpage>
<lpage>72</lpage>
<pub-id pub-id-type="doi">10.1037/0735-7036.114.1.61</pub-id>
<pub-id pub-id-type="pmid">10739312</pub-id>
</mixed-citation>
</ref>
<ref id="B20">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Braaten</surname>
<given-names>R. F.</given-names>
</name>
<name>
<surname>Leary</surname>
<given-names>J. C.</given-names>
</name>
</person-group>
(
<year>1999</year>
).
<article-title>Temporal induction of missing birdsong segments in European starlings.</article-title>
<source>
<italic>Psychol. Sci.</italic>
</source>
<volume>10</volume>
<fpage>162</fpage>
<lpage>166</lpage>
<pub-id pub-id-type="doi">10.1111/1467-9280.00125</pub-id>
</mixed-citation>
</ref>
<ref id="B21">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Bregman</surname>
<given-names>A. S.</given-names>
</name>
</person-group>
(
<year>1990</year>
).
<article-title>
<italic>Auditory Scene Analysis: The Perceptual Organization of Sound</italic>
.</article-title>
<publisher-loc>Cambridge, MA</publisher-loc>
:
<publisher-name>MIT Press</publisher-name>
</mixed-citation>
</ref>
<ref id="B22">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Brémond</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>1976</year>
).
<article-title>Specific recognition in song of Bonelli’s warbler (
<italic>Phylloscopus bonelli</italic>
).</article-title>
<source>
<italic>Behaviour</italic>
</source>
<volume>58</volume>
<fpage>99</fpage>
<lpage>116</lpage>
<pub-id pub-id-type="doi">10.1163/156853976X00253</pub-id>
</mixed-citation>
</ref>
<ref id="B23">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Brémond</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>1978</year>
).
<article-title>Acoustic competition between song of wren (
<italic>Troglodytes troglodytes</italic>
) and songs of other species.</article-title>
<source>
<italic>Behaviour</italic>
</source>
<volume>65</volume>
<fpage>89</fpage>
<lpage>98</lpage>
<pub-id pub-id-type="doi">10.1163/156853978X00549</pub-id>
</mixed-citation>
</ref>
<ref id="B24">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Brumm</surname>
<given-names>H.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Signalling through acoustic windows: nightingales avoid interspecific competition by short-term adjustment of song timing.</article-title>
<source>
<italic>J. Comp. Physiol. A</italic>
</source>
<volume>192</volume>
<fpage>1279</fpage>
<lpage>1285</lpage>
<pub-id pub-id-type="doi">10.1007/s00359-006-0158-x</pub-id>
</mixed-citation>
</ref>
<ref id="B25">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Brumm</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Naguib</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Environmental acoustics and the evolution of bird song.</article-title>
<source>
<italic>Adv. Study Behav.</italic>
</source>
<volume>40</volume>
<fpage>1</fpage>
<lpage>33</lpage>
<pub-id pub-id-type="doi">10.1016/S0065-3454(09)40001-9</pub-id>
</mixed-citation>
</ref>
<ref id="B26">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Burgess</surname>
<given-names>N.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Spatial cognition and the brain.</article-title>
<source>
<italic>Ann. N. Y. Acad. Sci.</italic>
</source>
<volume>1124</volume>
<fpage>77</fpage>
<lpage>97</lpage>
<pub-id pub-id-type="doi">10.1196/annals.1440.002</pub-id>
<pub-id pub-id-type="pmid">18400925</pub-id>
</mixed-citation>
</ref>
<ref id="B27">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Byrne</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Becker</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Burgess</surname>
<given-names>N.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Remembering the past and imagining the future: a neural model of spatial memory and imagery.</article-title>
<source>
<italic>Psychol. Rev.</italic>
</source>
<volume>114</volume>
<fpage>340</fpage>
<lpage>375</lpage>
<pub-id pub-id-type="doi">10.1037/0033-295X.114.2.340</pub-id>
<pub-id pub-id-type="pmid">17500630</pub-id>
</mixed-citation>
</ref>
<ref id="B28">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Carlson</surname>
<given-names>B.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>Electric signaling behavior and the mechanisms of electric organ discharge production in mormyrid fish.</article-title>
<source>
<italic>J. Physiol. Paris</italic>
</source>
<volume>96</volume>
<fpage>405</fpage>
<lpage>419</lpage>
<pub-id pub-id-type="doi">10.1016/S0928-4257(03)00019-6</pub-id>
<pub-id pub-id-type="pmid">14692489</pub-id>
</mixed-citation>
</ref>
<ref id="B29">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Carlson</surname>
<given-names>B. A.</given-names>
</name>
<name>
<surname>Hopkins</surname>
<given-names>C. D.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Stereotyped temporal patterns in electrical communication.</article-title>
<source>
<italic>Anim. Behav.</italic>
</source>
<volume>68</volume>
<fpage>867</fpage>
<lpage>878</lpage>
<pub-id pub-id-type="doi">10.1016/j.anbehav.2003.10.031</pub-id>
</mixed-citation>
</ref>
<ref id="B30">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cashman</surname>
<given-names>T. J.</given-names>
</name>
<name>
<surname>Fitzgibbon</surname>
<given-names>A. W.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<article-title>What shape are dolphins? Building 3D morphable models from 2D images.</article-title>
<source>
<italic>IEEE Trans. Pattern Anal. Mach. Intell</italic>
.</source>
<volume>35</volume>
<fpage>232</fpage>
<lpage>244</lpage>
<pub-id pub-id-type="doi">10.1109/TPAMI.2012.68</pub-id>
<pub-id pub-id-type="pmid">22392707</pub-id>
</mixed-citation>
</ref>
<ref id="B31">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Catchpole</surname>
<given-names>C. K.</given-names>
</name>
<name>
<surname>Slater</surname>
<given-names>P. J. B.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>
<italic>Bird Song: Biological Themes and Variations</italic>
.</article-title>
<publisher-loc>Cambridge</publisher-loc>
:
<publisher-name>Cambridge University Press</publisher-name>
</mixed-citation>
</ref>
<ref id="B32">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cherry</surname>
<given-names>E. C.</given-names>
</name>
</person-group>
(
<year>1953</year>
).
<article-title>Some experiments on the recognition of speech, with one and with two ears.</article-title>
<source>
<italic>J. Acoust. Soc. Am.</italic>
</source>
<volume>25</volume>
<fpage>975</fpage>
<lpage>979</lpage>
<pub-id pub-id-type="doi">10.1121/1.1907229</pub-id>
</mixed-citation>
</ref>
<ref id="B33">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chiu</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Xian</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Moss</surname>
<given-names>C. F.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Adaptive echolocation behavior in bats for the analysis of auditory scenes.</article-title>
<source>
<italic>J. Exp. Biol.</italic>
</source>
<volume>212</volume>
<fpage>1392</fpage>
<lpage>1404</lpage>
<pub-id pub-id-type="doi">10.1242/jeb.027045</pub-id>
<pub-id pub-id-type="pmid">19376960</pub-id>
</mixed-citation>
</ref>
<ref id="B34">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Clark</surname>
<given-names>D. L.</given-names>
</name>
<name>
<surname>Uetz</surname>
<given-names>G. W.</given-names>
</name>
</person-group>
(
<year>1990</year>
).
<article-title>Video image recognition by the jumping spider,
<italic>Maevia inclemens</italic>
(Araneae: Salticidae).</article-title>
<source>
<italic>Anim. Behav.</italic>
</source>
<volume>40</volume>
<fpage>884</fpage>
<lpage>890</lpage>
<pub-id pub-id-type="doi">10.1016/S0003-3472(05)80990-X%</pub-id>
</mixed-citation>
</ref>
<ref id="B35">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Colby</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Goldberg</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>1999</year>
).
<article-title>Space and attention in parietal cortex.</article-title>
<source>
<italic>Annu. Rev. Neurosci.</italic>
</source>
<volume>22</volume>
<fpage>319</fpage>
<lpage>349</lpage>
<pub-id pub-id-type="doi">10.1146/annurev.neuro.22.1.319%</pub-id>
<pub-id pub-id-type="pmid">10202542</pub-id>
</mixed-citation>
</ref>
<ref id="B36">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cooke</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>A glimpsing model of speech perception in noise.</article-title>
<source>
<italic>J. Acoust. Soc. Am.</italic>
</source>
<volume>119</volume>
<fpage>1562</fpage>
<lpage>1573</lpage>
<pub-id pub-id-type="doi">10.1121/1.2166600</pub-id>
<pub-id pub-id-type="pmid">16583901</pub-id>
</mixed-citation>
</ref>
<ref id="B37">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cooke</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Hershey</surname>
<given-names>J. R.</given-names>
</name>
<name>
<surname>Rennie</surname>
<given-names>S. J.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Monaural speech separation and recognition challenge.</article-title>
<source>
<italic>Comput. Speech Lang.</italic>
</source>
<volume>24</volume>
<fpage>1</fpage>
<lpage>15</lpage>
<pub-id pub-id-type="doi">10.1016/j.csl.2009.02.006</pub-id>
</mixed-citation>
</ref>
<ref id="B38">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cutting</surname>
<given-names>J. E.</given-names>
</name>
<name>
<surname>Vishton</surname>
<given-names>P. M.</given-names>
</name>
</person-group>
(
<year>1995</year>
).
<article-title>Perceiving layout and knowing distances: the integration, relative potency, and contextual use of different information about depth.</article-title>
<source>
<italic>Percept. Space Motion</italic>
</source>
<volume>5</volume>
<fpage>69</fpage>
<lpage>117</lpage>
<pub-id pub-id-type="doi">10.1016/B978-012240530-3/50005-5</pub-id>
</mixed-citation>
</ref>
<ref id="B39">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Darwin</surname>
<given-names>C. J.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Listening to speech in the presence of other sounds.</article-title>
<source>
<italic>Philos. Trans. R. Soc. Lond. B Biol. Sci.</italic>
</source>
<volume>363</volume>
<fpage>1011</fpage>
<lpage>1021</lpage>
<pub-id pub-id-type="doi">10.1098/rstb.2007.2156</pub-id>
<pub-id pub-id-type="pmid">17827106</pub-id>
</mixed-citation>
</ref>
<ref id="B40">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Davis</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Biddulph</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Balashek</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>1952</year>
).
<article-title>Automatic recognition of spoken digits.</article-title>
<source>
<italic>J. Acoust. Soc. Am.</italic>
</source>
<volume>24</volume>
<fpage>637</fpage>
<lpage>642</lpage>
<pub-id pub-id-type="doi">10.1121/1.1906946</pub-id>
</mixed-citation>
</ref>
<ref id="B41">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dent</surname>
<given-names>M. L.</given-names>
</name>
<name>
<surname>Dooling</surname>
<given-names>R. J.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>The precedence effect in three species of birds (
<italic>Melopsittacus undulatus</italic>
,
<italic>Serinus canaria</italic>
, and
<italic>Taeniopygia guttata</italic>
).</article-title>
<source>
<italic>J. Comp. Psychol.</italic>
</source>
<volume>118</volume>
<fpage>325</fpage>
<lpage>331</lpage>
<pub-id pub-id-type="doi">10.1037/0735-7036.118.3.325</pub-id>
<pub-id pub-id-type="pmid">15482060</pub-id>
</mixed-citation>
</ref>
<ref id="B42">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dent</surname>
<given-names>M. L.</given-names>
</name>
<name>
<surname>McClaine</surname>
<given-names>E. M.</given-names>
</name>
<name>
<surname>Best</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Ozmeral</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Narayan</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Gallun</surname>
<given-names>F. J.</given-names>
</name>
<etal></etal>
</person-group>
(
<year>2009</year>
).
<article-title>Spatial unmasking of birdsong in zebra finches (
<italic>Taeniopygia guttata</italic>
) and budgerigars (
<italic>Melopsittacus undulatus</italic>
).</article-title>
<source>
<italic>J. Comp. Psychol.</italic>
</source>
<volume>123</volume>
<fpage>357</fpage>
<lpage>367</lpage>
<pub-id pub-id-type="doi">10.1037/a0016898</pub-id>
<pub-id pub-id-type="pmid">19929104</pub-id>
</mixed-citation>
</ref>
<ref id="B43">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Drees</surname>
<given-names>O.</given-names>
</name>
</person-group>
(
<year>1952</year>
).
<article-title>Untersuchungen über die angeborenen Verhaltensweisen bei Springspinnen (Salticidae).</article-title>
<source>
<italic>Z. Tierpsychol.</italic>
</source>
<volume>9</volume>
<fpage>169</fpage>
<lpage>207</lpage>
<pub-id pub-id-type="doi">10.1111/j.1439-0310.1952.tb01849.x</pub-id>
</mixed-citation>
</ref>
<ref id="B44">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Edelman</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>1999</year>
).
<article-title>
<italic>Representation and Recognition in Vision</italic>
.</article-title>
<publisher-loc>Cambridge, MA</publisher-loc>
:
<publisher-name>MIT Press</publisher-name>
</mixed-citation>
</ref>
<ref id="B45">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Elder</surname>
<given-names>J. H.</given-names>
</name>
<name>
<surname>Goldberg</surname>
<given-names>R. M.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>Ecological statistics of Gestalt laws for the perceptual organization of contours.</article-title>
<source>
<italic>J. Vis.</italic>
</source>
<volume>2</volume>
<fpage>324</fpage>
<lpage>353</lpage>
<pub-id pub-id-type="doi">10.1167/2.4.5</pub-id>
<pub-id pub-id-type="pmid">12678582</pub-id>
</mixed-citation>
</ref>
<ref id="B46">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Epstein</surname>
<given-names>R. A.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Parahippocampal and retrosplenial contributions to human spatial navigation.</article-title>
<source>
<italic>Trends Cogn. Sci.</italic>
</source>
<volume>12</volume>
<fpage>388</fpage>
<lpage>396</lpage>
<pub-id pub-id-type="doi">10.1016/j.tics.2008.07.004</pub-id>
<pub-id pub-id-type="pmid">18760955</pub-id>
</mixed-citation>
</ref>
<ref id="B47">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Falk</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Williams</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Aytekin</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Moss</surname>
<given-names>C. F.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Adaptive behavior for texture discrimination by the free-flying big brown bat, <italic>Eptesicus fuscus</italic>.</article-title>
<source>
<italic>J. Comp. Physiol. A</italic>
</source>
<volume>197</volume>
<fpage>491</fpage>
<lpage>503</lpage>
<pub-id pub-id-type="doi">10.1007/s00359-010-0621-6</pub-id>
</mixed-citation>
</ref>
<ref id="B48">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Feng</surname>
<given-names>A. S.</given-names>
</name>
<name>
<surname>Schul</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>“Sound processing in real-world environments,” in</article-title>
<source>
<italic>Hearing and Sound Communication in Amphibians</italic>
</source>
<role>eds</role>
<person-group person-group-type="editor">
<name>
<surname>Narins</surname>
<given-names>P. M.</given-names>
</name>
<name>
<surname>Feng</surname>
<given-names>A. S.</given-names>
</name>
<name>
<surname>Fay</surname>
<given-names>R. R.</given-names>
</name>
<name>
<surname>Popper</surname>
<given-names>A. N.</given-names>
</name>
</person-group>
(
<publisher-loc>New York</publisher-loc>
:
<publisher-name>Springer</publisher-name>
)
<fpage>323</fpage>
<lpage>350</lpage>
<pub-id pub-id-type="doi">10.1007/978-0-387-47796-1_11</pub-id>
</mixed-citation>
</ref>
<ref id="B49">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Field</surname>
<given-names>G. D.</given-names>
</name>
<name>
<surname>Chichilnisky</surname>
<given-names>E. J.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Information processing in the primate retina: circuitry and coding.</article-title>
<source>
<italic>Annu. Rev. Neurosci.</italic>
</source>
<volume>30</volume>
<fpage>1</fpage>
<lpage>30</lpage>
<pub-id pub-id-type="doi">10.1146/annurev.neuro.30.051606.094252</pub-id>
<pub-id pub-id-type="pmid">17335403</pub-id>
</mixed-citation>
</ref>
<ref id="B50">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Frisby</surname>
<given-names>J. P.</given-names>
</name>
<name>
<surname>Stone</surname>
<given-names>J. V.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>
<italic>Seeing</italic>
, 2nd Edn.</article-title>
<publisher-loc>Cambridge, MA</publisher-loc>
:
<publisher-name>The MIT Press</publisher-name>
</mixed-citation>
</ref>
<ref id="B51">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gao</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Balakrishnan</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>He</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Yan</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Müller</surname>
<given-names>R.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Ear deformations give bats a physical mechanism for fast adaptation of ultrasonic beam patterns.</article-title>
<source>
<italic>Phys. Rev. Lett.</italic>
</source>
<volume>107</volume>
<issue>214301</issue>
<pub-id pub-id-type="doi">10.1103/PhysRevLett.107.214301</pub-id>
</mixed-citation>
</ref>
<ref id="B52">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Geipel</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Jung</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Kalko</surname>
<given-names>E. K. V.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<article-title>Perception of silent and motionless prey on vegetation by echolocation in the gleaning bat
<italic>Micronycteris microtis</italic>
.</article-title>
<source>
<italic>Proc. R. Soc. B</italic>
</source>
<volume>280</volume>
<issue>20122830</issue>
<pub-id pub-id-type="doi">10.1098/rspb.2012.2830</pub-id>
</mixed-citation>
</ref>
<ref id="B53">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Geisler</surname>
<given-names>W. S.</given-names>
</name>
<name>
<surname>Perry</surname>
<given-names>J. S.</given-names>
</name>
<name>
<surname>Super</surname>
<given-names>B. J.</given-names>
</name>
<name>
<surname>Gallogly</surname>
<given-names>D. P.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>Edge co-occurrence in natural images predicts contour grouping performance.</article-title>
<source>
<italic>Vision Res.</italic>
</source>
<volume>41</volume>
<fpage>711</fpage>
<lpage>724</lpage>
<pub-id pub-id-type="doi">10.1016/S0042-6989(00)00277-7</pub-id>
<pub-id pub-id-type="pmid">11248261</pub-id>
</mixed-citation>
</ref>
<ref id="B54">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Gerhardt</surname>
<given-names>H. C.</given-names>
</name>
<name>
<surname>Bee</surname>
<given-names>M. A.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>“Recognition and localization of acoustic signals,” in</article-title>
<source>
<italic>Hearing and Sound Communication in Amphibians</italic>
</source>
<role>eds</role>
<person-group person-group-type="editor">
<name>
<surname>Narins</surname>
<given-names>P. M.</given-names>
</name>
<name>
<surname>Feng</surname>
<given-names>A. S.</given-names>
</name>
<name>
<surname>Fay</surname>
<given-names>R. R.</given-names>
</name>
<name>
<surname>Popper</surname>
<given-names>A. N.</given-names>
</name>
</person-group>
(
<publisher-loc>New York</publisher-loc>
:
<publisher-name>Springer</publisher-name>
)
<fpage>113</fpage>
<lpage>146</lpage>
<pub-id pub-id-type="doi">10.1007/978-0-387-47796-1_5</pub-id>
</mixed-citation>
</ref>
<ref id="B55">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ghose</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Moss</surname>
<given-names>C. F.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>The sonar beam pattern of a flying bat as it tracks tethered insects.</article-title>
<source>
<italic>J. Acoust. Soc. Am.</italic>
</source>
<volume>114</volume>
<fpage>1120</fpage>
<lpage>1131</lpage>
<pub-id pub-id-type="doi">10.1121/1.1589754</pub-id>
<pub-id pub-id-type="pmid">12942989</pub-id>
</mixed-citation>
</ref>
<ref id="B56">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Gibson</surname>
<given-names>J. J.</given-names>
</name>
</person-group>
(
<year>1950</year>
).
<article-title>
<italic>The Perception of the Visual World</italic>
.</article-title>
<publisher-loc>Boston</publisher-loc>
:
<publisher-name>Houghton Mifflin</publisher-name>
</mixed-citation>
</ref>
<ref id="B57">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gibson</surname>
<given-names>J. J.</given-names>
</name>
</person-group>
(
<year>1958</year>
).
<article-title>Visually controlled locomotion and visual orientation in animals.</article-title>
<source>
<italic>Br. J. Psychol.</italic>
</source>
<volume>49</volume>
<fpage>182</fpage>
<lpage>194</lpage>
<pub-id pub-id-type="doi">10.1111/j.2044-8295.1958.tb00656.x</pub-id>
<pub-id pub-id-type="pmid">13572790</pub-id>
</mixed-citation>
</ref>
<ref id="B58">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Gibson</surname>
<given-names>J. J.</given-names>
</name>
</person-group>
(
<year>1966</year>
).
<article-title>
<italic>The Senses Considered as Perceptual Systems</italic>
.</article-title>
<publisher-loc>New York, NY</publisher-loc>
:
<publisher-name>Praeger</publisher-name>
</mixed-citation>
</ref>
<ref id="B59">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Gibson</surname>
<given-names>J. J.</given-names>
</name>
</person-group>
(
<year>1979</year>
).
<article-title>
<italic>The Ecological Approach to Visual Perception</italic>
.</article-title>
<publisher-loc>Boston</publisher-loc>
:
<publisher-name>Houghton Mifflin</publisher-name>
</mixed-citation>
</ref>
<ref id="B60">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Gold</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Morgan</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Ellis</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<source>
<italic>Speech and Audio Signal Processing</italic>
.</source>
<publisher-loc>New York, NY</publisher-loc>
:
<publisher-name>Wiley-Interscience</publisher-name>
<pub-id pub-id-type="doi">10.1002/9781118142882</pub-id>
</mixed-citation>
</ref>
<ref id="B61">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gollisch</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Meister</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Eye smarter than scientists believed: neural computations in circuits of the retina.</article-title>
<source>
<italic>Neuron</italic>
</source>
<volume>65</volume>
<fpage>150</fpage>
<lpage>164</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuron.2009.12.009</pub-id>
<pub-id pub-id-type="pmid">20152123</pub-id>
</mixed-citation>
</ref>
<ref id="B62">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Gould</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Baumstarck</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Quigley</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Ng</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Koller</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>“Integrating visual and range data for robotic object detection,” in</article-title>
<source>
<italic>Proceedings of the European Conference on Computer Vision (ECCV) Workshop on Multi-camera and Multi-modal Sensor Fusion Algorithms and Applications</italic>
</source>
<publisher-loc>Marseille, France</publisher-loc>
</mixed-citation>
</ref>
<ref id="B63">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Harris</surname>
<given-names>L. R.</given-names>
</name>
<name>
<surname>Jenkin</surname>
<given-names>M. R. M.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<source>
<italic>Vision in 3D Environments</italic>
.</source>
<publisher-loc>Cambridge</publisher-loc>
:
<publisher-name>Cambridge University Press</publisher-name>
<pub-id pub-id-type="doi">10.1017/CBO9780511736261</pub-id>
</mixed-citation>
</ref>
<ref id="B64">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Hartley</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Zisserman</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<source>
<italic>Multiple View Geometry in Computer Vision.</italic>
</source>
<publisher-loc>New York, NY</publisher-loc>
:
<publisher-name>Cambridge University Press</publisher-name>
<pub-id pub-id-type="doi">10.1017/CBO9780511811685</pub-id>
</mixed-citation>
</ref>
<ref id="B65">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Heiligenberg</surname>
<given-names>W.</given-names>
</name>
</person-group>
(
<year>1977</year>
).
<article-title>
<italic>Principles of Electrolocation and Jamming Avoidance in Electric Fish: A Neuroethological Approach</italic>
.</article-title>
<publisher-loc>Berlin</publisher-loc>
:
<publisher-name>Springer-Verlag</publisher-name>
</mixed-citation>
</ref>
<ref id="B66">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Henderson</surname>
<given-names>J. M.</given-names>
</name>
<name>
<surname>Hollingworth</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>1999</year>
).
<article-title>High-level scene perception.</article-title>
<source>
<italic>Annu. Rev. Psychol.</italic>
</source>
<volume>50</volume>
<fpage>243</fpage>
<lpage>271</lpage>
<pub-id pub-id-type="doi">10.1146/annurev.psych.50.1.243</pub-id>
<pub-id pub-id-type="pmid">10074679</pub-id>
</mixed-citation>
</ref>
<ref id="B67">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hill</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>1979</year>
).
<article-title>Orientation by jumping spiders of the genus
<italic>Phidippus</italic>
(Araneae, Salticidae) during the pursuit of prey.</article-title>
<source>
<italic>Behav. Ecol. Sociobiol.</italic>
</source>
<volume>5</volume>
<fpage>301</fpage>
<lpage>322</lpage>
<pub-id pub-id-type="doi">10.1007/BF00293678</pub-id>
</mixed-citation>
</ref>
<ref id="B68">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hiryu</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Bates</surname>
<given-names>M. E.</given-names>
</name>
<name>
<surname>Simmons</surname>
<given-names>J. A.</given-names>
</name>
<name>
<surname>Riquimaroux</surname>
<given-names>H.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>FM echolocating bats shift frequencies to avoid broadcast-echo ambiguity in clutter.</article-title>
<source>
<italic>Proc. Natl. Acad. Sci. U.S.A.</italic>
</source>
<volume>107</volume>
<fpage>7048</fpage>
<lpage>7053</lpage>
<pub-id pub-id-type="doi">10.1073/pnas.1000429107</pub-id>
</mixed-citation>
</ref>
<ref id="B69">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Hoiem</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Savarese</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<source>
<italic>Representations and Techniques for 3D Object Recognition and Scene Interpretation</italic>
.</source>
<publisher-loc>San Rafael</publisher-loc>
:
<publisher-name>Morgan & Claypool</publisher-name>
</mixed-citation>
</ref>
<ref id="B70">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hulse</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>MacDougall-Shackleton</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Wisniewski</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>1997</year>
).
<article-title>Auditory scene analysis by songbirds: stream segregation of birdsong by European starlings (
<italic>Sturnus vulgaris</italic>
).</article-title>
<source>
<italic>J. Comp. Psychol.</italic>
</source>
<volume>111</volume>
<fpage>3</fpage>
<lpage>13</lpage>
<pub-id pub-id-type="doi">10.1037/0735-7036.111.1.3</pub-id>
<pub-id pub-id-type="pmid">9090135</pub-id>
</mixed-citation>
</ref>
<ref id="B71">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hulse</surname>
<given-names>S. H.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>Auditory scene analysis in animal communication.</article-title>
<source>
<italic>Adv. Study Behav.</italic>
</source>
<volume>31</volume>
<fpage>163</fpage>
<lpage>200</lpage>
<pub-id pub-id-type="doi">10.1016/S0065-3454(02)80008-0</pub-id>
</mixed-citation>
</ref>
<ref id="B72">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Hyvarinen</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Karhunen</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Oja</surname>
<given-names>E.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>
<italic>Independent Component Analysis</italic>
.</article-title>
<publisher-loc>New York, NY</publisher-loc>
:
<publisher-name>Wiley-Interscience</publisher-name>
</mixed-citation>
</ref>
<ref id="B73">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jackson</surname>
<given-names>R. R.</given-names>
</name>
<name>
<surname>Pollard</surname>
<given-names>S. D.</given-names>
</name>
</person-group>
(
<year>1996</year>
).
<article-title>Predatory behavior of jumping spiders.</article-title>
<source>
<italic>Annu. Rev. Entomol.</italic>
</source>
<volume>41</volume>
<fpage>287</fpage>
<lpage>308</lpage>
<pub-id pub-id-type="doi">10.1146/annurev.en.41.010196.001443</pub-id>
<pub-id pub-id-type="pmid">15012331</pub-id>
</mixed-citation>
</ref>
<ref id="B74">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jakobsen</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Ratcliffe</surname>
<given-names>J. M.</given-names>
</name>
<name>
<surname>Surlykke</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<article-title>Convergent acoustic field of view in echolocating bats.</article-title>
<source>
<italic>Nature</italic>
</source>
<volume>493</volume>
<fpage>93</fpage>
<lpage>96</lpage>
<pub-id pub-id-type="doi">10.1038/nature11664</pub-id>
<pub-id pub-id-type="pmid">23172147</pub-id>
</mixed-citation>
</ref>
<ref id="B75">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jung</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Kalko</surname>
<given-names>E. K. V.</given-names>
</name>
<name>
<surname>von Helversen</surname>
<given-names>O.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Echolocation calls in Central American emballonurid bats: signal design and call frequency alternation.</article-title>
<source>
<italic>J. Zool.</italic>
</source>
<volume>272</volume>
<fpage>125</fpage>
<lpage>137</lpage>
<pub-id pub-id-type="doi">10.1111/j.1469-7998.2006.00250.x</pub-id>
</mixed-citation>
</ref>
<ref id="B76">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Karklin</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Lewicki</surname>
<given-names>M. S.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Emergence of complex cell properties by learning to generalize in natural scenes.</article-title>
<source>
<italic>Nature</italic>
</source>
<volume>457</volume>
<fpage>83</fpage>
<lpage>86</lpage>
<pub-id pub-id-type="doi">10.1038/nature07481</pub-id>
<pub-id pub-id-type="pmid">19020501</pub-id>
</mixed-citation>
</ref>
<ref id="B77">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kersten</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Mamassian</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Yuille</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Object perception as Bayesian inference.</article-title>
<source>
<italic>Annu. Rev. Psychol.</italic>
</source>
<volume>55</volume>
<fpage>271</fpage>
<lpage>304</lpage>
<pub-id pub-id-type="doi">10.1146/annurev.psych.55.090902.142005</pub-id>
<pub-id pub-id-type="pmid">14744217</pub-id>
</mixed-citation>
</ref>
<ref id="B78">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kersten</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Yuille</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Bayesian models of object perception.</article-title>
<source>
<italic>Curr. Opin. Neurobiol.</italic>
</source>
<volume>13</volume>
<fpage>150</fpage>
<lpage>158</lpage>
<pub-id pub-id-type="doi">10.1016/S0959-4388(03)00042-4</pub-id>
<pub-id pub-id-type="pmid">12744967</pub-id>
</mixed-citation>
</ref>
<ref id="B79">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Klump</surname>
<given-names>G.</given-names>
</name>
</person-group>
(
<year>2000</year>
).
<article-title>“Sound localization in birds,” in</article-title>
<source>
<italic>Comparative Hearing: Birds and Reptiles</italic>
</source>
<role>eds</role>
<person-group person-group-type="editor">
<name>
<surname>Dooling</surname>
<given-names>R. J.</given-names>
</name>
<name>
<surname>Fay</surname>
<given-names>R. R.</given-names>
</name>
<name>
<surname>Popper</surname>
<given-names>A. N.</given-names>
</name>
</person-group>
(
<publisher-loc>New York</publisher-loc>
:
<publisher-name>Springer</publisher-name>
)
<fpage>249</fpage>
<lpage>307</lpage>
<pub-id pub-id-type="doi">10.1007/978-1-4612-1182-2_6</pub-id>
</mixed-citation>
</ref>
<ref id="B80">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Klump</surname>
<given-names>G. M.</given-names>
</name>
<name>
<surname>Larsen</surname>
<given-names>O. N.</given-names>
</name>
</person-group>
(
<year>1992</year>
).
<article-title>Azimuthal sound localization in the European starling (
<italic>Sturnus vulgaris</italic>
): 1. Physical binaural cues.</article-title>
<source>
<italic>J. Comp. Physiol. A</italic>
</source>
<volume>170</volume>
<fpage>243</fpage>
<lpage>251</lpage>
<pub-id pub-id-type="doi">10.1007/BF00196906</pub-id>
<pub-id pub-id-type="pmid">1583608</pub-id>
</mixed-citation>
</ref>
<ref id="B81">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Land</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>1985</year>
).
<article-title>Fields of view of the eyes of primitive jumping spiders.</article-title>
<source>
<italic>J. Exp. Biol.</italic>
</source>
<volume>119</volume>
<fpage>381</fpage>
<lpage>384</lpage>
</mixed-citation>
</ref>
<ref id="B82">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Land</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>1999</year>
).
<article-title>Motion and vision: why animals move their eyes.</article-title>
<source>
<italic>J. Comp. Physiol. A</italic>
</source>
<volume>185</volume>
<fpage>341</fpage>
<lpage>352</lpage>
<pub-id pub-id-type="doi">10.1007/s003590050393</pub-id>
<pub-id pub-id-type="pmid">10555268</pub-id>
</mixed-citation>
</ref>
<ref id="B83">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Land</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Furneaux</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>1997</year>
).
<article-title>The knowledge base of the oculomotor system.</article-title>
<source>
<italic>Philos. Trans. R. Soc. Lond. B Biol. Sci.</italic>
</source>
<volume>352</volume>
<fpage>1231</fpage>
<lpage>1239</lpage>
<pub-id pub-id-type="doi">10.1098/rstb.1997.0105</pub-id>
<pub-id pub-id-type="pmid">9304689</pub-id>
</mixed-citation>
</ref>
<ref id="B84">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Land</surname>
<given-names>M. F.</given-names>
</name>
</person-group>
(
<year>1969</year>
).
<article-title>Movements of the retinae of jumping spiders (Salticidae: Dendryphantinae) in response to visual stimuli.</article-title>
<source>
<italic>J. Exp. Biol.</italic>
</source>
<volume>51</volume>
<fpage>471</fpage>
<lpage>493</lpage>
<pub-id pub-id-type="pmid">5351426</pub-id>
</mixed-citation>
</ref>
<ref id="B85">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Land</surname>
<given-names>M. F.</given-names>
</name>
</person-group>
(
<year>1972</year>
).
<article-title>“A comparison of the visual behavior of a predatory arthropod with that of a mammal,” in</article-title>
<source>
<italic>The Neurosciences Third Study Program</italic>
</source>
<role>ed.</role>
<person-group person-group-type="editor">
<name>
<surname>Wiersma</surname>
<given-names>C. A. G.</given-names>
</name>
</person-group>
(
<publisher-loc>Cambridge, MA</publisher-loc>
:
<publisher-name>MIT Press</publisher-name>
)
<fpage>411</fpage>
<lpage>418</lpage>
</mixed-citation>
</ref>
<ref id="B86">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Land</surname>
<given-names>M. F.</given-names>
</name>
<name>
<surname>Hayhoe</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>In what ways do eye movements contribute to everyday activities?</article-title>
<source>
<italic>Vision Res.</italic>
</source>
<volume>41</volume>
<fpage>3559</fpage>
<lpage>3565</lpage>
<pub-id pub-id-type="doi">10.1016/S0042-6989(01)00102-X</pub-id>
<pub-id pub-id-type="pmid">11718795</pub-id>
</mixed-citation>
</ref>
<ref id="B87">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Land</surname>
<given-names>M. F.</given-names>
</name>
<name>
<surname>Tatler</surname>
<given-names>B. W.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>
<italic>Looking and Acting</italic>
.</article-title>
<publisher-loc>Oxford</publisher-loc>
:
<publisher-name>Oxford University Press</publisher-name>
</mixed-citation>
</ref>
<ref id="B88">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lappe</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Bremmer</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>van den Berg</surname>
<given-names>A. V.</given-names>
</name>
</person-group>
(
<year>1999</year>
).
<article-title>Perception of self-motion from visual flow.</article-title>
<source>
<italic>Trends Cogn. Sci.</italic>
</source>
<volume>3</volume>
<fpage>329</fpage>
<lpage>336</lpage>
<pub-id pub-id-type="doi">10.1016/S1364-6613(99)01364-9</pub-id>
<pub-id pub-id-type="pmid">10461195</pub-id>
</mixed-citation>
</ref>
<ref id="B89">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Larsen</surname>
<given-names>O. N.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Does the environment constrain avian sound localization?</article-title>
<source>
<italic>An. Acad. Bras. Cienc.</italic>
</source>
<volume>76</volume>
<fpage>267</fpage>
<lpage>273</lpage>
<pub-id pub-id-type="doi">10.1590/S0001-37652004000200013</pub-id>
<pub-id pub-id-type="pmid">15258638</pub-id>
</mixed-citation>
</ref>
<ref id="B90">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lederman</surname>
<given-names>S. J.</given-names>
</name>
<name>
<surname>Klatzky</surname>
<given-names>R. L.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Haptic perception: a tutorial.</article-title>
<source>
<italic>Percept. Psychophys.</italic>
</source>
<volume>71</volume>
<fpage>1439</fpage>
<lpage>1459</lpage>
<pub-id pub-id-type="doi">10.3758/APP.71.7.1439</pub-id>
</mixed-citation>
</ref>
<ref id="B91">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lee</surname>
<given-names>S. A.</given-names>
</name>
<name>
<surname>Spelke</surname>
<given-names>E. S.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Young children reorient by computing layout geometry, not by matching images of the environment.</article-title>
<source>
<italic>Psychon. Bull. Rev.</italic>
</source>
<volume>18</volume>
<fpage>192</fpage>
<lpage>198</lpage>
<pub-id pub-id-type="doi">10.3758/s13423-010-0035-z</pub-id>
<pub-id pub-id-type="pmid">21327347</pub-id>
</mixed-citation>
</ref>
<ref id="B92">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lissmann</surname>
<given-names>H. W.</given-names>
</name>
</person-group>
(
<year>1958</year>
).
<article-title>On the function and evolution of electric organs in fish.</article-title>
<source>
<italic>J. Exp. Biol.</italic>
</source>
<volume>35</volume>
<fpage>156</fpage>
<lpage>191</lpage>
</mixed-citation>
</ref>
<ref id="B93">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Lowe</surname>
<given-names>D. G.</given-names>
</name>
</person-group>
(
<year>1999</year>
).
<article-title>“Object recognition from local scale-invariant features,” in</article-title>
<source>
<italic>International Conference on Computer Vision</italic>
</source>
<publisher-loc>Corfu, Greece</publisher-loc>
<fpage>1150</fpage>
<lpage>1157</lpage>
</mixed-citation>
</ref>
<ref id="B94">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lowe</surname>
<given-names>D. G.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Distinctive image features from scale-invariant keypoints.</article-title>
<source>
<italic>Int. J. Comput. Vis.</italic>
</source>
<volume>60</volume>
<fpage>91</fpage>
<lpage>110</lpage>
<pub-id pub-id-type="doi">10.1023/B:VISI.0000029664.99615.94</pub-id>
</mixed-citation>
</ref>
<ref id="B95">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Marler</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Slabbekoorn</surname>
<given-names>H. W.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>
<italic>Nature’s Music</italic>
.</article-title>
<publisher-loc>San Diego</publisher-loc>
:
<publisher-name>Academic Press</publisher-name>
</mixed-citation>
</ref>
<ref id="B96">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Marr</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>1982</year>
).
<article-title>
<italic>Vision</italic>
.</article-title>
<publisher-loc>New York</publisher-loc>
:
<publisher-name>Freeman</publisher-name>
</mixed-citation>
</ref>
<ref id="B97">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Martin</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Fowlkes</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Malik</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Learning to detect natural image boundaries using local brightness, color, and texture cues.</article-title>
<source>
<italic>IEEE Trans. Pattern Anal. Mach. Intell.</italic>
</source>
<volume>26</volume>
<fpage>530</fpage>
<lpage>549</lpage>
<pub-id pub-id-type="doi">10.1109/TPAMI.2004.1273918</pub-id>
<pub-id pub-id-type="pmid">15460277</pub-id>
</mixed-citation>
</ref>
<ref id="B98">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Masland</surname>
<given-names>R. H.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>The neuronal organization of the retina.</article-title>
<source>
<italic>Neuron</italic>
</source>
<volume>76</volume>
<fpage>266</fpage>
<lpage>280</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuron.2012.10.002</pub-id>
<pub-id pub-id-type="pmid">23083731</pub-id>
</mixed-citation>
</ref>
<ref id="B99">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>McDermott</surname>
<given-names>J. H.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Psychophysics with junctions in real images.</article-title>
<source>
<italic>Perception</italic>
</source>
<volume>33</volume>
<fpage>1101</fpage>
<lpage>1127</lpage>
<pub-id pub-id-type="doi">10.1068/p5265</pub-id>
<pub-id pub-id-type="pmid">15560510</pub-id>
</mixed-citation>
</ref>
<ref id="B100">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>McDermott</surname>
<given-names>J. H.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>The cocktail party problem.</article-title>
<source>
<italic>Curr. Biol.</italic>
</source>
<volume>19</volume>
<fpage>R1024</fpage>
<lpage>R1027</lpage>
<pub-id pub-id-type="doi">10.1016/j.cub.2009.09.005</pub-id>
<pub-id pub-id-type="pmid">19948136</pub-id>
</mixed-citation>
</ref>
<ref id="B101">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Melcher</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Visual stability.</article-title>
<source>
<italic>Philos. Trans. R. Soc. Lond. B Biol. Sci.</italic>
</source>
<volume>366</volume>
<fpage>468</fpage>
<lpage>475</lpage>
<pub-id pub-id-type="doi">10.1098/rstb.2010.0277</pub-id>
<pub-id pub-id-type="pmid">21242136</pub-id>
</mixed-citation>
</ref>
<ref id="B102">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Melcher</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Colby</surname>
<given-names>C. L.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Trans-saccadic perception.</article-title>
<source>
<italic>Trends Cogn. Sci.</italic>
</source>
<volume>12</volume>
<fpage>466</fpage>
<lpage>473</lpage>
<pub-id pub-id-type="doi">10.1016/j.tics.2008.09.003</pub-id>
<pub-id pub-id-type="pmid">18951831</pub-id>
</mixed-citation>
</ref>
<ref id="B103">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Miller</surname>
<given-names>G. A.</given-names>
</name>
<name>
<surname>Licklider</surname>
<given-names>J. C. R.</given-names>
</name>
</person-group>
(
<year>1950</year>
).
<article-title>The intelligibility of interrupted speech.</article-title>
<source>
<italic>J. Acoust. Soc. Am.</italic>
</source>
<volume>22</volume>
<fpage>167</fpage>
<lpage>173</lpage>
<pub-id pub-id-type="doi">10.1121/1.1906584</pub-id>
</mixed-citation>
</ref>
<ref id="B104">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Miller</surname>
<given-names>L. A.</given-names>
</name>
<name>
<surname>Treat</surname>
<given-names>A. E.</given-names>
</name>
</person-group>
(
<year>1993</year>
).
<article-title>Field recordings of echolocation and social sounds from the gleaning bat
<italic>M. septentrionalis</italic>
.</article-title>
<source>
<italic>Bioacoustics</italic>
</source>
<volume>5</volume>
<fpage>67</fpage>
<lpage>87</lpage>
<pub-id pub-id-type="doi">10.1080/09524622.1993.9753230</pub-id>
</mixed-citation>
</ref>
<ref id="B105">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mogdans</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Ostwald</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Schnitzler</surname>
<given-names>H.-U.</given-names>
</name>
</person-group>
(
<year>1988</year>
).
<article-title>The role of pinna movement for the localization of vertical and horizontal wire obstacles in the greater horseshoe bat,
<italic>Rhinolophus ferrumequinum</italic>
.</article-title>
<source>
<italic>J. Acoust. Soc. Am.</italic>
</source>
<volume>84</volume>
<fpage>1676</fpage>
<lpage>1679</lpage>
<pub-id pub-id-type="doi">10.1121/1.397183</pub-id>
</mixed-citation>
</ref>
<ref id="B106">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Morton</surname>
<given-names>E.</given-names>
</name>
</person-group>
(
<year>1998a</year>
).
<article-title>Degradation and signal ranging in birds: memory matters.</article-title>
<source>
<italic>Behav. Ecol. Sociobiol.</italic>
</source>
<volume>42</volume>
<fpage>135</fpage>
<lpage>137</lpage>
<pub-id pub-id-type="doi">10.1007/s002650050421</pub-id>
</mixed-citation>
</ref>
<ref id="B107">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Morton</surname>
<given-names>E.</given-names>
</name>
</person-group>
(
<year>1998b</year>
).
<article-title>Ranging reconsidered – reply to Naguib and Wiley.</article-title>
<source>
<italic>Behav. Ecol. Sociobiol.</italic>
</source>
<volume>42</volume>
<fpage>147</fpage>
<lpage>148</lpage>
<pub-id pub-id-type="doi">10.1007/s002650050424</pub-id>
</mixed-citation>
</ref>
<ref id="B108">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Morton</surname>
<given-names>E. S.</given-names>
</name>
</person-group>
(
<year>1986</year>
).
<article-title>Predictions from the ranging hypothesis for the evolution of long-distance signals in birds.</article-title>
<source>
<italic>Behaviour</italic>
</source>
<volume>99</volume>
<fpage>65</fpage>
<lpage>86</lpage>
<pub-id pub-id-type="doi">10.1163/156853986X00414</pub-id>
</mixed-citation>
</ref>
<ref id="B109">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Morton</surname>
<given-names>E. S.</given-names>
</name>
<name>
<surname>Howlett</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Kopysh</surname>
<given-names>N. C.</given-names>
</name>
<name>
<surname>Chiver</surname>
<given-names>I.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Song ranging by incubating male blue-headed vireos: the importance of song representation in repertoires and implications for song delivery patterns and local/foreign dialect discrimination.</article-title>
<source>
<italic>J. Field Ornithol.</italic>
</source>
<volume>77</volume>
<fpage>291</fpage>
<lpage>301</lpage>
<pub-id pub-id-type="doi">10.1111/j.1557-9263.2006.00055.x</pub-id>
</mixed-citation>
</ref>
<ref id="B110">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Moss</surname>
<given-names>C. F.</given-names>
</name>
<name>
<surname>Surlykke</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>Auditory scene analysis by echolocation in bats.</article-title>
<source>
<italic>J. Acoust. Soc. Am.</italic>
</source>
<volume>110</volume>
<fpage>2207</fpage>
<lpage>2226</lpage>
<pub-id pub-id-type="doi">10.1121/1.1398051</pub-id>
<pub-id pub-id-type="pmid">11681397</pub-id>
</mixed-citation>
</ref>
<ref id="B111">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Moss</surname>
<given-names>C. F.</given-names>
</name>
<name>
<surname>Surlykke</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Probing the natural scene by echolocation in bats.</article-title>
<source>
<italic>Front. Behav. Neurosci.</italic>
</source>
<volume>4</volume>
:
<issue>33</issue>
</mixed-citation>
</ref>
<ref id="B112">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Murphy</surname>
<given-names>C. G.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Assessment of distance to potential mates by female barking treefrogs (
<italic>Hyla gratiosa</italic>
).</article-title>
<source>
<italic>J. Comp. Psychol.</italic>
</source>
<volume>122</volume>
<fpage>264</fpage>
<lpage>273</lpage>
<pub-id pub-id-type="doi">10.1037/0735-7036.122.3.264</pub-id>
<pub-id pub-id-type="pmid">18729654</pub-id>
</mixed-citation>
</ref>
<ref id="B113">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nagata</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Koyanagi</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Tsukamoto</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Saeki</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Isono</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Shichida</surname>
<given-names>Y.</given-names>
</name>
<etal></etal>
</person-group>
(
<year>2012</year>
).
<article-title>Depth perception from image defocus in a jumping spider.</article-title>
<source>
<italic>Science</italic>
</source>
<volume>335</volume>
<fpage>469</fpage>
<lpage>471</lpage>
<pub-id pub-id-type="doi">10.1126/science.1211667</pub-id>
<pub-id pub-id-type="pmid">22282813</pub-id>
</mixed-citation>
</ref>
<ref id="B114">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Naguib</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Kunc</surname>
<given-names>H. P.</given-names>
</name>
<name>
<surname>Sprau</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Roth</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Amrhein</surname>
<given-names>V.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Communication networks and spatial ecology in nightingales.</article-title>
<source>
<italic>Adv. Study Behav.</italic>
</source>
<volume>43</volume>
<fpage>239</fpage>
<lpage>271</lpage>
<pub-id pub-id-type="doi">10.1016/B978-0-12-380896-7.00005-8</pub-id>
</mixed-citation>
</ref>
<ref id="B115">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Naguib</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Wiley</surname>
<given-names>R. H.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>Estimating the distance to a source of sound: mechanisms and adaptations for long-range communication.</article-title>
<source>
<italic>Anim. Behav.</italic>
</source>
<volume>62</volume>
<fpage>825</fpage>
<lpage>837</lpage>
<pub-id pub-id-type="doi">10.1006/anbe.2001.1860</pub-id>
</mixed-citation>
</ref>
<ref id="B116">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Najemnik</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Geisler</surname>
<given-names>W. S.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Optimal eye movement strategies in visual search.</article-title>
<source>
<italic>Nature</italic>
</source>
<volume>434</volume>
<fpage>387</fpage>
<lpage>391</lpage>
<pub-id pub-id-type="doi">10.1038/nature03390</pub-id>
<pub-id pub-id-type="pmid">15772663</pub-id>
</mixed-citation>
</ref>
<ref id="B117">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Najemnik</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Geisler</surname>
<given-names>W. S.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Simple summation rule for optimal fixation selection in visual search.</article-title>
<source>
<italic>Vis. Res.</italic>
</source>
<volume>49</volume>
<fpage>1286</fpage>
<lpage>1294</lpage>
<pub-id pub-id-type="doi">10.1016/j.visres.2008.12.005</pub-id>
<pub-id pub-id-type="pmid">19138697</pub-id>
</mixed-citation>
</ref>
<ref id="B118">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nakayama</surname>
<given-names>K.</given-names>
</name>
</person-group>
(
<year>1994</year>
).
<article-title>James J. Gibson – an appreciation.</article-title>
<source>
<italic>Psychol. Rev.</italic>
</source>
<volume>101</volume>
<fpage>329</fpage>
<lpage>335</lpage>
<pub-id pub-id-type="doi">10.1037/0033-295X.101.2.329</pub-id>
</mixed-citation>
</ref>
<ref id="B119">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Nakayama</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>He</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Shimojo</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>1995</year>
).
<article-title>“Visual surface representation: a critical link between lower-level and higher-level vision,” in</article-title>
<source>
<italic>An Invitation to Cognitive Science: Visual Cognition</italic>
Vol. 2</source>
<role>eds</role>
<person-group person-group-type="editor">
<name>
<surname>Kosslyn</surname>
<given-names>S. M.</given-names>
</name>
<name>
<surname>Osherson</surname>
<given-names>D. N.</given-names>
</name>
</person-group>
(
<publisher-loc>Cambridge, MA</publisher-loc>
:
<publisher-name>MIT Press</publisher-name>
)
<fpage>1</fpage>
<lpage>70</lpage>
</mixed-citation>
</ref>
<ref id="B120">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nelson</surname>
<given-names>B. S.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>Duplex auditory distance assessment in a small passerine bird (
<italic>Pipilo erythrophthalmus</italic>
).</article-title>
<source>
<italic>Behav. Ecol. Sociobiol.</italic>
</source>
<volume>53</volume>
<fpage>42</fpage>
<lpage>50</lpage>
<pub-id pub-id-type="doi">10.1007/s00265-002-0546-3</pub-id>
</mixed-citation>
</ref>
<ref id="B121">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nelson</surname>
<given-names>B. S.</given-names>
</name>
<name>
<surname>Stoddard</surname>
<given-names>P. K.</given-names>
</name>
</person-group>
(
<year>1998</year>
).
<article-title>Accuracy of auditory distance and azimuth perception by a passerine bird in natural habitat.</article-title>
<source>
<italic>Anim. Behav.</italic>
</source>
<volume>56</volume>
<fpage>467</fpage>
<lpage>477</lpage>
<pub-id pub-id-type="doi">10.1006/anbe.1998.0781</pub-id>
<pub-id pub-id-type="pmid">9787038</pub-id>
</mixed-citation>
</ref>
<ref id="B122">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nelson</surname>
<given-names>B. S.</given-names>
</name>
<name>
<surname>Suthers</surname>
<given-names>R. A.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Sound localization in a small passerine bird: discrimination of azimuth as a function of head orientation and sound frequency.</article-title>
<source>
<italic>J. Exp. Biol.</italic>
</source>
<volume>207</volume>
<fpage>4121</fpage>
<lpage>4133</lpage>
<pub-id pub-id-type="doi">10.1242/jeb.01230</pub-id>
<pub-id pub-id-type="pmid">15498958</pub-id>
</mixed-citation>
</ref>
<ref id="B123">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nelson</surname>
<given-names>D. A.</given-names>
</name>
</person-group>
(
<year>1988</year>
).
<article-title>Feature weighting in species song recognition by the field sparrow (
<italic>Spizella pusilla</italic>
).</article-title>
<source>
<italic>Behaviour</italic>
</source>
<volume>106</volume>
<fpage>158</fpage>
<lpage>182</lpage>
<pub-id pub-id-type="doi">10.1163/156853988X00142</pub-id>
</mixed-citation>
</ref>
<ref id="B124">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nelson</surname>
<given-names>D. A.</given-names>
</name>
</person-group>
(
<year>1989</year>
).
<article-title>The importance of invariant and distinctive features in species recognition of bird song.</article-title>
<source>
<italic>Condor</italic>
</source>
<volume>91</volume>
<fpage>120</fpage>
<lpage>130</lpage>
<pub-id pub-id-type="doi">10.2307/1368155</pub-id>
</mixed-citation>
</ref>
<ref id="B125">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nelson</surname>
<given-names>M. E.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Electric fish.</article-title>
<source>
<italic>Curr. Biol.</italic>
</source>
<volume>21</volume>
<fpage>R528</fpage>
<lpage>R529</lpage>
<pub-id pub-id-type="doi">10.1016/j.cub.2011.03.045</pub-id>
<pub-id pub-id-type="pmid">21783026</pub-id>
</mixed-citation>
</ref>
<ref id="B126">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Neuhoff</surname>
<given-names>J. G.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>
<italic>Ecological Psychoacoustics</italic>
.</article-title>
<publisher-loc>Burlington</publisher-loc>
:
<publisher-name>Emerald Group Publishing Limited</publisher-name>
</mixed-citation>
</ref>
<ref id="B127">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nilsson</surname>
<given-names>D.-E.</given-names>
</name>
<name>
<surname>Gislén</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Coates</surname>
<given-names>M. M.</given-names>
</name>
<name>
<surname>Skogh</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Garm</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Advanced optics in a jellyfish eye.</article-title>
<source>
<italic>Nature</italic>
</source>
<volume>435</volume>
<fpage>201</fpage>
<lpage>205</lpage>
<pub-id pub-id-type="doi">10.1038/nature03484</pub-id>
<pub-id pub-id-type="pmid">15889091</pub-id>
</mixed-citation>
</ref>
<ref id="B128">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>O’Connor</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Garm</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Nilsson</surname>
<given-names>D.-E.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Structure and optics of the eyes of the box jellyfish
<italic>Chiropsella bronzie</italic>
.</article-title>
<source>
<italic>J. Comp. Physiol. A</italic>
</source>
<volume>195</volume>
<fpage>557</fpage>
<lpage>569</lpage>
<pub-id pub-id-type="doi">10.1007/s00359-009-0431-x</pub-id>
</mixed-citation>
</ref>
<ref id="B129">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Oliva</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Torralba</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>The role of context in object recognition.</article-title>
<source>
<italic>Trends Cogn. Sci.</italic>
</source>
<volume>11</volume>
<fpage>520</fpage>
<lpage>527</lpage>
<pub-id pub-id-type="doi">10.1016/j.tics.2007.09.009</pub-id>
<pub-id pub-id-type="pmid">18024143</pub-id>
</mixed-citation>
</ref>
<ref id="B130">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Palmer</surname>
<given-names>S. E.</given-names>
</name>
</person-group>
(
<year>1999</year>
).
<article-title>
<italic>Vision Science</italic>
.</article-title>
<publisher-loc>Cambridge, MA</publisher-loc>
:
<publisher-name>MIT Press</publisher-name>
</mixed-citation>
</ref>
<ref id="B131">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pizlo</surname>
<given-names>Z.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>Perception viewed as an inverse problem.</article-title>
<source>
<italic>Vis. Res.</italic>
</source>
<volume>41</volume>
<fpage>3145</fpage>
<lpage>3161</lpage>
<pub-id pub-id-type="doi">10.1016/S0042-6989(01)00173-0</pub-id>
<pub-id pub-id-type="pmid">11711140</pub-id>
</mixed-citation>
</ref>
<ref id="B132">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Poggio</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Koch</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>1985</year>
).
<article-title>Ill-posed problems in early vision: from computational theory to analogue networks.</article-title>
<source>
<italic>Proc. R. Soc. Lond. B Biol. Sci.</italic>
</source>
<volume>226</volume>
<fpage>303</fpage>
<lpage>323</lpage>
<pub-id pub-id-type="doi">10.1098/rspb.1985.0097</pub-id>
</mixed-citation>
</ref>
<ref id="B133">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pohl</surname>
<given-names>N. U.</given-names>
</name>
<name>
<surname>Slabbekoorn</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Klump</surname>
<given-names>G. M.</given-names>
</name>
<name>
<surname>Langemann</surname>
<given-names>U.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Effects of signal features and environmental noise on signal detection in the great tit,
<italic>Parus major</italic>
.</article-title>
<source>
<italic>Anim. Behav.</italic>
</source>
<volume>78</volume>
<fpage>1293</fpage>
<lpage>1300</lpage>
<pub-id pub-id-type="doi">10.1016/j.anbehav.2009.09.005</pub-id>
</mixed-citation>
</ref>
<ref id="B134">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Rabiner</surname>
<given-names>L. R.</given-names>
</name>
<name>
<surname>Juang</surname>
<given-names>B.-H.</given-names>
</name>
</person-group>
(
<year>1993</year>
).
<article-title>
<italic>Fundamentals of Speech Recognition</italic>
.</article-title>
<publisher-loc>Englewood Cliffs, NJ</publisher-loc>
:
<publisher-name>Prentice-Hall</publisher-name>
</mixed-citation>
</ref>
<ref id="B135">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Roberts</surname>
<given-names>L. G.</given-names>
</name>
</person-group>
(
<year>1965</year>
).
<article-title>“Machine perception of three-dimensional solids,” in</article-title>
<source>
<italic>Optical and Electro-Optical Information Processing</italic>
</source>
<role>ed.</role>
<person-group person-group-type="editor">
<name>
<surname>Tippett</surname>
<given-names>J. T.</given-names>
</name>
</person-group>
(
<publisher-loc>Cambridge, MA</publisher-loc>
:
<publisher-name>MIT Press</publisher-name>
)
</mixed-citation>
</ref>
<ref id="B136">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Schnitzler</surname>
<given-names>H.-U.</given-names>
</name>
<name>
<surname>Flieger</surname>
<given-names>E.</given-names>
</name>
</person-group>
(
<year>1983</year>
).
<article-title>Detection of oscillating target movements by echolocation in the greater horseshoe bat.</article-title>
<source>
<italic>J. Comp. Physiol.</italic>
</source>
<volume>153</volume>
<fpage>385</fpage>
<lpage>391</lpage>
<pub-id pub-id-type="doi">10.1007/BF00612592</pub-id>
</mixed-citation>
</ref>
<ref id="B137">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Shi</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Malik</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>2000</year>
).
<article-title>Normalized cuts and image segmentation.</article-title>
<source>
<italic>IEEE Trans. Pattern Anal. Mach. Intell.</italic>
</source>
<volume>22</volume>
<fpage>888</fpage>
<lpage>905</lpage>
<pub-id pub-id-type="doi">10.1109/34.868688</pub-id>
</mixed-citation>
</ref>
<ref id="B138">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Shinn-Cunningham</surname>
<given-names>B. G.</given-names>
</name>
<name>
<surname>Lee</surname>
<given-names>A. K. C.</given-names>
</name>
<name>
<surname>Oxenham</surname>
<given-names>A. J.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>A sound element gets lost in perceptual competition.</article-title>
<source>
<italic>Proc. Natl. Acad. Sci. U.S.A.</italic>
</source>
<volume>104</volume>
<fpage>12223</fpage>
<lpage>12227</lpage>
<pub-id pub-id-type="doi">10.1073/pnas.0704641104</pub-id>
<pub-id pub-id-type="pmid">17615235</pub-id>
</mixed-citation>
</ref>
<ref id="B139">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Shy</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Morton</surname>
<given-names>E.</given-names>
</name>
</person-group>
(
<year>1986</year>
).
<article-title>The role of distance, familiarity, and time of day in Carolina wrens’ responses to conspecific songs.</article-title>
<source>
<italic>Behav. Ecol. Sociobiol.</italic>
</source>
<volume>19</volume>
<fpage>393</fpage>
<lpage>400</lpage>
<pub-id pub-id-type="doi">10.1007/BF00300541</pub-id>
</mixed-citation>
</ref>
<ref id="B140">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Siemers</surname>
<given-names>B. M.</given-names>
</name>
<name>
<surname>Schnitzler</surname>
<given-names>H.-U.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Echolocation signals reflect niche differentiation in five sympatric congeneric bat species.</article-title>
<source>
<italic>Nature</italic>
</source>
<volume>429</volume>
<fpage>657</fpage>
<lpage>661</lpage>
<pub-id pub-id-type="doi">10.1038/nature02547</pub-id>
<pub-id pub-id-type="pmid">15190352</pub-id>
</mixed-citation>
</ref>
<ref id="B141">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sorjonen</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>1986</year>
).
<article-title>Factors affecting the structure of song and the singing behavior of some northern European passerine birds.</article-title>
<source>
<italic>Behaviour</italic>
</source>
<volume>98</volume>
<fpage>286</fpage>
<lpage>304</lpage>
<pub-id pub-id-type="doi">10.1163/156853986X01017</pub-id>
</mixed-citation>
</ref>
<ref id="B142">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Spitzer</surname>
<given-names>M. W.</given-names>
</name>
<name>
<surname>Bala</surname>
<given-names>A. D. S.</given-names>
</name>
<name>
<surname>Takahashi</surname>
<given-names>T. T.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>A neuronal correlate of the precedence effect is associated with spatial selectivity in the barn owl’s auditory midbrain.</article-title>
<source>
<italic>J. Neurophysiol.</italic>
</source>
<volume>92</volume>
<fpage>2051</fpage>
<lpage>2070</lpage>
<pub-id pub-id-type="doi">10.1152/jn.01235.2003</pub-id>
<pub-id pub-id-type="pmid">15381741</pub-id>
</mixed-citation>
</ref>
<ref id="B143">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Spitzer</surname>
<given-names>M. W.</given-names>
</name>
<name>
<surname>Takahashi</surname>
<given-names>T. T.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Sound localization by barn owls in a simulated echoic environment.</article-title>
<source>
<italic>J. Neurophysiol.</italic>
</source>
<volume>95</volume>
<fpage>3571</fpage>
<lpage>3584</lpage>
<pub-id pub-id-type="doi">10.1152/jn.00982.2005</pub-id>
<pub-id pub-id-type="pmid">16709722</pub-id>
</mixed-citation>
</ref>
<ref id="B144">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Surlykke</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Ghose</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Moss</surname>
<given-names>C. F.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Acoustic scanning of natural scenes by echolocation in the big brown bat,
<italic>Eptesicus fuscus</italic>
.</article-title>
<source>
<italic>J. Exp. Biol.</italic>
</source>
<volume>212</volume>
<issue>Pt 7</issue>
<fpage>1011</fpage>
<lpage>1020</lpage>
<pub-id pub-id-type="doi">10.1242/jeb.024620</pub-id>
</mixed-citation>
</ref>
<ref id="B145">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tarsitano</surname>
<given-names>M. S.</given-names>
</name>
<name>
<surname>Jackson</surname>
<given-names>R. R.</given-names>
</name>
</person-group>
(
<year>1994</year>
).
<article-title>Jumping spiders make predatory detours requiring movement away from prey.</article-title>
<source>
<italic>Behaviour</italic>
</source>
<volume>131</volume>
<fpage>65</fpage>
<lpage>73</lpage>
<pub-id pub-id-type="doi">10.1163/156853994X00217</pub-id>
</mixed-citation>
</ref>
<ref id="B146">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tatler</surname>
<given-names>B. W.</given-names>
</name>
<name>
<surname>Gilchrist</surname>
<given-names>I. D.</given-names>
</name>
<name>
<surname>Land</surname>
<given-names>M. F.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Visual memory for objects in natural scenes: from fixations to object files.</article-title>
<source>
<italic>Q. J. Exp. Psychol.</italic>
</source>
<volume>58A</volume>
<fpage>931</fpage>
<lpage>960</lpage>
<pub-id pub-id-type="doi">10.1080/02724980443000430</pub-id>
</mixed-citation>
</ref>
<ref id="B147">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tatler</surname>
<given-names>B. W.</given-names>
</name>
<name>
<surname>Land</surname>
<given-names>M. F.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Vision and the representation of the surroundings in spatial memory.</article-title>
<source>
<italic>Philos. Trans. R. Soc. Lond. B Biol. Sci.</italic>
</source>
<volume>366</volume>
<fpage>596</fpage>
<lpage>610</lpage>
<pub-id pub-id-type="doi">10.1098/rstb.2010.0188</pub-id>
<pub-id pub-id-type="pmid">21242146</pub-id>
</mixed-citation>
</ref>
<ref id="B148">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Thrun</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Burgard</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Fox</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>
<italic>Probabilistic Robotics</italic>
.</article-title>
<publisher-loc>Cambridge, MA</publisher-loc>
:
<publisher-name>MIT Press</publisher-name>
</mixed-citation>
</ref>
<ref id="B149">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tommasi</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Chiandetti</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Pecchia</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Sovrano</surname>
<given-names>V. A.</given-names>
</name>
<name>
<surname>Vallortigara</surname>
<given-names>G.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>From natural geometry to spatial cognition.</article-title>
<source>
<italic>Neurosci. Biobehav. Rev.</italic>
</source>
<volume>36</volume>
<fpage>799</fpage>
<lpage>824</lpage>
<pub-id pub-id-type="doi">10.1016/j.neubiorev.2011.12.007</pub-id>
<pub-id pub-id-type="pmid">22206900</pub-id>
</mixed-citation>
</ref>
<ref id="B150">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tsoar</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Nathan</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Bartan</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Vyssotski</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Dell’Omo</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Ulanovsky</surname>
<given-names>N.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Large-scale navigational map in a mammal.</article-title>
<source>
<italic>Proc. Natl. Acad. Sci. U.S.A.</italic>
</source>
<volume>108</volume>
<fpage>E718</fpage>
<lpage>E724</lpage>
<pub-id pub-id-type="doi">10.1073/pnas.1107365108</pub-id>
<pub-id pub-id-type="pmid">21844350</pub-id>
</mixed-citation>
</ref>
<ref id="B151">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tu</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Zhu</surname>
<given-names>S.-C.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>Image segmentation by data-driven Markov chain Monte Carlo.</article-title>
<source>
<italic>IEEE Trans. Pattern Anal. Mach. Intell.</italic>
</source>
<volume>24</volume>
<fpage>657</fpage>
<lpage>673</lpage>
<pub-id pub-id-type="doi">10.1109/34.1000239</pub-id>
</mixed-citation>
</ref>
<ref id="B152">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ulanovsky</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Fenton</surname>
<given-names>M. B.</given-names>
</name>
<name>
<surname>Tsoar</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Korine</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Dynamics of jamming avoidance in echolocating bats.</article-title>
<source>
<italic>Proc. R. Soc. Lond. B Biol. Sci.</italic>
</source>
<volume>271</volume>
<fpage>1467</fpage>
<lpage>1475</lpage>
<pub-id pub-id-type="doi">10.1098/rspb.2004.2750</pub-id>
</mixed-citation>
</ref>
<ref id="B153">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>von der Emde</surname>
<given-names>G.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Distance and shape: perception of the 3-dimensional world by weakly electric fish.</article-title>
<source>
<italic>J. Physiol. Paris</italic>
</source>
<volume>98</volume>
<fpage>67</fpage>
<lpage>80</lpage>
<pub-id pub-id-type="doi">10.1016/j.jphysparis.2004.03.013</pub-id>
<pub-id pub-id-type="pmid">15477023</pub-id>
</mixed-citation>
</ref>
<ref id="B154">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>von der Emde</surname>
<given-names>G.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Non-visual environmental imaging and object detection through active electrolocation in weakly electric fish.</article-title>
<source>
<italic>J. Comp. Physiol. A</italic>
</source>
<volume>192</volume>
<fpage>601</fpage>
<lpage>612</lpage>
<pub-id pub-id-type="doi">10.1007/s00359-006-0096-7</pub-id>
</mixed-citation>
</ref>
<ref id="B155">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>von der Emde</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Behr</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Bouton</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Engelmann</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Fetz</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Folde</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>3-dimensional scene perception during active electrolocation in a weakly electric pulse fish.</article-title>
<source>
<italic>Front. Behav. Neurosci.</italic>
</source>
<volume>4</volume>
:
<issue>26</issue>
</mixed-citation>
</ref>
<ref id="B156">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>von der Emde</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Menne</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>1989</year>
).
<article-title>Discrimination of insect wingbeat-frequencies by the bat
<italic>Rhinolophus ferrumequinum</italic>
.</article-title>
<source>
<italic>J. Comp. Physiol. A</italic>
</source>
<volume>164</volume>
<fpage>663</fpage>
<lpage>671</lpage>
<pub-id pub-id-type="doi">10.1007/BF00614509</pub-id>
</mixed-citation>
</ref>
<ref id="B157">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>von der Emde</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Schnitzler</surname>
<given-names>H.-U.</given-names>
</name>
</person-group>
(
<year>1986</year>
).
<article-title>Fluttering target detection in hipposiderid bats.</article-title>
<source>
<italic>J. Comp. Physiol. A</italic>
</source>
<volume>159</volume>
<fpage>765</fpage>
<lpage>772</lpage>
<pub-id pub-id-type="doi">10.1007/BF00603730</pub-id>
</mixed-citation>
</ref>
<ref id="B158">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>von der Emde</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Schnitzler</surname>
<given-names>H.-U.</given-names>
</name>
</person-group>
(
<year>1990</year>
).
<article-title>Classification of insects by echolocating greater horseshoe bats.</article-title>
<source>
<italic>J. Comp. Physiol. A</italic>
</source>
<volume>167</volume>
<fpage>423</fpage>
<lpage>430</lpage>
</mixed-citation>
</ref>
<ref id="B159">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>von der Emde</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Schwarz</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>Imaging of objects through active electrolocation in
<italic>Gnathonemus petersii</italic>
.</article-title>
<source>
<italic>J. Physiol. Paris</italic>
</source>
<volume>96</volume>
<fpage>431</fpage>
<lpage>444</lpage>
<pub-id pub-id-type="doi">10.1016/S0928-4257(03)00021-4</pub-id>
<pub-id pub-id-type="pmid">14692491</pub-id>
</mixed-citation>
</ref>
<ref id="B160">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>von Uexküll</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>1926</year>
).
<article-title>
<italic>Theoretical Biology</italic>
.</article-title>
<publisher-loc>London</publisher-loc>
:
<publisher-name>Kegan Paul</publisher-name>
</mixed-citation>
</ref>
<ref id="B161">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Wade</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Tatler</surname>
<given-names>B. W.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>
<italic>The Moving Tablet and the Eye: The Origins of Modern Eye Movement Research</italic>
.</article-title>
<publisher-loc>New York</publisher-loc>
:
<publisher-name>Oxford University Press</publisher-name>
</mixed-citation>
</ref>
<ref id="B162">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Waltz</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>1975</year>
).
<article-title>“Understanding line drawings of scenes with shadows,” in</article-title>
<source>
<italic>The Psychology of Computer Vision</italic>
</source>
<role>ed.</role>
<person-group person-group-type="editor">
<name>
<surname>Winston</surname>
<given-names>P. H.</given-names>
</name>
</person-group>
(
<publisher-loc>New York</publisher-loc>
:
<publisher-name>McGraw-Hill</publisher-name>
)
<fpage>19</fpage>
<lpage>92</lpage>
</mixed-citation>
</ref>
<ref id="B163">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wiley</surname>
<given-names>R. H.</given-names>
</name>
</person-group>
(
<year>1998</year>
).
<article-title>Ranging reconsidered.</article-title>
<source>
<italic>Behav. Ecol. Sociobiol.</italic>
</source>
<volume>42</volume>
<fpage>143</fpage>
<lpage>146</lpage>
<pub-id pub-id-type="doi">10.1007/s002650050423</pub-id>
</mixed-citation>
</ref>
<ref id="B164">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wiley</surname>
<given-names>R. H.</given-names>
</name>
</person-group>
(
<year>2000</year>
).
<article-title>A new sense of the complexities of bird song.</article-title>
<source>
<italic>Auk</italic>
</source>
<volume>117</volume>
<fpage>861</fpage>
<lpage>868</lpage>
<pub-id pub-id-type="doi">10.2307/4089626</pub-id>
</mixed-citation>
</ref>
<ref id="B165">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Wiley</surname>
<given-names>R. H.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>“Signal transmission in natural environments,” in</article-title>
<source>
<italic>New Encyclopedia of Neuroscience</italic>
</source>
<role>ed.</role>
<person-group person-group-type="editor">
<name>
<surname>Squire</surname>
<given-names>L.</given-names>
</name>
</person-group>
(
<publisher-loc>Oxford</publisher-loc>
:
<publisher-name>Academic Press</publisher-name>
)
<fpage>827</fpage>
<lpage>832</lpage>
</mixed-citation>
</ref>
<ref id="B166">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wisniewski</surname>
<given-names>A. B.</given-names>
</name>
<name>
<surname>Hulse</surname>
<given-names>S. H.</given-names>
</name>
</person-group>
(
<year>1997</year>
).
<article-title>Auditory scene analysis in European starlings (
<italic>Sturnus vulgaris</italic>
): discrimination of song segments, their segregation from multiple and reversed conspecific songs, and evidence for conspecific song categorization.</article-title>
<source>
<italic>J. Comp. Psychol.</italic>
</source>
<volume>111</volume>
<fpage>337</fpage>
<lpage>350</lpage>
<pub-id pub-id-type="doi">10.1037/0735-7036.111.4.337</pub-id>
</mixed-citation>
</ref>
<ref id="B167">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wolbers</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Hegarty</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>What determines our navigational abilities?</article-title>
<source>
<italic>Trends Cogn. Sci.</italic>
</source>
<volume>14</volume>
<fpage>138</fpage>
<lpage>146</lpage>
<pub-id pub-id-type="doi">10.1016/j.tics.2010.01.001</pub-id>
<pub-id pub-id-type="pmid">20138795</pub-id>
</mixed-citation>
</ref>
<ref id="B168">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wolbers</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Hegarty</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Büchel</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Loomis</surname>
<given-names>J. M.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Spatial updating: how the brain keeps track of changing object locations during observer motion.</article-title>
<source>
<italic>Nat. Neurosci.</italic>
</source>
<volume>11</volume>
<fpage>1223</fpage>
<lpage>1230</lpage>
<pub-id pub-id-type="doi">10.1038/nn.2189</pub-id>
<pub-id pub-id-type="pmid">18776895</pub-id>
</mixed-citation>
</ref>
<ref id="B169">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wolbers</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Klatzky</surname>
<given-names>R. L.</given-names>
</name>
<name>
<surname>Loomis</surname>
<given-names>J. M.</given-names>
</name>
<name>
<surname>Wutte</surname>
<given-names>M. G.</given-names>
</name>
<name>
<surname>Giudice</surname>
<given-names>N. A.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Modality-independent coding of spatial layout in the human brain.</article-title>
<source>
<italic>Curr. Biol.</italic>
</source>
<volume>21</volume>
<fpage>984</fpage>
<lpage>989</lpage>
<pub-id pub-id-type="doi">10.1016/j.cub.2011.04.038</pub-id>
<pub-id pub-id-type="pmid">21620708</pub-id>
</mixed-citation>
</ref>
<ref id="B170">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wong</surname>
<given-names>R. Y.</given-names>
</name>
<name>
<surname>Hopkins</surname>
<given-names>C. D.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Electrical and behavioral courtship displays in the mormyrid fish
<italic>Brienomyrus brachyistius</italic>
.</article-title>
<source>
<italic>J. Exp. Biol.</italic>
</source>
<volume>210</volume>
<fpage>2244</fpage>
<lpage>2252</lpage>
<pub-id pub-id-type="doi">10.1242/jeb.003509</pub-id>
<pub-id pub-id-type="pmid">17575030</pub-id>
</mixed-citation>
</ref>
<ref id="B171">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Yuille</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Kersten</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Vision as Bayesian inference: analysis by synthesis?</article-title>
<source>
<italic>Trends Cogn. Sci.</italic>
</source>
<volume>10</volume>
<fpage>301</fpage>
<lpage>308</lpage>
<pub-id pub-id-type="doi">10.1016/j.tics.2006.05.002</pub-id>
<pub-id pub-id-type="pmid">16784882</pub-id>
</mixed-citation>
</ref>
<ref id="B172">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zokoll</surname>
<given-names>M. A.</given-names>
</name>
<name>
<surname>Klump</surname>
<given-names>G. M.</given-names>
</name>
<name>
<surname>Langemann</surname>
<given-names>U.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Auditory short-term memory persistence for tonal signals in a songbird.</article-title>
<source>
<italic>J. Acoust. Soc. Am.</italic>
</source>
<volume>121</volume>
<fpage>2842</fpage>
<lpage>2851</lpage>
<pub-id pub-id-type="doi">10.1121/1.2713721</pub-id>
<pub-id pub-id-type="pmid">17550183</pub-id>
</mixed-citation>
</ref>
<ref id="B173">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zokoll</surname>
<given-names>M. A.</given-names>
</name>
<name>
<surname>Klump</surname>
<given-names>G. M.</given-names>
</name>
<name>
<surname>Langemann</surname>
<given-names>U.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Auditory memory for temporal characteristics of sound.</article-title>
<source>
<italic>J. Comp. Physiol. A</italic>
</source>
<volume>194</volume>
<fpage>457</fpage>
<lpage>467</lpage>
<pub-id pub-id-type="doi">10.1007/s00359-008-0318-2</pub-id>
</mixed-citation>
</ref>
</ref-list>
</back>
</pmc>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Pmc/Curation
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 002718 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd -nk 002718 | SxmlIndent | more
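If Dilib is not installed, the exported record can also be inspected with standard XML tools. The sketch below is illustrative and not part of Dilib; it assumes the record has first been saved to a local file record.xml with the HfdSelect command above, and that xmllint (from libxml2), grep, and sed are available:

HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 002718 > record.xml

# count the bibliographic references in the record
xmllint --xpath 'count(//ref)' record.xml

# list the DOIs of the cited works
grep -o '<pub-id pub-id-type="doi">[^<]*' record.xml | sed 's/.*>//'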

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Pmc
   |étape=   Curation
   |type=    RBID
   |clé=     PMC:3978336
   |texte=   Scene analysis in the natural environment
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Curation/RBID.i   -Sk "pubmed:24744740" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024