Exploration server on haptic devices

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information has therefore not been validated.

Auditory object salience: human cortical processing of non-biological action sounds and their acoustic signal attributes

Internal identifier: 002026 (Ncbi/Merge); previous: 002025; next: 002027


Authors: James W. Lewis [United States]; William J. Talkington [United States]; Katherine C. Tallaksen [United States]; Chris A. Frum [United States]

Source:

RBID: PMC:3348722

Abstract

Whether viewed or heard, an object in action can be segmented as a distinct salient event based on a number of different sensory cues. In the visual system, several low-level attributes of an image are processed along parallel hierarchies, involving intermediate stages wherein gross-level object form and/or motion features are extracted prior to stages that show greater specificity for different object categories (e.g., people, buildings, or tools). In the auditory system, though relying on a rather different set of low-level signal attributes, meaningful real-world acoustic events and “auditory objects” can also be readily distinguished from background scenes. However, the nature of the acoustic signal attributes or gross-level perceptual features that may be explicitly processed along intermediate cortical processing stages remains poorly understood. Examining mechanical and environmental action sounds, representing two distinct non-biological categories of action sources, we had participants assess the degree to which each sound was perceived as object-like versus scene-like. We re-analyzed data from two of our earlier functional magnetic resonance imaging (fMRI) task paradigms (Engel et al., 2009) and found that scene-like action sounds preferentially led to activation along several midline cortical structures, but with strong dependence on listening task demands. In contrast, bilateral foci along the superior temporal gyri (STG) showed parametrically increasing activation to action sounds rated as more “object-like,” independent of sound category or task demands. Moreover, these STG regions also showed parametric sensitivity to spectral structure variations (SSVs) of the action sounds—a quantitative measure of change in entropy of the acoustic signals over time—and the right STG additionally showed parametric sensitivity to measures of mean entropy and harmonic content of the environmental sounds.
Analogous to the visual system, intermediate stages of the auditory system appear to process or extract a number of quantifiable low-order signal attributes that are characteristic of action events perceived as being object-like, representing stages that may begin to dissociate different perceptual dimensions and categories of everyday, real-world action sounds.
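The abstract refers to two quantitative acoustic measures: mean entropy and spectral structure variation (change in entropy over time). The paper's exact computation is not given here, so the sketch below is only an illustrative proxy: per-frame Shannon entropy of the normalized power spectrum, with the mean across frames as "mean entropy" and the standard deviation across frames as a crude index of entropy change over time. Frame length, hop size, and the use of a Hann window are all assumptions.

```python
import numpy as np

def spectral_entropy_series(signal, frame_len=1024, hop=512):
    """Shannon entropy (bits) of the normalized power spectrum, per frame."""
    window = np.hanning(frame_len)
    entropies = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len] * window
        power = np.abs(np.fft.rfft(frame)) ** 2
        total = power.sum()
        if total == 0:
            entropies.append(0.0)  # silent frame: define entropy as 0
            continue
        p = power / total          # treat spectrum as a probability distribution
        p = p[p > 0]               # avoid log(0)
        entropies.append(float(-(p * np.log2(p)).sum()))
    return np.array(entropies)

def mean_entropy(entropies):
    """Proxy for the paper's 'mean entropy' measure."""
    return float(entropies.mean())

def spectral_structure_variation(entropies):
    """Proxy for SSV: variability of spectral entropy across time frames."""
    return float(entropies.std())
```

As a sanity check, a pure tone (energy concentrated in a few spectral bins) should yield lower mean entropy than broadband noise (energy spread across all bins), which matches the intuition that harmonic, structured "object-like" sounds are lower-entropy than diffuse "scene-like" textures.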


URL:
DOI: 10.3389/fnsys.2012.00027
PubMed: 22582038
PubMed Central: 3348722

Links to previous steps (curation, corpus...)


Links to Exploration step

PMC:3348722

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Auditory object salience: human cortical processing of non-biological action sounds and their acoustic signal attributes</title>
<author>
<name sortKey="Lewis, James W" sort="Lewis, James W" uniqKey="Lewis J" first="James W." last="Lewis">James W. Lewis</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Center for Neuroscience, West Virginia University, Morgantown</institution>
<country>WV, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>Center for Advanced Imaging, West Virginia University, Morgantown</institution>
<country>WV, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff3">
<institution>Department of Physiology and Pharmacology, West Virginia University, Morgantown</institution>
<country>WV, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Talkington, William J" sort="Talkington, William J" uniqKey="Talkington W" first="William J." last="Talkington">William J. Talkington</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Center for Neuroscience, West Virginia University, Morgantown</institution>
<country>WV, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>Center for Advanced Imaging, West Virginia University, Morgantown</institution>
<country>WV, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff3">
<institution>Department of Physiology and Pharmacology, West Virginia University, Morgantown</institution>
<country>WV, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Tallaksen, Katherine C" sort="Tallaksen, Katherine C" uniqKey="Tallaksen K" first="Katherine C." last="Tallaksen">Katherine C. Tallaksen</name>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>Center for Advanced Imaging, West Virginia University, Morgantown</institution>
<country>WV, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff4">
<institution>Department of Radiology, West Virginia University, Morgantown</institution>
<country>WV, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Frum, Chris A" sort="Frum, Chris A" uniqKey="Frum C" first="Chris A." last="Frum">Chris A. Frum</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Center for Neuroscience, West Virginia University, Morgantown</institution>
<country>WV, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>Center for Advanced Imaging, West Virginia University, Morgantown</institution>
<country>WV, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff3">
<institution>Department of Physiology and Pharmacology, West Virginia University, Morgantown</institution>
<country>WV, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">22582038</idno>
<idno type="pmc">3348722</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3348722</idno>
<idno type="RBID">PMC:3348722</idno>
<idno type="doi">10.3389/fnsys.2012.00027</idno>
<date when="2012">2012</date>
<idno type="wicri:Area/Pmc/Corpus">001F58</idno>
<idno type="wicri:Area/Pmc/Curation">001F58</idno>
<idno type="wicri:Area/Pmc/Checkpoint">001847</idno>
<idno type="wicri:Area/Ncbi/Merge">002026</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Auditory object salience: human cortical processing of non-biological action sounds and their acoustic signal attributes</title>
<author>
<name sortKey="Lewis, James W" sort="Lewis, James W" uniqKey="Lewis J" first="James W." last="Lewis">James W. Lewis</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Center for Neuroscience, West Virginia University, Morgantown</institution>
<country>WV, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>Center for Advanced Imaging, West Virginia University, Morgantown</institution>
<country>WV, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff3">
<institution>Department of Physiology and Pharmacology, West Virginia University, Morgantown</institution>
<country>WV, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Talkington, William J" sort="Talkington, William J" uniqKey="Talkington W" first="William J." last="Talkington">William J. Talkington</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Center for Neuroscience, West Virginia University, Morgantown</institution>
<country>WV, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>Center for Advanced Imaging, West Virginia University, Morgantown</institution>
<country>WV, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff3">
<institution>Department of Physiology and Pharmacology, West Virginia University, Morgantown</institution>
<country>WV, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Tallaksen, Katherine C" sort="Tallaksen, Katherine C" uniqKey="Tallaksen K" first="Katherine C." last="Tallaksen">Katherine C. Tallaksen</name>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>Center for Advanced Imaging, West Virginia University, Morgantown</institution>
<country>WV, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff4">
<institution>Department of Radiology, West Virginia University, Morgantown</institution>
<country>WV, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Frum, Chris A" sort="Frum, Chris A" uniqKey="Frum C" first="Chris A." last="Frum">Chris A. Frum</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Center for Neuroscience, West Virginia University, Morgantown</institution>
<country>WV, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>Center for Advanced Imaging, West Virginia University, Morgantown</institution>
<country>WV, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff3">
<institution>Department of Physiology and Pharmacology, West Virginia University, Morgantown</institution>
<country>WV, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Frontiers in Systems Neuroscience</title>
<idno type="eISSN">1662-5137</idno>
<imprint>
<date when="2012">2012</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>Whether viewed or heard, an object in action can be segmented as a distinct salient event based on a number of different sensory cues. In the visual system, several low-level attributes of an image are processed along parallel hierarchies, involving intermediate stages wherein gross-level object form and/or motion features are extracted prior to stages that show greater specificity for different object categories (e.g., people, buildings, or tools). In the auditory system, though relying on a rather different set of low-level signal attributes, meaningful real-world acoustic events and “auditory objects” can also be readily distinguished from background scenes. However, the nature of the acoustic signal attributes or gross-level perceptual features that may be explicitly processed along intermediate cortical processing stages remain poorly understood. Examining mechanical and environmental action sounds, representing two distinct non-biological categories of action sources, we had participants assess the degree to which each sound was perceived as object-like versus scene-like. We re-analyzed data from two of our earlier functional magnetic resonance imaging (fMRI) task paradigms (Engel et al.,
<xref rid="B25" ref-type="bibr">2009</xref>
) and found that scene-like action sounds preferentially led to activation along several midline cortical structures, but with strong dependence on listening task demands. In contrast, bilateral foci along the superior temporal gyri (STG) showed parametrically increasing activation to action sounds rated as more “object-like,” independent of sound category or task demands. Moreover, these STG regions also showed parametric sensitivity to spectral structure variations (SSVs) of the action sounds—a quantitative measure of change in entropy of the acoustic signals over time—and the right STG additionally showed parametric sensitivity to measures of mean entropy and harmonic content of the environmental sounds. Analogous to the visual system, intermediate stages of the auditory system appear to process or extract a number of quantifiable low-order signal attributes that are characteristic of action events perceived as being object-like, representing stages that may begin to dissociate different perceptual dimensions and categories of every-day, real-world action sounds.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Aglioti, S M" uniqKey="Aglioti S">S. M. Aglioti</name>
</author>
<author>
<name sortKey="Cesari, P" uniqKey="Cesari P">P. Cesari</name>
</author>
<author>
<name sortKey="Romani, M" uniqKey="Romani M">M. Romani</name>
</author>
<author>
<name sortKey="Urgesi, C" uniqKey="Urgesi C">C. Urgesi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Allison, T" uniqKey="Allison T">T. Allison</name>
</author>
<author>
<name sortKey="Mccarthy, G" uniqKey="Mccarthy G">G. McCarthy</name>
</author>
<author>
<name sortKey="Nobre, A" uniqKey="Nobre A">A. Nobre</name>
</author>
<author>
<name sortKey="Puce, A" uniqKey="Puce A">A. Puce</name>
</author>
<author>
<name sortKey="Belger, A" uniqKey="Belger A">A. Belger</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Antal, T" uniqKey="Antal T">T. Antal</name>
</author>
<author>
<name sortKey="Droz, M" uniqKey="Droz M">M. Droz</name>
</author>
<author>
<name sortKey="Gyorgyi, G" uniqKey="Gyorgyi G">G. Gyorgyi</name>
</author>
<author>
<name sortKey="Racz, Z" uniqKey="Racz Z">Z. Racz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Attias, H" uniqKey="Attias H">H. Attias</name>
</author>
<author>
<name sortKey="Schreiner, C E" uniqKey="Schreiner C">C. E. Schreiner</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Aziz Zadeh, L" uniqKey="Aziz Zadeh L">L. Aziz-Zadeh</name>
</author>
<author>
<name sortKey="Iacoboni, M" uniqKey="Iacoboni M">M. Iacoboni</name>
</author>
<author>
<name sortKey="Zaidel, E" uniqKey="Zaidel E">E. Zaidel</name>
</author>
<author>
<name sortKey="Wilson, S" uniqKey="Wilson S">S. Wilson</name>
</author>
<author>
<name sortKey="Mazziotta, J" uniqKey="Mazziotta J">J. Mazziotta</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Barsalou, L W" uniqKey="Barsalou L">L. W. Barsalou</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Baumgart, F" uniqKey="Baumgart F">F. Baumgart</name>
</author>
<author>
<name sortKey="Gaschler Markefski, B" uniqKey="Gaschler Markefski B">B. Gaschler-Markefski</name>
</author>
<author>
<name sortKey="Woldorff, M G" uniqKey="Woldorff M">M. G. Woldorff</name>
</author>
<author>
<name sortKey="Heinze, H J" uniqKey="Heinze H">H-J. Heinze</name>
</author>
<author>
<name sortKey="Scheich, H" uniqKey="Scheich H">H. Scheich</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Beauchamp, M" uniqKey="Beauchamp M">M. Beauchamp</name>
</author>
<author>
<name sortKey="Lee, K" uniqKey="Lee K">K. Lee</name>
</author>
<author>
<name sortKey="Haxby, J" uniqKey="Haxby J">J. Haxby</name>
</author>
<author>
<name sortKey="Martin, A" uniqKey="Martin A">A. Martin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Belin, P" uniqKey="Belin P">P. Belin</name>
</author>
<author>
<name sortKey="Zatorre, R J" uniqKey="Zatorre R">R. J. Zatorre</name>
</author>
<author>
<name sortKey="Hoge, R" uniqKey="Hoge R">R. Hoge</name>
</author>
<author>
<name sortKey="Evans, A C" uniqKey="Evans A">A. C. Evans</name>
</author>
<author>
<name sortKey="Pike, B" uniqKey="Pike B">B. Pike</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bidet Caulet, A" uniqKey="Bidet Caulet A">A. Bidet-Caulet</name>
</author>
<author>
<name sortKey="Voisin, J" uniqKey="Voisin J">J. Voisin</name>
</author>
<author>
<name sortKey="Bertrand, O" uniqKey="Bertrand O">O. Bertrand</name>
</author>
<author>
<name sortKey="Fonlupt, P" uniqKey="Fonlupt P">P. Fonlupt</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bregman, A S" uniqKey="Bregman A">A. S. Bregman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Caramazza, A" uniqKey="Caramazza A">A. Caramazza</name>
</author>
<author>
<name sortKey="Mahon, B Z" uniqKey="Mahon B">B. Z. Mahon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cavanagh, P" uniqKey="Cavanagh P">P. Cavanagh</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chao, L L" uniqKey="Chao L">L. L. Chao</name>
</author>
<author>
<name sortKey="Martin, A" uniqKey="Martin A">A. Martin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chubb, C" uniqKey="Chubb C">C. Chubb</name>
</author>
<author>
<name sortKey="Sperling, G" uniqKey="Sperling G">G. Sperling</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cox, R W" uniqKey="Cox R">R. W. Cox</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cronbach, L J" uniqKey="Cronbach L">L. J. Cronbach</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cusack, R" uniqKey="Cusack R">R. Cusack</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cusack, R" uniqKey="Cusack R">R. Cusack</name>
</author>
<author>
<name sortKey="Carlyon, R P" uniqKey="Carlyon R">R. P. Carlyon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="De Lucia, M" uniqKey="De Lucia M">M. de Lucia</name>
</author>
<author>
<name sortKey="Camen, C" uniqKey="Camen C">C. Camen</name>
</author>
<author>
<name sortKey="Clarke, S" uniqKey="Clarke S">S. Clarke</name>
</author>
<author>
<name sortKey="Murray, M M" uniqKey="Murray M">M. M. Murray</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Doniger, G M" uniqKey="Doniger G">G. M. Doniger</name>
</author>
<author>
<name sortKey="Foxe, J J" uniqKey="Foxe J">J. J. Foxe</name>
</author>
<author>
<name sortKey="Murray, M M" uniqKey="Murray M">M. M. Murray</name>
</author>
<author>
<name sortKey="Higgins, B A" uniqKey="Higgins B">B. A. Higgins</name>
</author>
<author>
<name sortKey="Snodgrass, J G" uniqKey="Snodgrass J">J. G. Snodgrass</name>
</author>
<author>
<name sortKey="Schroeder, C E" uniqKey="Schroeder C">C. E. Schroeder</name>
</author>
<author>
<name sortKey="Javitt, D C" uniqKey="Javitt D">D. C. Javitt</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Downing, P E" uniqKey="Downing P">P. E. Downing</name>
</author>
<author>
<name sortKey="Jiang, Y" uniqKey="Jiang Y">Y. Jiang</name>
</author>
<author>
<name sortKey="Shuman, M" uniqKey="Shuman M">M. Shuman</name>
</author>
<author>
<name sortKey="Kanwisher, N" uniqKey="Kanwisher N">N. Kanwisher</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dykstra, A R" uniqKey="Dykstra A">A. R. Dykstra</name>
</author>
<author>
<name sortKey="Halgren, E" uniqKey="Halgren E">E. Halgren</name>
</author>
<author>
<name sortKey="Thesen, T" uniqKey="Thesen T">T. Thesen</name>
</author>
<author>
<name sortKey="Carlson, C E" uniqKey="Carlson C">C. E. Carlson</name>
</author>
<author>
<name sortKey="Doyle, W" uniqKey="Doyle W">W. Doyle</name>
</author>
<author>
<name sortKey="Madsen, J R" uniqKey="Madsen J">J. R. Madsen</name>
</author>
<author>
<name sortKey="Eskandar, E N" uniqKey="Eskandar E">E. N. Eskandar</name>
</author>
<author>
<name sortKey="Cash, S S" uniqKey="Cash S">S. S. Cash</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Edmister, W B" uniqKey="Edmister W">W. B. Edmister</name>
</author>
<author>
<name sortKey="Talavage, T M" uniqKey="Talavage T">T. M. Talavage</name>
</author>
<author>
<name sortKey="Ledden, P J" uniqKey="Ledden P">P. J. Ledden</name>
</author>
<author>
<name sortKey="Weisskoff, R M" uniqKey="Weisskoff R">R. M. Weisskoff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Elhilali, M" uniqKey="Elhilali M">M. Elhilali</name>
</author>
<author>
<name sortKey="Shamma, S A" uniqKey="Shamma S">S. A. Shamma</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Engel, L R" uniqKey="Engel L">L. R. Engel</name>
</author>
<author>
<name sortKey="Frum, C" uniqKey="Frum C">C. Frum</name>
</author>
<author>
<name sortKey="Puce, A" uniqKey="Puce A">A. Puce</name>
</author>
<author>
<name sortKey="Walker, N A" uniqKey="Walker N">N. A. Walker</name>
</author>
<author>
<name sortKey="Lewis, J W" uniqKey="Lewis J">J. W. Lewis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Epstein, R" uniqKey="Epstein R">R. Epstein</name>
</author>
<author>
<name sortKey="Kanwisher, N" uniqKey="Kanwisher N">N. Kanwisher</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Epstein, R A" uniqKey="Epstein R">R. A. Epstein</name>
</author>
<author>
<name sortKey="Higgins, J S" uniqKey="Higgins J">J. S. Higgins</name>
</author>
<author>
<name sortKey="Jablonski, K" uniqKey="Jablonski K">K. Jablonski</name>
</author>
<author>
<name sortKey="Feiler, A M" uniqKey="Feiler A">A. M. Feiler</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Epstein, R A" uniqKey="Epstein R">R. A. Epstein</name>
</author>
<author>
<name sortKey="Morgan, L K" uniqKey="Morgan L">L. K. Morgan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Felleman, D J" uniqKey="Felleman D">D. J. Felleman</name>
</author>
<author>
<name sortKey="Van Essen, D C" uniqKey="Van Essen D">D. C. van Essen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fransson, P" uniqKey="Fransson P">P. Fransson</name>
</author>
<author>
<name sortKey="Marrelec, G" uniqKey="Marrelec G">G. Marrelec</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Frith, C D" uniqKey="Frith C">C. D. Frith</name>
</author>
<author>
<name sortKey="Frith, U" uniqKey="Frith U">U. Frith</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fritz, J B" uniqKey="Fritz J">J. B. Fritz</name>
</author>
<author>
<name sortKey="Elhilali, M" uniqKey="Elhilali M">M. Elhilali</name>
</author>
<author>
<name sortKey="Shamma, S A" uniqKey="Shamma S">S. A. Shamma</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fritz, J B" uniqKey="Fritz J">J. B. Fritz</name>
</author>
<author>
<name sortKey="Elhilali, M" uniqKey="Elhilali M">M. Elhilali</name>
</author>
<author>
<name sortKey="David, S V" uniqKey="David S">S. V. David</name>
</author>
<author>
<name sortKey="Shamma, S A" uniqKey="Shamma S">S. A. Shamma</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gazzola, V" uniqKey="Gazzola V">V. Gazzola</name>
</author>
<author>
<name sortKey="Aziz Zadeh, L" uniqKey="Aziz Zadeh L">L. Aziz-Zadeh</name>
</author>
<author>
<name sortKey="Keysers, C" uniqKey="Keysers C">C. Keysers</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Glover, G H" uniqKey="Glover G">G. H. Glover</name>
</author>
<author>
<name sortKey="Law, C S" uniqKey="Law C">C. S. Law</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Goll, J C" uniqKey="Goll J">J. C. Goll</name>
</author>
<author>
<name sortKey="Crutch, S J" uniqKey="Crutch S">S. J. Crutch</name>
</author>
<author>
<name sortKey="Warren, J D" uniqKey="Warren J">J. D. Warren</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Greicius, M D" uniqKey="Greicius M">M. D. Greicius</name>
</author>
<author>
<name sortKey="Krasnow, B" uniqKey="Krasnow B">B. Krasnow</name>
</author>
<author>
<name sortKey="Reiss, A L" uniqKey="Reiss A">A. L. Reiss</name>
</author>
<author>
<name sortKey="Menon, V" uniqKey="Menon V">V. Menon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Griffiths, T D" uniqKey="Griffiths T">T. D. Griffiths</name>
</author>
<author>
<name sortKey="Bench, C J" uniqKey="Bench C">C. J. Bench</name>
</author>
<author>
<name sortKey="Frackowiak, R S J" uniqKey="Frackowiak R">R. S. J. Frackowiak</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Griffiths, T D" uniqKey="Griffiths T">T. D. Griffiths</name>
</author>
<author>
<name sortKey="Kumar, S" uniqKey="Kumar S">S. Kumar</name>
</author>
<author>
<name sortKey="Warren, J D" uniqKey="Warren J">J. D. Warren</name>
</author>
<author>
<name sortKey="Stewart, L" uniqKey="Stewart L">L. Stewart</name>
</author>
<author>
<name sortKey="Stephan, K E" uniqKey="Stephan K">K. E. Stephan</name>
</author>
<author>
<name sortKey="Friston, K J" uniqKey="Friston K">K. J. Friston</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Griffiths, T D" uniqKey="Griffiths T">T. D. Griffiths</name>
</author>
<author>
<name sortKey="Warren, J D" uniqKey="Warren J">J. D. Warren</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Griffiths, T D" uniqKey="Griffiths T">T. D. Griffiths</name>
</author>
<author>
<name sortKey="Warren, J D" uniqKey="Warren J">J. D. Warren</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Grill Spector, K" uniqKey="Grill Spector K">K. Grill-Spector</name>
</author>
<author>
<name sortKey="Kushnir, T" uniqKey="Kushnir T">T. Kushnir</name>
</author>
<author>
<name sortKey="Edelman, S" uniqKey="Edelman S">S. Edelman</name>
</author>
<author>
<name sortKey="Avidan, G" uniqKey="Avidan G">G. Avidan</name>
</author>
<author>
<name sortKey="Itzchak, Y" uniqKey="Itzchak Y">Y. Itzchak</name>
</author>
<author>
<name sortKey="Malach, R" uniqKey="Malach R">R. Malach</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Grill Spector, K" uniqKey="Grill Spector K">K. Grill-Spector</name>
</author>
<author>
<name sortKey="Kushnir, T" uniqKey="Kushnir T">T. Kushnir</name>
</author>
<author>
<name sortKey="Hendler, T" uniqKey="Hendler T">T. Hendler</name>
</author>
<author>
<name sortKey="Edelman, S" uniqKey="Edelman S">S. Edelman</name>
</author>
<author>
<name sortKey="Itzchak, Y" uniqKey="Itzchak Y">Y. Itzchak</name>
</author>
<author>
<name sortKey="Malach, R" uniqKey="Malach R">R. Malach</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gron, G" uniqKey="Gron G">G. Gron</name>
</author>
<author>
<name sortKey="Wunderlich, A P" uniqKey="Wunderlich A">A. P. Wunderlich</name>
</author>
<author>
<name sortKey="Spitzer, M" uniqKey="Spitzer M">M. Spitzer</name>
</author>
<author>
<name sortKey="Tomczak, R" uniqKey="Tomczak R">R. Tomczak</name>
</author>
<author>
<name sortKey="Riepe, M W" uniqKey="Riepe M">M. W. Riepe</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gygi, B" uniqKey="Gygi B">B. Gygi</name>
</author>
<author>
<name sortKey="Kidd, G R" uniqKey="Kidd G">G. R. Kidd</name>
</author>
<author>
<name sortKey="Watson, C S" uniqKey="Watson C">C. S. Watson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hall, D A" uniqKey="Hall D">D. A. Hall</name>
</author>
<author>
<name sortKey="Haggard, M P" uniqKey="Haggard M">M. P. Haggard</name>
</author>
<author>
<name sortKey="Akeroyd, M A" uniqKey="Akeroyd M">M. A. Akeroyd</name>
</author>
<author>
<name sortKey="Palmer, A R" uniqKey="Palmer A">A. R. Palmer</name>
</author>
<author>
<name sortKey="Summerfield, A Q" uniqKey="Summerfield A">A. Q. Summerfield</name>
</author>
<author>
<name sortKey="Elliott, M R" uniqKey="Elliott M">M. R. Elliott</name>
</author>
<author>
<name sortKey="Gurney, E M" uniqKey="Gurney E">E. M. Gurney</name>
</author>
<author>
<name sortKey="Bowtell, R W" uniqKey="Bowtell R">R. W. Bowtell</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hassabis, D" uniqKey="Hassabis D">D. Hassabis</name>
</author>
<author>
<name sortKey="Kumaran, D" uniqKey="Kumaran D">D. Kumaran</name>
</author>
<author>
<name sortKey="Maguire, E A" uniqKey="Maguire E">E. A. Maguire</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hasson, U" uniqKey="Hasson U">U. Hasson</name>
</author>
<author>
<name sortKey="Harel, M" uniqKey="Harel M">M. Harel</name>
</author>
<author>
<name sortKey="Levy, I" uniqKey="Levy I">I. Levy</name>
</author>
<author>
<name sortKey="Malach, R" uniqKey="Malach R">R. Malach</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Howard, M A" uniqKey="Howard M">M. A. Howard</name>
</author>
<author>
<name sortKey="Volkov, I O" uniqKey="Volkov I">I. O. Volkov</name>
</author>
<author>
<name sortKey="Mirsky, R" uniqKey="Mirsky R">R. Mirsky</name>
</author>
<author>
<name sortKey="Garell, P C" uniqKey="Garell P">P. C. Garell</name>
</author>
<author>
<name sortKey="Noh, M D" uniqKey="Noh M">M. D. Noh</name>
</author>
<author>
<name sortKey="Granner, M" uniqKey="Granner M">M. Granner</name>
</author>
<author>
<name sortKey="Damasio, H" uniqKey="Damasio H">H. Damasio</name>
</author>
<author>
<name sortKey="Steinschneider, M" uniqKey="Steinschneider M">M. Steinschneider</name>
</author>
<author>
<name sortKey="Reale, R A" uniqKey="Reale R">R. A. Reale</name>
</author>
<author>
<name sortKey="Hind, J E" uniqKey="Hind J">J. E. Hind</name>
</author>
<author>
<name sortKey="Brugge, J F" uniqKey="Brugge J">J. F. Brugge</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Huddleston, W E" uniqKey="Huddleston W">W. E. Huddleston</name>
</author>
<author>
<name sortKey="Lewis, J W" uniqKey="Lewis J">J. W. Lewis</name>
</author>
<author>
<name sortKey="Phinney, R E Jr" uniqKey="Phinney R">R. E. Jr. Phinney</name>
</author>
<author>
<name sortKey="De Yoe, E A" uniqKey="De Yoe E">E. A. de Yoe</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hughes, H C" uniqKey="Hughes H">H. C. Hughes</name>
</author>
<author>
<name sortKey="Darcey, T M" uniqKey="Darcey T">T. M. Darcey</name>
</author>
<author>
<name sortKey="Barkan, H I" uniqKey="Barkan H">H. I. Barkan</name>
</author>
<author>
<name sortKey="Williamson, P D" uniqKey="Williamson P">P. D. Williamson</name>
</author>
<author>
<name sortKey="Roberts, D W" uniqKey="Roberts D">D. W. Roberts</name>
</author>
<author>
<name sortKey="Aslin, C H" uniqKey="Aslin C">C. H. Aslin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Husain, F T" uniqKey="Husain F">F. T. Husain</name>
</author>
<author>
<name sortKey="Tagamets, M A" uniqKey="Tagamets M">M. A. Tagamets</name>
</author>
<author>
<name sortKey="Fromm, S J" uniqKey="Fromm S">S. J. Fromm</name>
</author>
<author>
<name sortKey="Braun, A R" uniqKey="Braun A">A. R. Braun</name>
</author>
<author>
<name sortKey="Horwitz, B" uniqKey="Horwitz B">B. Horwitz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Iacoboni, M" uniqKey="Iacoboni M">M. Iacoboni</name>
</author>
<author>
<name sortKey="Molnar Szakacs, I" uniqKey="Molnar Szakacs I">I. Molnar-Szakacs</name>
</author>
<author>
<name sortKey="Gallese, V" uniqKey="Gallese V">V. Gallese</name>
</author>
<author>
<name sortKey="Buccino, G" uniqKey="Buccino G">G. Buccino</name>
</author>
<author>
<name sortKey="Mazziotta, J C" uniqKey="Mazziotta J">J. C. Mazziotta</name>
</author>
<author>
<name sortKey="Rizzolatti, G" uniqKey="Rizzolatti G">G. Rizzolatti</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Johansson, G" uniqKey="Johansson G">G. Johansson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Julesz, B" uniqKey="Julesz B">B. Julesz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kaas, J H" uniqKey="Kaas J">J. H. Kaas</name>
</author>
<author>
<name sortKey="Hackett, T A" uniqKey="Hackett T">T. A. Hackett</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kaas, J H" uniqKey="Kaas J">J. H. Kaas</name>
</author>
<author>
<name sortKey="Hackett, T A" uniqKey="Hackett T">T. A. Hackett</name>
</author>
<author>
<name sortKey="Tramo, M J" uniqKey="Tramo M">M. J. Tramo</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kanwisher, N" uniqKey="Kanwisher N">N. Kanwisher</name>
</author>
<author>
<name sortKey="Chun, M M" uniqKey="Chun M">M. M. Chun</name>
</author>
<author>
<name sortKey="Mcdermott, J" uniqKey="Mcdermott J">J. McDermott</name>
</author>
<author>
<name sortKey="Ledden, P J" uniqKey="Ledden P">P. J. Ledden</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kanwisher, N" uniqKey="Kanwisher N">N. Kanwisher</name>
</author>
<author>
<name sortKey="Mcdermott, J" uniqKey="Mcdermott J">J. McDermott</name>
</author>
<author>
<name sortKey="Chun, M M" uniqKey="Chun M">M. M. Chun</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="King, A J" uniqKey="King A">A. J. King</name>
</author>
<author>
<name sortKey="Nelken, I" uniqKey="Nelken I">I. Nelken</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kohler, E" uniqKey="Kohler E">E. Kohler</name>
</author>
<author>
<name sortKey="Keysers, C" uniqKey="Keysers C">C. Keysers</name>
</author>
<author>
<name sortKey="Umilta, A" uniqKey="Umilta A">A. Umilta</name>
</author>
<author>
<name sortKey="Fogassi, L" uniqKey="Fogassi L">L. Fogassi</name>
</author>
<author>
<name sortKey="Gallese, V" uniqKey="Gallese V">V. Gallese</name>
</author>
<author>
<name sortKey="Rizzolatti, G" uniqKey="Rizzolatti G">G. Rizzolatti</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kourtzi, Z" uniqKey="Kourtzi Z">Z. Kourtzi</name>
</author>
<author>
<name sortKey="Kanwisher, N" uniqKey="Kanwisher N">N. Kanwisher</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kumar, S" uniqKey="Kumar S">S. Kumar</name>
</author>
<author>
<name sortKey="Stephan, K E" uniqKey="Stephan K">K. E. Stephan</name>
</author>
<author>
<name sortKey="Warren, J D" uniqKey="Warren J">J. D. Warren</name>
</author>
<author>
<name sortKey="Friston, K J" uniqKey="Friston K">K. J. Friston</name>
</author>
<author>
<name sortKey="Griffiths, T D" uniqKey="Griffiths T">T. D. Griffiths</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Laaksonen, J T" uniqKey="Laaksonen J">J. T. Laaksonen</name>
</author>
<author>
<name sortKey="Markus Koskela, J" uniqKey="Markus Koskela J">J. Markus Koskela</name>
</author>
<author>
<name sortKey="Oja, E" uniqKey="Oja E">E. Oja</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Leaver, A M" uniqKey="Leaver A">A. M. Leaver</name>
</author>
<author>
<name sortKey="Rauschecker, J P" uniqKey="Rauschecker J">J. P. Rauschecker</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Leech, R" uniqKey="Leech R">R. Leech</name>
</author>
<author>
<name sortKey="Holt, L L" uniqKey="Holt L">L. L. Holt</name>
</author>
<author>
<name sortKey="Devlin, J T" uniqKey="Devlin J">J. T. Devlin</name>
</author>
<author>
<name sortKey="Dick, F" uniqKey="Dick F">F. Dick</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lewis, J W" uniqKey="Lewis J">J. W. Lewis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lewis, J W" uniqKey="Lewis J">J. W. Lewis</name>
</author>
<author>
<name sortKey="Beauchamp, M S" uniqKey="Beauchamp M">M. S. Beauchamp</name>
</author>
<author>
<name sortKey="De Yoe, E A" uniqKey="De Yoe E">E. A. de Yoe</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lewis, J W" uniqKey="Lewis J">J. W. Lewis</name>
</author>
<author>
<name sortKey="Phinney, R E" uniqKey="Phinney R">R. E. Phinney</name>
</author>
<author>
<name sortKey="Brefczynski Lewis, J A" uniqKey="Brefczynski Lewis J">J. A. Brefczynski-Lewis</name>
</author>
<author>
<name sortKey="De Yoe, E A" uniqKey="De Yoe E">E. A. de Yoe</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lewis, J W" uniqKey="Lewis J">J. W. Lewis</name>
</author>
<author>
<name sortKey="Talkington, W J" uniqKey="Talkington W">W. J. Talkington</name>
</author>
<author>
<name sortKey="Puce, A" uniqKey="Puce A">A. Puce</name>
</author>
<author>
<name sortKey="Engel, L R" uniqKey="Engel L">L. R. Engel</name>
</author>
<author>
<name sortKey="Frum, C" uniqKey="Frum C">C. Frum</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lewis, J W" uniqKey="Lewis J">J. W. Lewis</name>
</author>
<author>
<name sortKey="Talkington, W J" uniqKey="Talkington W">W. J. Talkington</name>
</author>
<author>
<name sortKey="Walker, N A" uniqKey="Walker N">N. A. Walker</name>
</author>
<author>
<name sortKey="Spirou, G A" uniqKey="Spirou G">G. A. Spirou</name>
</author>
<author>
<name sortKey="Jajosky, A" uniqKey="Jajosky A">A. Jajosky</name>
</author>
<author>
<name sortKey="Frum, C" uniqKey="Frum C">C. Frum</name>
</author>
<author>
<name sortKey="Brefczynski Lewis, J A" uniqKey="Brefczynski Lewis J">J. A. Brefczynski-Lewis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lewis, J W" uniqKey="Lewis J">J. W. Lewis</name>
</author>
<author>
<name sortKey="Wightman, F L" uniqKey="Wightman F">F. L. Wightman</name>
</author>
<author>
<name sortKey="Brefczynski, J A" uniqKey="Brefczynski J">J. A. Brefczynski</name>
</author>
<author>
<name sortKey="Phinney, R E" uniqKey="Phinney R">R. E. Phinney</name>
</author>
<author>
<name sortKey="Binder, J R" uniqKey="Binder J">J. R. Binder</name>
</author>
<author>
<name sortKey="De Yoe, E A" uniqKey="De Yoe E">E. A. de Yoe</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Macevoy, S P" uniqKey="Macevoy S">S. P. Macevoy</name>
</author>
<author>
<name sortKey="Epstein, R A" uniqKey="Epstein R">R. A. Epstein</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Maeder, P P" uniqKey="Maeder P">P. P. Maeder</name>
</author>
<author>
<name sortKey="Meuli, R A" uniqKey="Meuli R">R. A. Meuli</name>
</author>
<author>
<name sortKey="Adriani, M" uniqKey="Adriani M">M. Adriani</name>
</author>
<author>
<name sortKey="Bellmann, A" uniqKey="Bellmann A">A. Bellmann</name>
</author>
<author>
<name sortKey="Fornari, E" uniqKey="Fornari E">E. Fornari</name>
</author>
<author>
<name sortKey="Thiran, J P" uniqKey="Thiran J">J. P. Thiran</name>
</author>
<author>
<name sortKey="Pittet, A" uniqKey="Pittet A">A. Pittet</name>
</author>
<author>
<name sortKey="Clarke, S" uniqKey="Clarke S">S. Clarke</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Makela, J P" uniqKey="Makela J">J. P. Mäkelä</name>
</author>
<author>
<name sortKey="Mcevoy, L" uniqKey="Mcevoy L">L. McEvoy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Malach, R" uniqKey="Malach R">R. Malach</name>
</author>
<author>
<name sortKey="Reppas, J B" uniqKey="Reppas J">J. B. Reppas</name>
</author>
<author>
<name sortKey="Benson, R R" uniqKey="Benson R">R. R. Benson</name>
</author>
<author>
<name sortKey="Kwong, K K" uniqKey="Kwong K">K. K. Kwong</name>
</author>
<author>
<name sortKey="Jiang, H" uniqKey="Jiang H">H. Jiang</name>
</author>
<author>
<name sortKey="Kennedy, W A" uniqKey="Kennedy W">W. A. Kennedy</name>
</author>
<author>
<name sortKey="Ledden, P J" uniqKey="Ledden P">P. J. Ledden</name>
</author>
<author>
<name sortKey="Brady, T J" uniqKey="Brady T">T. J. Brady</name>
</author>
<author>
<name sortKey="Rosen, B R" uniqKey="Rosen B">B. R. Rosen</name>
</author>
<author>
<name sortKey="Tootell, R B H" uniqKey="Tootell R">R. B. H. Tootell</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Martin, A" uniqKey="Martin A">A. Martin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mccarthy, G" uniqKey="Mccarthy G">G. McCarthy</name>
</author>
<author>
<name sortKey="Puce, A" uniqKey="Puce A">A. Puce</name>
</author>
<author>
<name sortKey="Gore, J C" uniqKey="Gore J">J. C. Gore</name>
</author>
<author>
<name sortKey="Allison, T" uniqKey="Allison T">T. Allison</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mcdermott, J H" uniqKey="Mcdermott J">J. H. McDermott</name>
</author>
<author>
<name sortKey="Oxenham, A J" uniqKey="Oxenham A">A. J. Oxenham</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mcdermott, J H" uniqKey="Mcdermott J">J. H. McDermott</name>
</author>
<author>
<name sortKey="Simoncelli, E P" uniqKey="Simoncelli E">E. P. Simoncelli</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Medvedev, A V" uniqKey="Medvedev A">A. V. Medvedev</name>
</author>
<author>
<name sortKey="Chiao, F" uniqKey="Chiao F">F. Chiao</name>
</author>
<author>
<name sortKey="Kanwal, J S" uniqKey="Kanwal J">J. S. Kanwal</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Minda, J P" uniqKey="Minda J">J. P. Minda</name>
</author>
<author>
<name sortKey="Ross, B H" uniqKey="Ross B">B. H. Ross</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mormann, F" uniqKey="Mormann F">F. Mormann</name>
</author>
<author>
<name sortKey="Dubois, J" uniqKey="Dubois J">J. Dubois</name>
</author>
<author>
<name sortKey="Kornblith, S" uniqKey="Kornblith S">S. Kornblith</name>
</author>
<author>
<name sortKey="Milosavljevic, M" uniqKey="Milosavljevic M">M. Milosavljevic</name>
</author>
<author>
<name sortKey="Cerf, M" uniqKey="Cerf M">M. Cerf</name>
</author>
<author>
<name sortKey="Ison, M" uniqKey="Ison M">M. Ison</name>
</author>
<author>
<name sortKey="Tsuchiya, N" uniqKey="Tsuchiya N">N. Tsuchiya</name>
</author>
<author>
<name sortKey="Kraskov, A" uniqKey="Kraskov A">A. Kraskov</name>
</author>
<author>
<name sortKey="Quiroga, R Q" uniqKey="Quiroga R">R. Q. Quiroga</name>
</author>
<author>
<name sortKey="Adolphs, R" uniqKey="Adolphs R">R. Adolphs</name>
</author>
<author>
<name sortKey="Fried, I" uniqKey="Fried I">I. Fried</name>
</author>
<author>
<name sortKey="Koch, C" uniqKey="Koch C">C. Koch</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Morosan, P" uniqKey="Morosan P">P. Morosan</name>
</author>
<author>
<name sortKey="Rademacher, J" uniqKey="Rademacher J">J. Rademacher</name>
</author>
<author>
<name sortKey="Schleicher, A" uniqKey="Schleicher A">A. Schleicher</name>
</author>
<author>
<name sortKey="Amunts, K" uniqKey="Amunts K">K. Amunts</name>
</author>
<author>
<name sortKey="Schormann, T" uniqKey="Schormann T">T. Schormann</name>
</author>
<author>
<name sortKey="Zilles, K" uniqKey="Zilles K">K. Zilles</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Murray, S O" uniqKey="Murray S">S. O. Murray</name>
</author>
<author>
<name sortKey="Newman, A J" uniqKey="Newman A">A. J. Newman</name>
</author>
<author>
<name sortKey="Roder, B" uniqKey="Roder B">B. Roder</name>
</author>
<author>
<name sortKey="Mitchell, T V" uniqKey="Mitchell T">T. V. Mitchell</name>
</author>
<author>
<name sortKey="Takahashi, T" uniqKey="Takahashi T">T. Takahashi</name>
</author>
<author>
<name sortKey="Neville, H J" uniqKey="Neville H">H. J. Neville</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nelken, I" uniqKey="Nelken I">I. Nelken</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nunnally, J C" uniqKey="Nunnally J">J. C. Nunnally</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Obleser, J" uniqKey="Obleser J">J. Obleser</name>
</author>
<author>
<name sortKey="Eisner, F" uniqKey="Eisner F">F. Eisner</name>
</author>
<author>
<name sortKey="Kotz, S A" uniqKey="Kotz S">S. A. Kotz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Obleser, J" uniqKey="Obleser J">J. Obleser</name>
</author>
<author>
<name sortKey="Zimmermann, J" uniqKey="Zimmermann J">J. Zimmermann</name>
</author>
<author>
<name sortKey="Van Meter, J" uniqKey="Van Meter J">J. van Meter</name>
</author>
<author>
<name sortKey="Rauschecker, J P" uniqKey="Rauschecker J">J. P. Rauschecker</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Overath, T" uniqKey="Overath T">T. Overath</name>
</author>
<author>
<name sortKey="Kumar, S" uniqKey="Kumar S">S. Kumar</name>
</author>
<author>
<name sortKey="Stewart, L" uniqKey="Stewart L">L. Stewart</name>
</author>
<author>
<name sortKey="Von Kriegstein, K" uniqKey="Von Kriegstein K">K. von Kriegstein</name>
</author>
<author>
<name sortKey="Cusack, R" uniqKey="Cusack R">R. Cusack</name>
</author>
<author>
<name sortKey="Rees, A" uniqKey="Rees A">A. Rees</name>
</author>
<author>
<name sortKey="Griffiths, T D" uniqKey="Griffiths T">T. D. Griffiths</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pelphrey, K A" uniqKey="Pelphrey K">K. A. Pelphrey</name>
</author>
<author>
<name sortKey="Morris, J P" uniqKey="Morris J">J. P. Morris</name>
</author>
<author>
<name sortKey="Mccarthy, G" uniqKey="Mccarthy G">G. McCarthy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rademacher, J" uniqKey="Rademacher J">J. Rademacher</name>
</author>
<author>
<name sortKey="Morosan, P" uniqKey="Morosan P">P. Morosan</name>
</author>
<author>
<name sortKey="Schormann, T" uniqKey="Schormann T">T. Schormann</name>
</author>
<author>
<name sortKey="Schleicher, A" uniqKey="Schleicher A">A. Schleicher</name>
</author>
<author>
<name sortKey="Werner, C" uniqKey="Werner C">C. Werner</name>
</author>
<author>
<name sortKey="Freund, H J" uniqKey="Freund H">H. J. Freund</name>
</author>
<author>
<name sortKey="Zilles, K" uniqKey="Zilles K">K. Zilles</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Raichle, M E" uniqKey="Raichle M">M. E. Raichle</name>
</author>
<author>
<name sortKey="Macleod, A M" uniqKey="Macleod A">A. M. MacLeod</name>
</author>
<author>
<name sortKey="Snyder, A Z" uniqKey="Snyder A">A. Z. Snyder</name>
</author>
<author>
<name sortKey="Powers, W J" uniqKey="Powers W">W. J. Powers</name>
</author>
<author>
<name sortKey="Gusnard, D A" uniqKey="Gusnard D">D. A. Gusnard</name>
</author>
<author>
<name sortKey="Shulman, G L" uniqKey="Shulman G">G. L. Shulman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rauschecker, J P" uniqKey="Rauschecker J">J. P. Rauschecker</name>
</author>
<author>
<name sortKey="Scott, S K" uniqKey="Scott S">S. K. Scott</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rauschecker, J P" uniqKey="Rauschecker J">J. P. Rauschecker</name>
</author>
<author>
<name sortKey="Tian, B" uniqKey="Tian B">B. Tian</name>
</author>
<author>
<name sortKey="Hauser, M" uniqKey="Hauser M">M. Hauser</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Reddy, R K" uniqKey="Reddy R">R. K. Reddy</name>
</author>
<author>
<name sortKey="Ramachandra, V" uniqKey="Ramachandra V">V. Ramachandra</name>
</author>
<author>
<name sortKey="Kumar, N" uniqKey="Kumar N">N. Kumar</name>
</author>
<author>
<name sortKey="Singh, N C" uniqKey="Singh N">N. C. Singh</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rizzolatti, G" uniqKey="Rizzolatti G">G. Rizzolatti</name>
</author>
<author>
<name sortKey="Craighero, L" uniqKey="Craighero L">L. Craighero</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rizzolatti, G" uniqKey="Rizzolatti G">G. Rizzolatti</name>
</author>
<author>
<name sortKey="Luppino, G" uniqKey="Luppino G">G. Luppino</name>
</author>
<author>
<name sortKey="Matelli, M" uniqKey="Matelli M">M. Matelli</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rosch, E H" uniqKey="Rosch E">E. H. Rosch</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rutishauser, U" uniqKey="Rutishauser U">U. Rutishauser</name>
</author>
<author>
<name sortKey="Tudusciuc, O" uniqKey="Tudusciuc O">O. Tudusciuc</name>
</author>
<author>
<name sortKey="Neumann, D" uniqKey="Neumann D">D. Neumann</name>
</author>
<author>
<name sortKey="Mamelak, A N" uniqKey="Mamelak A">A. N. Mamelak</name>
</author>
<author>
<name sortKey="Heller, A C" uniqKey="Heller A">A. C. Heller</name>
</author>
<author>
<name sortKey="Ross, I B" uniqKey="Ross I">I. B. Ross</name>
</author>
<author>
<name sortKey="Philpott, L" uniqKey="Philpott L">L. Philpott</name>
</author>
<author>
<name sortKey="Sutherling, W W" uniqKey="Sutherling W">W. W. Sutherling</name>
</author>
<author>
<name sortKey="Adolphs, R" uniqKey="Adolphs R">R. Adolphs</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sathian, K" uniqKey="Sathian K">K. Sathian</name>
</author>
<author>
<name sortKey="Lacey, S" uniqKey="Lacey S">S. Lacey</name>
</author>
<author>
<name sortKey="Stilla, R" uniqKey="Stilla R">R. Stilla</name>
</author>
<author>
<name sortKey="Gibson, G O" uniqKey="Gibson G">G. O. Gibson</name>
</author>
<author>
<name sortKey="Deshpande, G" uniqKey="Deshpande G">G. Deshpande</name>
</author>
<author>
<name sortKey="Hu, X" uniqKey="Hu X">X. Hu</name>
</author>
<author>
<name sortKey="Laconte, S" uniqKey="Laconte S">S. Laconte</name>
</author>
<author>
<name sortKey="Glielmi, C" uniqKey="Glielmi C">C. Glielmi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Talairach, J" uniqKey="Talairach J">J. Talairach</name>
</author>
<author>
<name sortKey="Tournoux, P" uniqKey="Tournoux P">P. Tournoux</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Talkington, W J" uniqKey="Talkington W">W. J. Talkington</name>
</author>
<author>
<name sortKey="Rapuano, K M" uniqKey="Rapuano K">K. M. Rapuano</name>
</author>
<author>
<name sortKey="Hitt, L" uniqKey="Hitt L">L. Hitt</name>
</author>
<author>
<name sortKey="Frum, C A" uniqKey="Frum C">C. A. Frum</name>
</author>
<author>
<name sortKey="Lewis, J W" uniqKey="Lewis J">J. W. Lewis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tchernichovski, O" uniqKey="Tchernichovski O">O. Tchernichovski</name>
</author>
<author>
<name sortKey="Mitra, P P" uniqKey="Mitra P">P. P. Mitra</name>
</author>
<author>
<name sortKey="Lints, T" uniqKey="Lints T">T. Lints</name>
</author>
<author>
<name sortKey="Nottebohm, F" uniqKey="Nottebohm F">F. Nottebohm</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Teki, S" uniqKey="Teki S">S. Teki</name>
</author>
<author>
<name sortKey="Chait, M" uniqKey="Chait M">M. Chait</name>
</author>
<author>
<name sortKey="Kumar, S" uniqKey="Kumar S">S. Kumar</name>
</author>
<author>
<name sortKey="Von Kriegstein, K" uniqKey="Von Kriegstein K">K. von Kriegstein</name>
</author>
<author>
<name sortKey="Griffiths, T D" uniqKey="Griffiths T">T. D. Griffiths</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tootell, R B" uniqKey="Tootell R">R. B. Tootell</name>
</author>
<author>
<name sortKey="Mendola, J D" uniqKey="Mendola J">J. D. Mendola</name>
</author>
<author>
<name sortKey="Hadjikhani, N K" uniqKey="Hadjikhani N">N. K. Hadjikhani</name>
</author>
<author>
<name sortKey="Liu, A K" uniqKey="Liu A">A. K. Liu</name>
</author>
<author>
<name sortKey="Dale, A M" uniqKey="Dale A">A. M. Dale</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Essen, D C" uniqKey="Van Essen D">D. C. van Essen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Essen, D C" uniqKey="Van Essen D">D. C. van Essen</name>
</author>
<author>
<name sortKey="Drury, H A" uniqKey="Drury H">H. A. Drury</name>
</author>
<author>
<name sortKey="Dickson, J" uniqKey="Dickson J">J. Dickson</name>
</author>
<author>
<name sortKey="Harwell, J" uniqKey="Harwell J">J. Harwell</name>
</author>
<author>
<name sortKey="Hanlon, D" uniqKey="Hanlon D">D. Hanlon</name>
</author>
<author>
<name sortKey="Anderson, C H" uniqKey="Anderson C">C. H. Anderson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vogt, B A" uniqKey="Vogt B">B. A. Vogt</name>
</author>
<author>
<name sortKey="Finch, D M" uniqKey="Finch D">D. M. Finch</name>
</author>
<author>
<name sortKey="Olson, C R" uniqKey="Olson C">C. R. Olson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Voss, R F" uniqKey="Voss R">R. F. Voss</name>
</author>
<author>
<name sortKey="Clarke, J" uniqKey="Clarke J">J. Clarke</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Warren, J" uniqKey="Warren J">J. Warren</name>
</author>
<author>
<name sortKey="Zielinski, B" uniqKey="Zielinski B">B. Zielinski</name>
</author>
<author>
<name sortKey="Green, G" uniqKey="Green G">G. Green</name>
</author>
<author>
<name sortKey="Rauschecker, J" uniqKey="Rauschecker J">J. Rauschecker</name>
</author>
<author>
<name sortKey="Griffiths, T" uniqKey="Griffiths T">T. Griffiths</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Warren, R M" uniqKey="Warren R">R. M. Warren</name>
</author>
<author>
<name sortKey="Obusek, C J" uniqKey="Obusek C">C. J. Obusek</name>
</author>
<author>
<name sortKey="Ackroff, J M" uniqKey="Ackroff J">J. M. Ackroff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Woods, D L" uniqKey="Woods D">D. L. Woods</name>
</author>
<author>
<name sortKey="Herron, T J" uniqKey="Herron T">T. J. Herron</name>
</author>
<author>
<name sortKey="Cate, A D" uniqKey="Cate A">A. D. Cate</name>
</author>
<author>
<name sortKey="Yund, E W" uniqKey="Yund E">E. W. Yund</name>
</author>
<author>
<name sortKey="Stecker, G C" uniqKey="Stecker G">G. C. Stecker</name>
</author>
<author>
<name sortKey="Rinne, T" uniqKey="Rinne T">T. Rinne</name>
</author>
<author>
<name sortKey="Kang, X" uniqKey="Kang X">X. Kang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zatorre, R J" uniqKey="Zatorre R">R. J. Zatorre</name>
</author>
<author>
<name sortKey="Bouffard, M" uniqKey="Bouffard M">M. Bouffard</name>
</author>
<author>
<name sortKey="Belin, P" uniqKey="Belin P">P. Belin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zatorre, R J" uniqKey="Zatorre R">R. J. Zatorre</name>
</author>
<author>
<name sortKey="Evans, A C" uniqKey="Evans A">A. C. Evans</name>
</author>
<author>
<name sortKey="Meyer, E" uniqKey="Meyer E">E. Meyer</name>
</author>
<author>
<name sortKey="Gjedde, A" uniqKey="Gjedde A">A. Gjedde</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Front Syst Neurosci</journal-id>
<journal-id journal-id-type="iso-abbrev">Front Syst Neurosci</journal-id>
<journal-id journal-id-type="publisher-id">Front. Syst. Neurosci.</journal-id>
<journal-title-group>
<journal-title>Frontiers in Systems Neuroscience</journal-title>
</journal-title-group>
<issn pub-type="epub">1662-5137</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">22582038</article-id>
<article-id pub-id-type="pmc">3348722</article-id>
<article-id pub-id-type="doi">10.3389/fnsys.2012.00027</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Original Research Article</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Auditory object salience: human cortical processing of non-biological action sounds and their acoustic signal attributes</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Lewis</surname>
<given-names>James W.</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<xref ref-type="aff" rid="aff3">
<sup>3</sup>
</xref>
<xref ref-type="author-notes" rid="fn001">
<sup>*</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Talkington</surname>
<given-names>William J.</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<xref ref-type="aff" rid="aff3">
<sup>3</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Tallaksen</surname>
<given-names>Katherine C.</given-names>
</name>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<xref ref-type="aff" rid="aff4">
<sup>4</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Frum</surname>
<given-names>Chris A.</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<xref ref-type="aff" rid="aff3">
<sup>3</sup>
</xref>
</contrib>
</contrib-group>
<aff id="aff1">
<sup>1</sup>
<institution>Center for Neuroscience, West Virginia University, Morgantown</institution>
<country>WV, USA</country>
</aff>
<aff id="aff2">
<sup>2</sup>
<institution>Center for Advanced Imaging, West Virginia University, Morgantown</institution>
<country>WV, USA</country>
</aff>
<aff id="aff3">
<sup>3</sup>
<institution>Department of Physiology and Pharmacology, West Virginia University, Morgantown</institution>
<country>WV, USA</country>
</aff>
<aff id="aff4">
<sup>4</sup>
<institution>Department of Radiology, West Virginia University, Morgantown</institution>
<country>WV, USA</country>
</aff>
<author-notes>
<fn fn-type="edited-by">
<p>Edited by: Raphael Pinaud, Northwestern University, USA</p>
</fn>
<fn fn-type="edited-by">
<p>Reviewed by: Sundeep Teki, University College London, UK; Hirohito M. Kondo, NTT Corporation, Japan</p>
</fn>
<corresp id="fn001">*Correspondence: James W. Lewis, Department of Physiology and Pharmacology, West Virginia University, PO Box 9229, Morgantown, WV 26506, USA. e-mail:
<email xlink:type="simple">jwlewis@hsc.wvu.edu</email>
</corresp>
</author-notes>
<pub-date pub-type="epub">
<day>09</day>
<month>5</month>
<year>2012</year>
</pub-date>
<pub-date pub-type="collection">
<year>2012</year>
</pub-date>
<volume>6</volume>
<elocation-id>27</elocation-id>
<history>
<date date-type="received">
<day>30</day>
<month>9</month>
<year>2011</year>
</date>
<date date-type="accepted">
<day>01</day>
<month>4</month>
<year>2012</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright © 2012 Lewis, Talkington, Tallaksen and Frum.</copyright-statement>
<copyright-year>2012</copyright-year>
<license license-type="open-access" xlink:href="http://www.frontiersin.org/licenseagreement">
<license-p>This is an open-access article distributed under the terms of the
<uri xlink:type="simple" xlink:href="http://creativecommons.org/licenses/by-nc/3.0/">Creative Commons Attribution Non Commercial License</uri>
, which permits non-commercial use, distribution, and reproduction in other forums, provided the original authors and source are credited.</license-p>
</license>
</permissions>
<abstract>
<p>Whether viewed or heard, an object in action can be segmented as a distinct salient event based on a number of different sensory cues. In the visual system, several low-level attributes of an image are processed along parallel hierarchies, involving intermediate stages wherein gross-level object form and/or motion features are extracted prior to stages that show greater specificity for different object categories (e.g., people, buildings, or tools). In the auditory system, though relying on a rather different set of low-level signal attributes, meaningful real-world acoustic events and “auditory objects” can also be readily distinguished from background scenes. However, the nature of the acoustic signal attributes or gross-level perceptual features that may be explicitly processed along intermediate cortical processing stages remains poorly understood. Examining mechanical and environmental action sounds, representing two distinct non-biological categories of action sources, we had participants assess the degree to which each sound was perceived as object-like versus scene-like. We re-analyzed data from two of our earlier functional magnetic resonance imaging (fMRI) task paradigms (Engel et al.,
<xref rid="B25" ref-type="bibr">2009</xref>
) and found that scene-like action sounds preferentially led to activation along several midline cortical structures, but with strong dependence on listening task demands. In contrast, bilateral foci along the superior temporal gyri (STG) showed parametrically increasing activation to action sounds rated as more “object-like,” independent of sound category or task demands. Moreover, these STG regions also showed parametric sensitivity to spectral structure variations (SSVs) of the action sounds—a quantitative measure of change in entropy of the acoustic signals over time—and the right STG additionally showed parametric sensitivity to measures of mean entropy and harmonic content of the environmental sounds. Analogous to the visual system, intermediate stages of the auditory system appear to process or extract a number of quantifiable low-order signal attributes that are characteristic of action events perceived as being object-like, representing stages that may begin to dissociate different perceptual dimensions and categories of every-day, real-world action sounds.</p>
</abstract>
<kwd-group>
<kwd>signal feature extraction</kwd>
<kwd>motion processing</kwd>
<kwd>auditory perception</kwd>
<kwd>functional MRI</kwd>
<kwd>natural sound categorization</kwd>
<kwd>entropy</kwd>
<kwd>spectral structure variation</kwd>
</kwd-group>
<counts>
<fig-count count="4"></fig-count>
<table-count count="1"></table-count>
<equation-count count="0"></equation-count>
<ref-count count="116"></ref-count>
<page-count count="15"></page-count>
<word-count count="12201"></word-count>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="introduction" id="s1">
<title>Introduction</title>
<p>For sensory systems, feature extraction models (Laaksonen et al.,
<xref rid="B64" ref-type="bibr">2004</xref>
) represent potential neuronal mechanisms that may develop to efficiently segment and distinguish objects or events based on salient features and components within a scene. Through experience with visual and acoustic scenes, semantically related object groupings or classes of behaviorally relevant objects and/or events (Rosch,
<xref rid="B99" ref-type="bibr">1973</xref>
; Minda and Ross,
<xref rid="B82" ref-type="bibr">2004</xref>
) may then become differentially mapped and self-organized across cortical network representations. This in part may lead to the development of cortical regions showing preferential or selective activation to the various visual and auditory “object categories” reported to date.</p>
<p>In the visual system, several brain regions are reported to be sensitive or selective for different object categories, including human faces (Allison et al.,
<xref rid="B2" ref-type="bibr">1994</xref>
; Kanwisher et al.,
<xref rid="B59" ref-type="bibr">1997</xref>
; McCarthy et al.,
<xref rid="B78" ref-type="bibr">1997</xref>
), animal faces (Mormann et al.,
<xref rid="B83" ref-type="bibr">2011</xref>
; Rutishauser et al.,
<xref rid="B100" ref-type="bibr">2011</xref>
), scenes or places (Epstein and Kanwisher,
<xref rid="B26" ref-type="bibr">1998</xref>
; Gron et al.,
<xref rid="B44" ref-type="bibr">2000</xref>
), human body parts (Downing et al.,
<xref rid="B22" ref-type="bibr">2001</xref>
), buildings (Hasson et al.,
<xref rid="B48" ref-type="bibr">2003</xref>
), or animals versus tools (Chao and Martin,
<xref rid="B14" ref-type="bibr">2000</xref>
; Beauchamp et al.,
<xref rid="B8" ref-type="bibr">2002</xref>
). In contrast to object processing, other brain regions (e.g., parahippocampal, retrosplenial, and some occipital areas) are more sensitive to processing visual scenes (Epstein and Kanwisher,
<xref rid="B26" ref-type="bibr">1998</xref>
; Epstein et al.,
<xref rid="B27" ref-type="bibr">2007</xref>
; Epstein and Morgan,
<xref rid="B28" ref-type="bibr">2011</xref>
). However, preceding many of these scene- or object-sensitive stages in cortex are earlier stages that incorporate relatively low-level visual features such as motion and form. For instance, the posterior superior temporal sulci (pSTS) are preferentially activated by biological motion (Johansson,
<xref rid="B54" ref-type="bibr">1973</xref>
) versus rigid body motion attributes (Frith and Frith,
<xref rid="B31" ref-type="bibr">1999</xref>
; Lewis et al.,
<xref rid="B68" ref-type="bibr">2000</xref>
; Beauchamp et al.,
<xref rid="B8" ref-type="bibr">2002</xref>
; Pelphrey et al.,
<xref rid="B91" ref-type="bibr">2004</xref>
), which contributes to the segmentation of animate versus inanimate objects. Additionally, portions of the lateral occipital cortices (LOC) are preferentially responsive to object forms as opposed to textures or visual noise patterns, which are otherwise matched for low-level features such as brightness, contrast, and spatial frequencies (Malach et al.,
<xref rid="B76" ref-type="bibr">1995</xref>
; Kanwisher et al.,
<xref rid="B58" ref-type="bibr">1996</xref>
). Portions of the LOC also show relatively invariant responses to object size and/or location in the visual field (Grill-Spector et al.,
<xref rid="B43" ref-type="bibr">1998</xref>
,
<xref rid="B42" ref-type="bibr">1999</xref>
; Tootell et al.,
<xref rid="B105" ref-type="bibr">1998</xref>
; Doniger et al.,
<xref rid="B21" ref-type="bibr">2000</xref>
; Kourtzi and Kanwisher,
<xref rid="B62" ref-type="bibr">2000</xref>
). Hence, the pSTS and LOC regions appear to house hierarchically intermediate processing stages or channels for analyzing gross-level visual objects or object-like features by assimilating inputs from earlier areas that represent a variety of low-level visual attributes. This hierarchical processing may thus contribute to the segmentation of a distinct object, or objects, present within a complex visual scene (Felleman and van Essen,
<xref rid="B29" ref-type="bibr">1991</xref>
; Macevoy and Epstein,
<xref rid="B73" ref-type="bibr">2011</xref>
).</p>
<p>Parallel processing hierarchies are also known to exist in the primate auditory system (Rauschecker et al.,
<xref rid="B95" ref-type="bibr">1995</xref>
; Kaas et al.,
<xref rid="B57" ref-type="bibr">1999</xref>
). Primary auditory cortical regions (PACs) are known to have a critical role in auditory stream segregation and formation, clustering operations, and sound organization based on primitive acoustic features such as bandwidths, spectral shapes, onsets, and harmonic relationships (Medvedev et al.,
<xref rid="B81" ref-type="bibr">2002</xref>
; Nelken,
<xref rid="B86" ref-type="bibr">2004</xref>
; Kumar et al.,
<xref rid="B63" ref-type="bibr">2007</xref>
; Elhilali and Shamma,
<xref rid="B24" ref-type="bibr">2008</xref>
; Woods et al.,
<xref rid="B112" ref-type="bibr">2010</xref>
). The left and right planum temporale (PT) in humans, located posterior and lateral to Heschl's gyrus (HG), are thought to represent subsequent processing stages comprised of computational hubs that segregate spectro-temporal patterns associated with complex sounds, including processing of acoustic textures, location cues, and prelinguistic analysis of speech sounds (Griffiths and Warren,
<xref rid="B39" ref-type="bibr">2002</xref>
; Obleser et al.,
<xref rid="B89" ref-type="bibr">2007</xref>
; Overath et al.,
<xref rid="B90" ref-type="bibr">2010</xref>
). Subsequent cortical pathways are thought to integrate corresponding acoustic streams over longer time frames, including the posterior portions of the superior temporal gyri (STG) and sulci (STS), which represent processing stages more heavily involved in discriminating and recognizing acoustic events and real-world sounds (Maeder et al.,
<xref rid="B74" ref-type="bibr">2001</xref>
; Zatorre et al.,
<xref rid="B113" ref-type="bibr">2004</xref>
; Griffiths et al.,
<xref rid="B38" ref-type="bibr">2007</xref>
; Leech et al.,
<xref rid="B66" ref-type="bibr">2009</xref>
; Goll et al.,
<xref rid="B35" ref-type="bibr">2011</xref>
; Teki et al.,
<xref rid="B104" ref-type="bibr">2011</xref>
). Additionally, sounds containing vocalizations (human or animal) or strong harmonic content evoke activity along various bilateral STG pathways, which subsequently feed into regions that are relatively specialized for processing speech and/or prosodic information [Zatorre et al.,
<xref rid="B114" ref-type="bibr">1992</xref>
; Obleser et al.,
<xref rid="B88" ref-type="bibr">2008</xref>
; Lewis et al.,
<xref rid="B71" ref-type="bibr">2009</xref>
; Rauschecker and Scott,
<xref rid="B94" ref-type="bibr">2009</xref>
; Leaver and Rauschecker,
<xref rid="B65" ref-type="bibr">2010</xref>
; Talkington et al.,
<xref rid="B102" ref-type="bibr">in press</xref>
].</p>
<p>Many of the above cortical mapping studies have been conducted using stimuli that capture the spectro-temporal characteristics of natural sounds in an effort to define mechanisms that abstract behaviorally meaningful events. However, given the broader multisensory and supramodal nature of object knowledge representations (Caramazza and Mahon,
<xref rid="B12" ref-type="bibr">2003</xref>
; Martin,
<xref rid="B77" ref-type="bibr">2007</xref>
; Lewis,
<xref rid="B67" ref-type="bibr">2010</xref>
), the concept of an “auditory object” is convenient for more generally addressing issues related to hearing perception and cognition. While its definition remains operational, one principle of auditory object processing is that auditory pattern analyses should allow for perceptual categorization and that auditory objects should be separable by perceptual boundaries (Griffiths and Warren,
<xref rid="B40" ref-type="bibr">2004</xref>
; Husain et al.,
<xref rid="B52" ref-type="bibr">2004</xref>
). However, beyond representations of components of speech and speech-like sounds, identifying other “bottom-up” acoustic signal attributes and perceptual dimensions that may be used to distinguish between different real-world sound categories remains poorly understood.</p>
<p>In our earlier studies, we mapped brain regions that were responsive to four distinct semantic (“top-down”) categories of behaviorally relevant real-world
<italic>action</italic>
sounds (devoid of any vocalization content). This included two categories of biological (living) action sounds, human and animal sources, and two categories of non-biological (non-living) action sounds, mechanical, and environmental sources (Engel et al.,
<xref rid="B25" ref-type="bibr">2009</xref>
; Lewis et al.,
<xref rid="B70" ref-type="bibr">2011</xref>
). For the present study, we assumed that the five aforementioned conceptual categories of sound (vocalizations plus four action sound categories) may also be characterized by quantifiable acoustic features. Re-analyzing data from our earlier study (Engel et al.,
<xref rid="B25" ref-type="bibr">2009</xref>
), we focused on examining perceptual features and acoustic signal attributes of the non-biological action sound sources. This included automated machinery (actions perceived as not being directly associated with a human or agent instigating the action) and the natural environment (see Table
<xref ref-type="table" rid="TA1">A1</xref>
).</p>
<p>We restricted our analyses to non-biological action sounds because high-level acoustic features associated with biological action sounds can be strongly tied to motor and multisensory associations (for review see Lewis,
<xref rid="B67" ref-type="bibr">2010</xref>
). Meaningful biological action sounds may ultimately be processed along specialized pathways that extract or probabilistically compare their acoustic features with representations of the observer's own networks related to sound-producing motor actions (Rizzolatti et al.,
<xref rid="B98" ref-type="bibr">1998</xref>
; Kohler et al.,
<xref rid="B61" ref-type="bibr">2002</xref>
; Rizzolatti and Craighero,
<xref rid="B97" ref-type="bibr">2004</xref>
), evoking “embodied” representations (Barsalou,
<xref rid="B6" ref-type="bibr">2008</xref>
) and assessments of motor action intention (Aziz-Zadeh et al.,
<xref rid="B5" ref-type="bibr">2004</xref>
; Bidet-Caulet et al.,
<xref rid="B10" ref-type="bibr">2005</xref>
; Iacoboni et al.,
<xref rid="B53" ref-type="bibr">2005</xref>
; Gazzola et al.,
<xref rid="B34" ref-type="bibr">2006</xref>
; Lewis et al.,
<xref rid="B69" ref-type="bibr">2006</xref>
; Aglioti et al.,
<xref rid="B1" ref-type="bibr">2008</xref>
; de Lucia et al.,
<xref rid="B20" ref-type="bibr">2009</xref>
).</p>
<p>One salient feature of the mechanical and environmental sounds we previously examined was their wide range in spatial scale (Lewis et al.,
<xref rid="B70" ref-type="bibr">2011</xref>
). While there were exceptions, most of the mechanical sounds depicted discrete “object-like” things (e.g., clock, fax machine, laundry machine) while most of the environmental sounds depicted an acoustic scene on a large-scale relative to the size of the observer (e.g., wind, rain, ocean waves). This observation led us to question whether an object-like to scene-like perceptual continuum or boundary might be explicitly represented along intermediate processing stages of the human auditory system, analogous to the parallel hierarchical organizations reported for the visual system. Thus, our first objective was to test the hypothesis that the auditory system would house intermediate cortical processing stages or channels that are parametrically sensitive to signal attributes characteristic of object-like versus scene-like action sounds. We further hypothesized that any regions sensitive to object-like acoustic features would be located outside of earlier primary auditory cortices (PACs) yet prior to stages sensitive to different “conceptual-level” representations of real-world sound-source categories that we and others have previously reported.</p>
<p>Assuming that some cortical regions would show either parametric sensitivity or a sharp categorical boundary to object-like versus scene-like non-biological action sounds, a second objective of this study was to identify specific acoustic signal attributes that might quantitatively characterize this perceptual dimension. Environmental sounds have previously been modeled as distinguishable sound textures using relatively simple time-averaged statistics (McDermott and Simoncelli,
<xref rid="B80" ref-type="bibr">2011</xref>
). Additionally, quantitative characterizations using measures of spectral dynamics are reported to represent a possible scheme for categorizing natural sounds (Reddy et al.,
<xref rid="B96" ref-type="bibr">2009</xref>
). Thus, we further hypothesized that some of these relatively low-order signal attributes of our ecologically valid sound stimuli would show a parametric correlation with the perceptual ratings of object saliency and/or the activation of cortical regions sensitive to sounds rated more as object-like versus scene-like.</p>
</sec>
<sec sec-type="materials|methods" id="s2">
<title>Materials and methods</title>
<sec>
<title>Participants</title>
<p>The functional magnetic resonance imaging (fMRI) data for this study draws from earlier publications (Engel et al.,
<xref rid="B25" ref-type="bibr">2009</xref>
; Lewis et al.,
<xref rid="B70" ref-type="bibr">2011</xref>
), which provide additional details of the sound stimuli, psychophysical attributes of the sounds, and imaging methods used. For the present study, we included neuroimaging results from 31 right-handed participants (19–36 years of age, 16 women). All participants were native English-speakers with no previous history of neurological or psychiatric disorders, or auditory impairment, and had a self-reported normal range of hearing. Informed consent was obtained for all participants following guidelines approved by the West Virginia University Institutional Review Board.</p>
</sec>
<sec>
<title>Sound stimulus creation and presentation</title>
<p>The sound stimuli were compiled from professionally recorded action sounds (Sound Ideas, Inc, Richmond Hill, ON, Canada) including 64 sounds in each of four conceptual categories of sound sources (human, animal, mechanical, and environmental). The mechanical and environmental sounds retained for primary analyses in the present study are included in Table
<xref ref-type="table" rid="TA1">A1</xref>
, and a complete list of the sounds is detailed in our earlier study (Engel et al.,
<xref rid="B25" ref-type="bibr">2009</xref>
). Sound stimuli were edited to 3.0 ± 0.5 s duration, matched for total root mean-squared (RMS) power, with 25 ms onset/offset ramps (Cool Edit Pro, Syntrillium Software Co., owned by Adobe). Stimuli were retained from one channel (mono, 44.1 kHz, 16-bit), and these single-channel stimuli were used for acoustic signal processing analyses. For participants, monaural sounds were presented to both ears, which precluded the presence of binaural spatial cues, yet allowed the sounds to be heard more clearly. During fMRI scanning, high-fidelity sound stimuli were presented using a Windows PC (Presentation software version 11.1, Neurobehavioral Systems Inc.) and delivered via MR-compatible electrostatic ear buds (STAX SRS-005 Earspeaker system; Stax LTD., Gardena, CA) worn under sound-attenuating ear muffs.</p>
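The RMS matching and onset/offset ramping described above can be sketched as follows. This is a minimal NumPy approximation, not the Cool Edit Pro procedure the study actually used; the linear ramp shape and the target RMS level are illustrative assumptions.

```python
import numpy as np

def match_rms_and_ramp(signal, target_rms=0.05, ramp_ms=25, sr=44100):
    """Scale a mono signal to a target RMS level, then apply
    onset/offset ramps. Linear ramps and the 0.05 target are
    assumptions; the study matched total RMS power across stimuli
    and applied 25 ms ramps in an audio editor."""
    scaled = signal * (target_rms / np.sqrt(np.mean(signal ** 2)))
    n = int(sr * ramp_ms / 1000)      # samples per ramp
    ramp = np.linspace(0.0, 1.0, n)
    scaled[:n] *= ramp                # fade in
    scaled[-n:] *= ramp[::-1]         # fade out
    return scaled
```

Matching total RMS power this way equates overall loudness across stimuli so that category differences in BOLD response cannot be attributed to gross intensity differences.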
</sec>
<sec>
<title>Scanning paradigms</title>
<p>Each scanning session consisted of eight separate functional imaging runs, across which the sound stimuli and silent events were presented in random order. Participants randomly assigned to Group A (
<italic>n</italic>
= 12) were instructed to press a response box button immediately at the offset of each sound stimulus (from Engel et al.,
<xref rid="B25" ref-type="bibr">2009</xref>
). They were unaware of the purposes of the study and had not heard these particular sound stimuli before. Participants in Group B (
<italic>n</italic>
= 19), also unfamiliar with the specific sound stimuli, were instructed to silently determine in their head (no overt responses) whether or not a human was directly involved with the production of the action sound (from Engel et al.,
<xref rid="B25" ref-type="bibr">2009</xref>
and Lewis et al.,
<xref rid="B70" ref-type="bibr">2011</xref>
). Based on post-scanning assessments by participants, we censored responses to 45 of the 256 sound stimuli
<italic>post-hoc</italic>
for all participant data-sets to be certain that the sounds fell clearly within a given category and were perceived to be devoid of any vocalization content. Brain responses to sounds that were incorrectly categorized, based on the individual's scanning responses (Group B) or post-scanning responses (Group A), were excluded from all analyses for that individual. Additionally, the mean entropy or spectral structure variation (SSV) measures could not be derived for some sound stimuli (see below), and responses to those sounds were excluded from all analyses.</p>
</sec>
<sec>
<title>Magnetic resonance imaging and data analysis</title>
<p>Scanning was completed on a 3 Tesla General Electric Horizon HD MRI scanner using a quadrature bird-cage head coil. We acquired whole-head, spiral in-and-out images of blood-oxygenated level dependent (BOLD) signals (Glover and Law,
<xref rid="B32a" ref-type="bibr">2001</xref>
) using a clustered-acquisition fMRI design. This allowed sound stimuli to be presented during silent periods (at a comfortable level between 80–83 dB C-weighted) without the presence of scanner noise (Edmister et al.,
<xref rid="B21a" ref-type="bibr">1999</xref>
; Hall et al.,
<xref rid="B46" ref-type="bibr">1999</xref>
). A sound or silent event occurred every 9.3 s. At 6.8 s after event onset, BOLD signals were collected as 28 axial brain slices with 1.9 × 1.9 × 4 mm
<sup>3</sup>
spatial resolution (TR = 9.3 s, TE = 36 ms, OPTR = 2.3 s volume acquisition, FOV = 24 cm). In a subsequent imaging sequence, whole brain T1-weighted anatomical MR images were collected using a spoiled GRASS pulse sequence (SPGR, 1.2 mm slices with 0.94 × 0.94 mm
<sup>2</sup>
in-plane resolution).</p>
<p>Acquired data were analyzed using volumetric-based registration techniques with AFNI software (
<ext-link ext-link-type="uri" xlink:href="http://afni.nimh.nih.gov/">http://afni.nimh.nih.gov/</ext-link>
) and related plug-ins (Cox,
<xref rid="B16" ref-type="bibr">1996</xref>
). For each participant's data, the eight scans were concatenated into a single time series and brain volumes were corrected for baseline linear drift and for global head motion translations and rotations. BOLD signals were normalized to a percent signal change on a voxel-by-voxel basis relative to responses to the silent events that were presented randomly throughout each scanning run (Belin et al.,
<xref rid="B9" ref-type="bibr">1999</xref>
; Hall et al.,
<xref rid="B46" ref-type="bibr">1999</xref>
). Several multiple linear regression models (using 3dDeconvolve) identified voxels showing preferential activation related either to the Likert scale object-vs.-scene ratings of sounds, the category of sound, or parametric measures of acoustic signal attributes (addressed below). Regression coefficients were spatially low-pass filtered (4 mm box filter), and subjected to
<italic>t-test</italic>
and thresholding.</p>
<p>For whole-brain correction, we estimated the spatial structure of the noise in the BOLD signal in voxels outside the brain (using AFNI plug-ins AlphaSim and 3dFWHMx) after the residuals left over from linear model fitting were subtracted from each voxel's time series. This yielded an estimated 2.0 × 2.1 × 3.4 mm
<sup>3</sup>
spatial smoothness in
<italic>x</italic>
,
<italic>y</italic>
, and
<italic>z</italic>
dimensions (full-width half-max Gaussian filter widths). Using the estimated 2.4 mm
<sup>3</sup>
spatial blur in brain voxels, together with a minimum cluster size of 20 voxels, and voxel-wise
<italic>p</italic>
-value of
<italic>p</italic>
< 0.05 yielded a whole-brain correction at α < 0.05. Anatomical and functional imaging data were transformed into standardized Talairach coordinate space (Talairach and Tournoux,
<xref rid="B97a" ref-type="bibr">1988</xref>
). Data were then projected onto the PALS atlas cortical surface models (in AFNI-tlrc) using Caret software (
<ext-link ext-link-type="uri" xlink:href="http://brainmap.wustl.edu">http://brainmap.wustl.edu</ext-link>
) (van Essen et al.,
<xref rid="B107" ref-type="bibr">2001</xref>
; van Essen,
<xref rid="B106" ref-type="bibr">2005</xref>
).</p>
</sec>
<sec>
<title>Acoustic signal attributes of mechanical and environmental sounds</title>
<p>The mechanical and environmental action sounds retained for analyses in the current study had been matched overall for low-level acoustic attributes including loudness (RMS intensity) and duration ranges. To assess changes in the spectro-temporal dynamics of the action sounds, we measured the mean entropy (Wiener entropy) in the acoustic signal (Tchernichovski et al.,
<xref rid="B103" ref-type="bibr">2001</xref>
) using freely available phonetic software (Praat,
<ext-link ext-link-type="uri" xlink:href="http://www.fon.hum.uva.nl/praat/">http://www.fon.hum.uva.nl/praat/</ext-link>
). We further derived the SSVs of the sounds (using Praat), which is a measure of changes in signal entropy over time that has been shown to have utility in categorizing natural sound signals (Reddy et al.,
<xref rid="B96" ref-type="bibr">2009</xref>
). The natural log of SSV measures provided a more widespread distribution of values relative to the Likert scale ratings, and thus we used ln(SSV) values for linear regression analyses. Both the entropy and ln(SSV) measures were
<italic>z</italic>
-normalized based on the mean and standard deviation of the entropy measures [(x−μ)/σ] of the retained mechanical and environmental sounds.</p>
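The entropy measures above were computed in Praat; a rough NumPy equivalent conveys the idea. Wiener entropy is the log ratio of the geometric to the arithmetic mean of the power spectrum (near 0 for noise-like signals, strongly negative for tonal ones). Defining SSV as the variance of framewise entropy over time is an assumption here; see Reddy et al. (2009) for the exact formulation.

```python
import numpy as np

def wiener_entropy(frame):
    """Wiener (spectral) entropy of one frame: log of the geometric
    mean over the arithmetic mean of the power spectrum (<= 0)."""
    power = np.abs(np.fft.rfft(frame)) ** 2 + 1e-12  # floor avoids log(0)
    return np.mean(np.log(power)) - np.log(np.mean(power))

def entropy_and_ssv(signal, frame_len=1024, hop=512):
    """Mean entropy across frames, plus spectral structure variation
    (SSV), approximated here as the variance of framewise entropy."""
    ents = np.array([wiener_entropy(signal[i:i + frame_len])
                     for i in range(0, len(signal) - frame_len, hop)])
    return ents.mean(), ents.var()
```

As described above, the study then took the natural log of the SSV values and z-normalized both measures [(x−μ)/σ] before entering them into the regression models.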
</sec>
<sec>
<title>Perceptual attributes of sound stimuli</title>
<p>All of the 64 mechanical and 64 environmental sound stimuli were presented in random order to a group of participants (
<italic>n</italic>
= 18) not included in the fMRI scanning paradigms. They rated the sounds using a Likert scale (1–5) with written responses, assessing the degree to which they perceived the sound-source as a distinct object (low rating) versus part of an acoustic scene (high rating). As examples, they were instructed that hearing the hum of traffic when you are in a neighborhood that is near an interstate highway might be rated more as an acoustic scene (response 4 or 5), whereas hearing a stopwatch ticking might be perceived more as a distinct object (response 1 or 2). The ratings were averaged across the group (Figure
<xref ref-type="fig" rid="F1">1A</xref>
). Seven of the environmental sounds rated as object-like (Figures
<xref ref-type="fig" rid="F1">1A,B</xref>
) fell below the overall average Likert ratings of 3.08. Using this number of sounds as a threshold, we opted to identify cortical regions most sensitive to the object-vs-scene perceptual dimension by examining (1) seven extreme object-like environmental (EO7) sounds versus seven extreme scene-like mechanical (MS7) sounds, and conversely (2) cortical responses to the seven extreme object-like mechanical (MO7) sounds versus the seven extreme scene-like environmental (ES7) sounds (28 sounds total, see Table
<xref ref-type="table" rid="TA1">A1</xref>
bold text entries). To validate the reliability of the Likert ratings of the retained 54 mechanical and 57 environmental sounds (Table
<xref ref-type="table" rid="TA1">A1</xref>
) we calculated Cronbach's alpha scores (Cronbach,
<xref rid="B17" ref-type="bibr">1951</xref>
) using multivariate methods (JMP 9.0 software, SAS Institutes, Inc.). Including ratings of all 111 sounds (54 mechanical plus 57 environmental) by the entire set of 18 participants yielded a value of 0.9474. As a more conservative measure, including only the 28 most extreme object-like and scene-like sounds (mentioned above) yielded a value of α = 0.9784, and subsequent removal of each participant individually from the group data consistently produced values between 0.9763 and 0.9784, which were well above the accepted consistency score of 0.7 (Nunnally,
<xref rid="B87" ref-type="bibr">1978</xref>
).</p>
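The Cronbach's alpha scores reported above follow the standard formula; a sketch treating each rater as an "item" over the set of rated sounds is shown below. The study used JMP's multivariate tools, so this NumPy version is an illustrative stand-in.

```python
import numpy as np

def cronbach_alpha(ratings):
    """Cronbach's alpha for an (n_sounds x n_raters) matrix of Likert
    ratings: alpha = k/(k-1) * (1 - sum(item variances) / var(totals)),
    where k is the number of raters (treated as items)."""
    k = ratings.shape[1]
    item_var = ratings.var(axis=0, ddof=1).sum()
    total_var = ratings.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)
```

Values approaching 1 indicate that raters ordered the sounds consistently along the object-to-scene dimension, which is why the reported alphas above 0.97 support treating the averaged Likert ratings as a stable perceptual measure.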
<fig id="F1" position="float">
<label>Figure 1</label>
<caption>
<p>
<bold>Cortical sensitivity to the perception of auditory “objects” versus acoustic scenes, using real-world non-biological action sounds. (A)</bold>
Frequency of Likert ratings (1–5) of the Mechanical (M; blue,
<italic>n</italic>
= 54 sounds retained) and Environmental sound stimuli (E; green,
<italic>n</italic>
= 57). See Table
<xref ref-type="table" rid="TA1">A1</xref>
bolded entries for a list of these sounds.
<bold>(B)</bold>
Power spectra of the 28 action sounds with the most extreme object-vs-scene ratings in each conceptual category of action sound (refer to color key).
<bold>(C)</bold>
Volume-based group-averaged activation common to both Groups A and B (conjunction analyses; yellow with black outlines) that showed preferential activation to sounds judged to be object-like (MO7 and EO7) versus scene-like (MS7 and ES7). Cortical responses to the same sounds were used to define regions preferential for mechanical (blue) versus environmental (green) sounds. Transparent white patches in the left hemisphere depict an overlapping “heat map” of tonotopically organized regions (disregarding orientation of the tonotopic gradient) derived from eight individuals. STS = superior temporal sulcus.
<bold>(D)</bold>
Charts illustrating the BOLD percent signal change response profiles as a function of Likert scale rating for both Groups (refer to color key). Blue squares depict mechanical sounds and green circles depict environmental sounds. The group-averaged BOLD percent signal change responses to the human action sounds (red diamonds; left STG 0.62% BOLD signal differential, right 0.73%) and animal action sounds (yellow triangle; left 0.61%, right 0.72%) are also depicted for comparison.
<bold>(E)</bold>
Charts separately illustrating BOLD responses to environmental and mechanical action sounds as a function of Likert scale ratings. Refer to text for other details.</p>
</caption>
<graphic xlink:href="fnsys-06-00027-g0001"></graphic>
</fig>
</sec>
</sec>
<sec sec-type="results" id="s3">
<title>Results</title>
<p>In our earlier studies examining these same data we reported that the medial two-thirds of HG, the approximate location of PACs, were strongly activated by both the mechanical and environmental sound stimuli; there was no differential activation to these different conceptual categories of sound in these regions (Engel et al.,
<xref rid="B25" ref-type="bibr">2009</xref>
; Lewis et al.,
<xref rid="B70" ref-type="bibr">2011</xref>
). Rather, mechanical action sounds preferentially activated the bilateral anterior superior temporal gyri (aSTG) and parahippocampal regions, while environmental action sounds preferentially activated bilateral medial prefrontal cortices, precuneus, retrosplenial cortex, and the right hemisphere visual motion processing area hMT/V5 (Engel et al.,
<xref rid="B25" ref-type="bibr">2009</xref>
; Lewis et al.,
<xref rid="B70" ref-type="bibr">2011</xref>
). For the present study, we examined cortical responses to the same mechanical and environmental sound stimuli but “re-grouped” them according to their perceptual ratings along a putative continuum of object-like to scene-like; psychophysical ratings of the mechanical and environmental sounds were derived from non-imaging listeners (
<italic>n</italic>
= 18) who rated the sounds on a Likert scale (Figure
<xref ref-type="fig" rid="F1">1A</xref>
; range 1 = object-like to 5 = scene-like; refer to Methods).</p>
<p>To assess extremes in response to the object-like versus scene-like sounds, we charted the power spectra of the 28 most extreme-rated sounds for each category (Figure
<xref ref-type="fig" rid="F1">1B</xref>
; seven in each subset, see Methods). Inspection of these spectra revealed greater roughness of the contours for the sounds rated as more object-like and smoother contours for the sounds rated as more scene-like. We averaged the power spectra of each of these four subsets of sound (not shown) and fit them with a logarithmic function (
<italic>y</italic>
=
<italic>a</italic>
× ln(
<italic>x</italic>
) +
<italic>b</italic>
). This revealed a systematic increase in the magnitude of the slope of the logarithmic fit with increasing scene-like ratings (Figure
<xref ref-type="fig" rid="F1">1B</xref>
, the value of “
<italic>a</italic>
” shown in parentheses). These power spectrum features are addressed later in the context of signal attribute processing (see Discussion).</p>
<p>We mapped regions showing significantly preferential activity to the 28 action sounds that were rated at the extremes of the object-to-scene perceptual dimension. Our first analysis entailed a conjunction contrasting (1) the seven mechanical action sounds (Table
<xref ref-type="table" rid="TA1">A1</xref>
) rated as being the most object-like (Likert rating range of 1.1–1.4; dark blue traces in Figure
<xref ref-type="fig" rid="F1">1B</xref>
versus the seven environmental sounds that were most scene-like (range 4.5–4.7; dark green), together with (2) regions sensitive to the seven environmental sounds that were most object-like (range 1.9–2.8; light green) versus the seven mechanical sounds that were rated as most scene-like (range 3.6–4.5; light blue). Thus, for the fMRI participants, the cortical responses to sounds generally judged as object-like versus scene-like were balanced across correctly categorized mechanical and environmental sound-source stimuli.</p>
<p>The above fMRI analysis had been conducted for two different groups of listeners: Group A participants (
<italic>n</italic>
= 12) pressed a button as quickly as possible immediately at the end of each sound, and Group B participants (
<italic>n</italic>
= 19) silently judged whether or not the sound was directly produced by a human (no overt responses). Both groups of listeners revealed significant bilateral activation along the STG that was preferential for sounds perceived as object-like as opposed to scene-like, independent of the category of sound (data not shown). Consequently, we combined those datasets using a second conjunction analysis to reveal activation foci common to both Groups A and B (Figure
<xref ref-type="fig" rid="F1">1C</xref>
, yellow with black outlines), which provided a more conservative localization of cortical regions showing sensitivity to object-like sounds, independent of sound category and listening task.</p>
<p>These auditory object-sensitive STG foci (Talairach coordinates: left STG
<italic>x</italic>
= −54,
<italic>y</italic>
= −12,
<italic>z</italic>
= 1, volume = 148 μl; right STG 54, −21, 7, 783 μl) fell well outside of the estimated locations of primary auditory cortices (PACs), which are typically located along the medial two-thirds of HG (Figure
<xref ref-type="fig" rid="F1">1C</xref>
, right hemisphere dotted white line) (Morosan et al.,
<xref rid="B84" ref-type="bibr">2001</xref>
; Rademacher et al.,
<xref rid="B92" ref-type="bibr">2001</xref>
). We additionally charted the functionally estimated locations of PACs of eight participants incorporating results from our earlier frequency-dependent response (“tonotopy”) mapping studies (Figure
<xref ref-type="fig" rid="F1">1C</xref>
, left hemisphere white heat map) using the same MRI scanner and same basic clustered acquisition fMRI design [Lewis et al.,
<xref rid="B71" ref-type="bibr">2009</xref>
; Talkington et al. (
<xref rid="B102" ref-type="bibr">in press</xref>
)]. This further indicated that the STG foci were outside of primary auditory cortices, which were functionally defined here as contiguous stretches of cortex that were differentially responsive to high-, medium-, and low-frequency pure tones and band-pass noises.</p>
<p>We also charted cortex preferential for the 14 mechanical versus 14 environmental action sounds (from Figure
<xref ref-type="fig" rid="F1">1B</xref>
), which revealed regions more sensitive to category membership at a conceptual level (Figure
<xref ref-type="fig" rid="F1">1C</xref>
, blue versus green regions). While the 14 mechanical sounds were overall more object-like than the 14 environmental sounds, there nonetheless was a double dissociation that supported our earlier finding. In particular, the anterior portions of the left and right STG (aSTG) were preferentially activated by the mechanical action sounds, and the hMT/V5 region, among other cortices, was preferentially activated by the environmental action sounds. Thus, the STG foci sensitive to sounds rated more as object-like (yellow) were in locations distinct from many of the regions that were preferential for environmental (green) or mechanical (blue) action sounds at a categorical level. While this 2 × 2 analysis design was inherently non-orthogonal (using the same four subsets of sound), both the anatomical and functional placement of the bilateral STG foci preferential for object-like qualities was consistent with representing intermediate processing stages within the cortical networks subserving hearing perception (see Discussion).</p>
<p>Using the STG foci as regions of interest, we next charted the averaged BOLD signal response (across all subjects;
<italic>n</italic>
= 31) relative to the Likert scale rating of each sound (Figure
<xref ref-type="fig" rid="F1">1D</xref>
). These results further indicated that a roughly
<italic>linear</italic>
parametric correlation existed between the Likert ratings and activation in the left and right STG, with activation greater for object-like sounds and lower for scene-like sounds, for both Group A (right STG yielded
<italic>R</italic>
= −0.478, Steiger's
<italic>Z</italic>
-test 111 df,
<italic>Z</italic>
= 3.72,
<italic>p</italic>
< 0.01; left STG
<italic>R</italic>
= −0.318,
<italic>p</italic>
< 0.01) and Group B listeners (right STG
<italic>R</italic>
= −0.436,
<italic>p</italic>
< 0.01; left STG
<italic>R</italic>
= −0.400,
<italic>p</italic>
< 0.01). This correlation with object-like Likert ratings persisted separately for both mechanical and environmental sound categories (Figure
<xref ref-type="fig" rid="F1">1E</xref>
), in both the left STG (Environmental sounds,
<italic>R</italic>
= −0.47,
<italic>p</italic>
< 0.01; Mechanical sounds
<italic>R</italic>
= −0.41,
<italic>p</italic>
< 0.01) and right STG (Environmental sounds,
<italic>R</italic>
= −0.33,
<italic>p</italic>
< 0.05; Mechanical sounds
<italic>R</italic>
= −0.36,
<italic>p</italic>
< 0.01).</p>
<p>We further assessed cortical activation showing differential BOLD signal in response to the remaining four pairings of four extreme-rated sound groups along the object-to-scene continuum (i.e., Figure
<xref ref-type="fig" rid="F1">1B</xref>
pairs MO7vsEO7, MO7vsMS7, EO7vsES7, and MS7vsES7): For both Groups A and B, these pair-wise comparisons consistently resulted in activation that was either significantly preferential for the more object-like subset of sounds or at least trended toward significance within or near the bilateral STG (data not shown). These differential activation contrasts were generally stronger and more expansive for Group B, who performed a task that required sound categorization. Thus, while the bilateral STG (Figure
<xref ref-type="fig" rid="F1">1C</xref>
) were significantly more responsive to sounds rated as more object-like for both of our listening tasks, task demands could modulate the relative degree and cortical expanse of activation associated with processing auditory object salience.</p>
<p>Group A participants, who performed a non-categorization task (pressing a button at the end of each sound), revealed a double-dissociation of networks sensitive to object-like versus scene-like action sounds (Figure
<xref ref-type="fig" rid="F2">2</xref>
, yellow vs. brown;
<italic>n</italic>
= 12, α < 0.05, corrected). Relative to hearing silent events, the scene-like sounds with this task preferentially activated bilateral anterior cingulate (TLRC
<italic>x</italic>
= 0.5,
<italic>y</italic>
= 41,
<italic>z</italic>
= 6, 643 μl), mid-cingulate (2, –24, 29; 800 μl), and precuneus cortices (2, −49, 40; 1219 μl) for both the mechanical and environmental sounds (Figure
<xref ref-type="fig" rid="F2">2</xref>
, light blue and dark green histograms). This double-dissociation did not meet statistical significance in these or any other brain region for Group B (see histograms), who performed the task of indicating if the sounds were directly produced by a human or not—correctly indicating “not” for both the mechanical and environmental sounds based on post-scan testing. Thus, preferential activation to sounds rated as scene-like, in contrast to object-like, depended heavily on task demands.</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption>
<p>
<bold>A double-dissociation of networks preferential for processing sounds perceived more as auditory objects (yellow) versus acoustic scenes (brown) during the sound offset detection task (Group A,
<italic>n</italic>
= 12; α < 0.05, corrected).</bold>
Histograms show activation profiles (normalized relative to responses to silent events) for participants from both Group A (
<italic>n</italic>
= 12; left-most charts) and B (
<italic>n</italic>
= 19; right).</p>
</caption>
<graphic xlink:href="fnsys-06-00027-g0002"></graphic>
</fig>
<p>We next sought to identify quantifiable acoustic signal attributes that might correlate with the perception of object-like versus scene-like sound stimuli (Likert ratings) and/or the cortical response profiles of the STG foci depicted in Figure
<xref ref-type="fig" rid="F1">1C</xref>
. Both the mechanical and environmental action sounds had been matched in loudness and duration, and binaural spatial cues had been removed from all sound stimuli. Qualitatively, our selection of scene-like sounds tended to be more homogeneous in acoustic structure over time (e.g., the whooshing of wind, or the slow droning of rainfall) and were characterized by relatively smoother 1/
<italic>f</italic>
<sup>α</sup>
structure in their power spectra (see Figure
<xref ref-type="fig" rid="F1">1B</xref>
), where
<italic>f</italic>
= frequency and α ranges from 1 to 2. Inspired by earlier studies, we sought to quantify aspects of these signal features by deriving measures of both mean spectral entropy and changes in entropy dynamics over time (Reddy et al.,
<xref rid="B96" ref-type="bibr">2009</xref>
). Measures of the mean entropy (Figure
<xref ref-type="fig" rid="F3">3A</xref>
) showed no correlation with the object-like versus scene-like perceptual ratings of the mechanical or environmental sounds. However, changes in entropy over time, quantified by SSV measures, did reveal a significant relationship with the object-to-scene perceptual dimension; this relationship held for both categories of sound when examining all sounds within each category (Figure
<xref ref-type="fig" rid="F3">3B</xref>
; environmental sounds
<italic>R</italic>
= −0.476,
<italic>p</italic>
< 0.01; mechanical sounds
<italic>R</italic>
= −0.469,
<italic>p</italic>
< 0.01) or just the 28 extreme-rated sounds (Figure
<xref ref-type="fig" rid="F3">3C</xref>
;
<italic>R</italic>
= −0.622,
<italic>p</italic>
< 0.02). Further quantification and approaches for assessing the 1/
<italic>f</italic>
<sup>α</sup>
signal attributes, or “roughness” distributions (Antal et al.,
<xref rid="B3" ref-type="bibr">2002</xref>
), were beyond the scope of the present study.</p>
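The entropy-based measures can be sketched as follows. This is a minimal illustration in the spirit of the approach cited above (Reddy et al., 2009), not the authors' exact implementation: spectral entropy is computed per short-time window, mean entropy is its average over windows, and SSV is approximated here as the standard deviation of the windowed entropies over time (the frame sizes and test signals are assumptions for demonstration):

```python
import numpy as np

def spectral_entropy(frame):
    """Shannon entropy (bits) of a frame's normalized power spectrum."""
    power = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    p = power / power.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def entropy_measures(signal, frame_len=1024, hop=512):
    """Return (mean entropy, SSV), where SSV is approximated as the
    standard deviation of short-time spectral entropies over time."""
    ents = np.asarray([spectral_entropy(signal[i:i + frame_len])
                       for i in range(0, len(signal) - frame_len, hop)])
    return ents.mean(), ents.std()

# A noise-like "scene" texture has high, stable spectral entropy; a
# tonal "object" whose spectral structure evolves has lower entropy.
rng = np.random.default_rng(1)
noise = rng.normal(size=48000)                    # white-noise texture
t = np.arange(48000) / 48000.0
chirp = np.sin(2 * np.pi * (200 + 3000 * t) * t)  # frequency-swept tone
m_noise, ssv_noise = entropy_measures(noise)
m_chirp, ssv_chirp = entropy_measures(chirp)
```

On this sketch, the mean entropy captures overall spectral flatness while the SSV term tracks how much the spectrum's organization changes from window to window, which is the quantity the text relates to object-like ratings.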
<fig id="F3" position="float">
<label>Figure 3</label>
<caption>
<p>
<bold>Correlations between acoustic signal attributes and perceptual ratings of object-vs-scene non-biological action sounds. (A)</bold>
Mean entropy measures (
<italic>z</italic>
-normalized) showed no significant linear correlation with the Likert ratings of the sound stimuli.
<bold>(B)</bold>
Spectral structure variation (SSV) measures (ln(SSV),
<italic>z</italic>
-normalized) of the sounds as a function of Likert ratings did reveal significant correlations for both the mechanical (blue) and environmental (green) sounds.
<bold>(C)</bold>
Chart derived from panel B showing only the set of 28 extreme-rated sounds from Figure
<xref ref-type="fig" rid="F1">1B</xref>
. See text for other details.</p>
</caption>
<graphic xlink:href="fnsys-06-00027-g0003"></graphic>
</fig>
<p>Based on the correlations between object-to-scene Likert ratings with SSV signal attributes, we re-analyzed the fMRI data for both Groups A and B testing for regions showing parametric linear sensitivity to SSV of the 54 mechanical and 57 environmental sounds. This parametric fMRI analysis (initially combining data from both groups based on the rationale described for Figure
<xref ref-type="fig" rid="F1">1C</xref>
) revealed bilateral SSV-sensitive regions (Figure
<xref ref-type="fig" rid="F4">4A</xref>
, red;
<italic>p</italic>
< 0.00001, corrected) along large expanses of the superior temporal plane and STG, and this overlapped with the ROIs sensitive to object-like sounds (yellow with black outlines). The right STG focus preferential for object-like sounds showed a significant correlation of increasing activation with increasing SSV measures for both the environmental and mechanical sounds (Figure
<xref ref-type="fig" rid="F4">4B</xref>
; environmental
<italic>R</italic>
= +0.592,
<italic>p</italic>
< 0.01 two-tailed; mechanical
<italic>R</italic>
= +0.501,
<italic>p</italic>
< 0.01), while the left STG showed SSV-sensitivity to the environmental sounds (
<italic>R</italic>
= +0.417,
<italic>p</italic>
< 0.05), but only a trend toward SSV-sensitivity for the mechanical sounds. Separately, Groups A and B showed very similar fMRI BOLD response profiles to SSV (not shown) for both the environmental action sounds (right STG: Group A, slope = 0.1352,
<italic>R</italic>
= +0.468,
<italic>p</italic>
< 0.02; Group B, slope = 0.1589,
<italic>R</italic>
= +0.588,
<italic>p</italic>
< 0.01) and mechanical action sounds (Group A,
<italic>R</italic>
= 0.390,
<italic>p</italic>
< 0.05; Group B,
<italic>R</italic>
= 0.469,
<italic>p</italic>
< 0.02). Thus, task factors did not significantly affect the correlations between SSV measures and the BOLD fMRI responses within the bilateral STG foci.</p>
<fig id="F4" position="float">
<label>Figure 4</label>
<caption>
<p>
<bold>(A)</bold>
Location of object-vs-scene sensitive cortices (yellow from Figure
<xref ref-type="fig" rid="F1">1C</xref>
) relative to regions showing parametric sensitivity to ln(SSV) at
<italic>p</italic>
< 0.00001 (red) and mean entropy at
<italic>p</italic>
< 0.0001 (purple). Charts show average BOLD signal responses from within the left and right STG foci (
<italic>n</italic>
= 31 subjects) relative to
<bold>(B)</bold>
ln(SSV) values,
<bold>(C)</bold>
mean entropy, and
<bold>(D)</bold>
global HNR values. ns = not significant. Refer to text for other details.</p>
</caption>
<graphic xlink:href="fnsys-06-00027-g0004"></graphic>
</fig>
<p>Parametric sensitivity to mean entropy (Figure
<xref ref-type="fig" rid="F4">4A</xref>
, purple;
<italic>p</italic>
< 0.0001, corrected) was also evident along the bilateral STG (left: −53, −6, 5, 567 μl, and right: 50, 3, −5 and 60, −13, 2, 3326 μl combined volume). These foci showed partial overlap with regions identified as being sensitive to object-like sounds (Figure
<xref ref-type="fig" rid="F4">4A</xref>
, overlap colors). The right STG foci sensitive to more object-like sounds (yellow with black outlines) showed a significant linear parametric decrease in activation with increasing mean entropy measures of the environmental sounds (Figure
<xref ref-type="fig" rid="F4">4C</xref>
;
<italic>R</italic>
= −0.472,
<italic>p</italic>
< 0.01), but this did not reach statistical significance for the mechanical action sounds. This result with the environmental sounds held separately for both Groups A (right STG,
<italic>R</italic>
= −0.467, df = 57,
<italic>p</italic>
< 0.02) and Group B (
<italic>R</italic>
= −0.376,
<italic>p</italic>
< 0.05). Thus, the different task demands did not have a strong effect on this basic finding.</p>
<p>We previously assessed human cortex for parametric sensitivity to a harmonics-to-noise ratio (HNR) of vocalizations and artificially constructed sounds, which revealed sensitivity to harmonic content along portions of the bilateral STG (Lewis et al.,
<xref rid="B71" ref-type="bibr">2009</xref>
). The harmonic content of the 54 mechanical action sounds (average = 2.22 ± 4.84 dB HNR; mean ± standard deviation) and 57 environmental sounds (0.23 ± 4.23 dB HNR) differed significantly from one another [
<italic>t-test</italic>
(109) = −2.31;
<italic>p</italic>
= 0.023 two-tail]. The non-biological action sounds we examined were substantially lower in HNR measures than typical vocalization sounds (roughly +4 to +20 dB HNR), thereby precluding a systematic, objective comparison between vocalizations and action sounds. Nonetheless, within the right STG focus for object-like sounds there was a significant correlation of increasing activation with increasing HNR values of the environmental action sounds (Figure
<xref ref-type="fig" rid="F4">4D</xref>
).</p>
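A harmonics-to-noise ratio of the kind described above can be sketched with a simplified, single-frame autocorrelation estimate, HNR = 10·log10(r/(1 − r)) where r is the peak of the normalized autocorrelation within a plausible pitch-lag range (after Boersma's harmonicity method); the frame lengths, pitch range, and test signals here are illustrative assumptions, not the measurement pipeline used in the study:

```python
import numpy as np

def hnr_db(frame, fs, fmin=75.0, fmax=500.0):
    """Simplified harmonics-to-noise ratio (dB) from one frame.

    Takes the peak r of the normalized autocorrelation within the lag
    range for [fmin, fmax] Hz and returns 10*log10(r / (1 - r)).
    """
    x = frame - frame.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    ac = ac / ac[0]                      # normalize so ac[0] == 1
    lo, hi = int(fs / fmax), int(fs / fmin)
    r = ac[lo:hi].max()
    r = min(r, 0.999999)                 # guard against r == 1
    return 10.0 * np.log10(r / (1.0 - r))

fs = 16000
t = np.arange(int(0.1 * fs)) / fs
harmonic = np.sin(2 * np.pi * 150 * t)             # strongly periodic
rng = np.random.default_rng(2)
noisy = harmonic + rng.normal(size=t.size)         # periodicity buried in noise
```

As the text notes, non-biological action sounds typically fall well below the roughly +4 to +20 dB HNR of vocalizations, which this kind of measure would make explicit.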
<p>In sum, a variety of relatively low-level signal attributes (SSV, entropy, and HNR) of real-world sounds showed parametric correlations with cortical activity along various portions of the bilateral STG. Within the STG foci sensitive to object-like perceptual judgments (Figure
<xref ref-type="fig" rid="F4">4A</xref>
, yellow), the right hemisphere foci showed a bias for stronger parametric sensitivity to these attributes. Moreover, the SSV measures of our ecologically valid sound stimuli showed a robust correlation with both perceptual ratings along an object-to-scene continuum (Figure
<xref ref-type="fig" rid="F3">3C</xref>
) as well as with cortical activation profiles of the left and right STG (Figures
<xref ref-type="fig" rid="F1">1C</xref>
,
<xref ref-type="fig" rid="F4">4A</xref>
) that were preferentially activated by sounds rated as more object-like.</p>
</sec>
<sec sec-type="discussion" id="s4">
<title>Discussion</title>
<p>The findings of the present study supported our hypothesis that intermediate stages of auditory cortex are sensitive to an object-like versus scene-like perceptual dimension of real-world non-biological action sounds. In particular, bilateral STG regions showed increasing parametric sensitivity to action sounds judged as being increasingly more object-like in quality. This parametric activation persisted both for mechanical and environmental sound sources and was independent of listening task. Conversely, cortical regions preferentially activated by scene-like sounds showed dependence on the listening task. This suggested that a double-dissociation of cortical networks representing the perceptual dimension of scene-like to object-like sounds may exist, but depends heavily on top-down task demands rather than solely on bottom-up acoustic signal features inherent to these sounds. An analysis of SSV measures of the object-to-scene perceptual continuum further demonstrated that the bilateral STG regions were parametrically sensitive to quantifiable measures related to acoustic signal entropy. This finding suggests that the STG regions may serve as a general-purpose channel or hub for extracting a number of relatively low-order signal attributes that may alert the auditory system to the presence of a distinct acoustic event, sound source, or “auditory object” emerging from the listener's ambient acoustic background. Collectively, these results are addressed below in the context of hierarchical processing stages of the auditory system, acoustic scene processing networks, and analogies to visual object processing stages in cortex.</p>
<sec>
<title>Hierarchical processing stages of the auditory system</title>
<p>The primary auditory cortices and immediately surrounding regions (e.g., PT) were comparably activated by all of our action sound stimuli (effectively subtracted out in our contrasts, cf. Figures
<xref ref-type="fig" rid="F1">1</xref>
,
<xref ref-type="fig" rid="F2">2</xref>
); there was no differential activation in these early cortical processing stages, either for the perceptual dimension of object-like versus scene-like sounds or at a conceptual category level for mechanical versus environmental sound sources. This may partially result from ceiling-level BOLD measurement effects, the use of relatively long-duration stimuli (∼3 s), and/or the timing parameters of our fMRI clustered acquisition paradigm. Nonetheless, the results of the present study were consistent with the idea that the PACs and PT represent earlier hierarchical cortical processing stages (see Introduction). Both of these earlier stages may have performed comparable processing operations on our mechanical and environmental action sounds, which across categories contained many complex spectro-temporal features and were matched overall for duration and intensity.</p>
<p>Beyond the PACs and PT, the bilateral STG regions' preference for the object-like non-biological action sounds was consistent with their depicting higher-order intermediate processing stages. This was due in part to their location, reported circuitry, and response latencies both in non-human primates (Rauschecker et al.,
<xref rid="B95" ref-type="bibr">1995</xref>
; Kaas and Hackett,
<xref rid="B56" ref-type="bibr">1998</xref>
; Kaas et al.,
<xref rid="B57" ref-type="bibr">1999</xref>
; Rauschecker and Scott,
<xref rid="B94" ref-type="bibr">2009</xref>
) and humans (Howard et al.,
<xref rid="B49" ref-type="bibr">2000</xref>
; Woods et al.,
<xref rid="B112" ref-type="bibr">2010</xref>
). Additionally, the fMRI activation profiles of the STG foci correlated parametrically with quantifiable acoustic signal features, suggestive of bottom-up influences that may be predominantly associated with auditory (as opposed to multisensory or amodal) processing. Although we did not directly manipulate attentional demands in this study, Group B listeners (who performed a categorization task) versus Group A listeners (who performed an end-of-sound task) did show differences in the expanse and/or relative amplitude of BOLD signal levels in the STG (e.g., Figure
<xref ref-type="fig" rid="F2">2</xref>
). Hence, the STG were modulated by task demands, consistent with hierarchical placement at intermediate stages of the auditory system (Fritz et al.,
<xref rid="B33" ref-type="bibr">2007a</xref>
,
<xref rid="B32" ref-type="bibr">b</xref>
).</p>
<p>The bilateral STG foci for object-like sounds appeared to represent stages prior to those sensitive to more conceptual-level category network representations. While conceptual category membership and object-vs-scene quality were not fully independent dimensions in our analysis of the 28 extreme-rated sounds, the results nonetheless were consistent with our earlier reports using the full range of action sounds. In particular, portions of the cortical foci located further anterior along the STG (aSTG), plus parahippocampal regions, were preferentially activated by mechanical action sounds relative not only to the environmental sounds (mostly scene-like sounds) but also relative to the object-like human and animal action sound categories (Engel et al.,
<xref rid="B25" ref-type="bibr">2009</xref>
; Lewis et al.,
<xref rid="B70" ref-type="bibr">2011</xref>
). Additionally, as a conceptual-level category, environmental sounds activated various midline cortical regions plus the bilateral visual motion processing areas hMT/V5 (Engel et al.,
<xref rid="B25" ref-type="bibr">2009</xref>
; Lewis et al.,
<xref rid="B70" ref-type="bibr">2011</xref>
). Other studies have reported involvement of the parietal cortices in auditory object detection and segmentation (Cusack,
<xref rid="B18" ref-type="bibr">2005</xref>
; Dykstra et al.,
<xref rid="B23" ref-type="bibr">2011</xref>
; Teki et al.,
<xref rid="B104" ref-type="bibr">2011</xref>
). Collectively, these findings are consistent with the emerging idea that regions outside the conventional auditory system play a significant role in hearing perception germane to non-vocal action sounds (Lewis et al.,
<xref rid="B72" ref-type="bibr">2004</xref>
). The present results did not address the temporal dynamics of when object-like versus scene-like signal processing was taking place in the aforementioned cortical stages (hierarchically or in parallel). Nonetheless, the above results were consistent with placing the object-like sensitive STG foci at a hierarchically intermediate cortical stage of sound processing in the broader context of multimodal and cognitive networks subserving real-world auditory object recognition and identification. These findings provide new insights regarding how the mammalian auditory system may become organized to efficiently detect a given complex sound stream (an object-like sound) and permit it to pop out from an acoustic background scene, including complex scenes that may be composed of multiple “auditory objects” or sound sources, as addressed next.</p>
</sec>
<sec>
<title>Acoustic scene processing</title>
<p>An important role of the properly functioning auditory system is to dynamically filter out the drone of “uninteresting” background acoustic noise (Bregman,
<xref rid="B11" ref-type="bibr">1990</xref>
). While the scene-like and object-like sound stimuli we used were matched overall in loudness, duration, and spatial location, only the scene-like sounds revealed preferential activation of cortical foci along the midline structures, and only for one of our listening task conditions (Figure
<xref ref-type="fig" rid="F2">2</xref>
). Based on ablation studies, one interpretation of these findings is that the activation of the midline cortices may have been related to monitoring sensory events relative to the listener's own behavior for purposes of spatial orientation and memory (Vogt et al.,
<xref rid="B108" ref-type="bibr">1992</xref>
). A related possibility is that down-stream imagery and retrieval of episodic memories related to the acoustic scene may have preferentially led to activation of these midline regions (Hassabis et al.,
<xref rid="B47" ref-type="bibr">2007</xref>
). However, it remains unclear how these interpretations would fully account for the strong modulations we observed due to task demands (indicating end of sound versus indicating if the sound was produced by a human).</p>
<p>An alternative or additional possibility is that the activation profile we observed for scene-like versus object-like sounds along cortical midline structures was related to “default mode” network processing (Raichle et al.,
<xref rid="B93" ref-type="bibr">2001</xref>
; Greicius et al.,
<xref rid="B36" ref-type="bibr">2003</xref>
; Fransson and Marrelec,
<xref rid="B30" ref-type="bibr">2008</xref>
). Acoustic scenes, which may be composed of one or multiple sound textures (e.g., a ventilation and heating system, or sounds of rain and wind heard amidst a forest), often convey sensory information that the auditory system may dynamically and adaptively “filter out” or represent as background acoustic context (Maeder et al.,
<xref rid="B74" ref-type="bibr">2001</xref>
; Gygi et al.,
<xref rid="B45" ref-type="bibr">2004</xref>
; Overath et al.,
<xref rid="B90" ref-type="bibr">2010</xref>
), thereby freeing up attentional resources for other sensory or cognitive processes. This could include freeing up “default mode” processing that becomes suspended during specific goal-directed tasks.</p>
<p>In contrast to the object-like sounds, the scene-like mechanical and environmental sounds of the present study were characterized by relatively smoother 1/
<italic>f</italic>
<sup>α</sup>
functions (Figure
<xref ref-type="fig" rid="F1">1B</xref>
), consistent with earlier reports (Voss and Clarke,
<xref rid="B109" ref-type="bibr">1975</xref>
; Attias and Schreiner,
<xref rid="B4" ref-type="bibr">1997</xref>
). As the distance between an observer and a sound source (or sources) increases, there is greater filtering of the sound pressure waves, such that amplitude modulations in the acoustic signal become smoother. Perceptually, sound-producing actions located further from an observer's focus of attention are arguably more likely to represent events that can be relegated to sensory “background.” Thus, sounds with relatively smoother 1/
<italic>f</italic>
<sup>α</sup>
(among other attributes) are probabilistically more likely to be judged as scene-like, as opposed to object-like, even though the same sound-source may be judged as object-like when it is very close to the observer and/or when attention is directed to it.</p>
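The 1/f^α character of such spectra can be quantified by the slope of a log-log regression on the power spectrum; a minimal sketch (illustrative of the general technique, not the scoring used in the study) follows, where flat white noise yields α near 0 and a smoother, integrated-noise signal yields α near 2:

```python
import numpy as np

def spectral_exponent(signal, fs):
    """Estimate alpha in a 1/f**alpha power spectrum via log-log regression.

    Smoother, more scene-like signals yield alpha nearer 1-2; flat
    (white-noise-like) signals yield alpha near 0.
    """
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    keep = freqs > 0                       # drop the DC bin before taking logs
    slope = np.polyfit(np.log(freqs[keep]), np.log(power[keep]), 1)[0]
    return -slope                          # power ~ f**(-alpha)

rng = np.random.default_rng(3)
white = rng.normal(size=2 ** 15)           # spectrally flat, alpha near 0
brown = np.cumsum(white)                   # integrated noise, alpha near 2
a_white = spectral_exponent(white, 44100.0)
a_brown = spectral_exponent(brown, 44100.0)
```

Under this measure, a larger α marks the smoother spectra that the text associates with scene-like judgments.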
<p>The bilateral STG foci for object-like sounds were also significantly activated by the scene-like sounds relative to silent events, and the degree of activation exhibited a trend toward greater activation during a listening task that required sound categorization (human or not; i.e., Figure
<xref ref-type="fig" rid="F2">2</xref>
, Group B vs. A STG histograms). This response profile was consistent with the view that auditory scene analysis is a dynamic process that optimizes its representations of sound input depending on task demands (Hughes et al.,
<xref rid="B51" ref-type="bibr">2001</xref>
; Fritz et al.,
<xref rid="B33" ref-type="bibr">2007a</xref>
,
<xref rid="B32" ref-type="bibr">b</xref>
). Hence, the bilateral STG may be under top-down attentional control to channel specific acoustic features (such as those reflected by SSV, mean entropy, HNR, and other measures related to 1/
<italic>f</italic>
<sup>α</sup>
profiles) as a means for directing attention to particular types or categories of anticipated sound (auditory objects or acoustic background scenes) based on past listening experiences. In the absence of an explicit sound categorization task, incoming signal input with scene-like signal attributes (e.g., relatively low SSV, spectral flatness, smooth 1/
<italic>f</italic>
<sup>α</sup>
profile) may be processed in a manner that more rapidly leads to acoustic accommodation, which in turn serves to recalibrate the listener to a new ambient noise “background.” Listening for sounds with the goal of categorizing them (i.e., Group B) may have led to decreased activation of default mode networks regardless of the sound category, and possibly regardless of whether or not a sound was even presented (i.e., hearing a “silent event” when
<italic>anticipating</italic>
a sound stimulus). Conversely, the relatively simpler task of determining the sound offset (i.e., Group A) may have permitted a relatively greater degree of activity related to default mode processing when hearing the scene-like sounds (Figure
<xref ref-type="fig" rid="F2">2</xref>
, brown regions). Given these interpretations, activation of the midline structures seems unlikely to be directly related to the processing of acoustic signals
<italic>per se</italic>
.</p>
</sec>
<sec>
<title>Analogies between visual and auditory object processing</title>
<p>In the visual system, objects may be segregated from a background scene based on a number of different and converging features, including object motion, self-motion cues (head and eye movements), borders, textures, colors, etc. (Malach et al.,
<xref rid="B76" ref-type="bibr">1995</xref>
; Grill-Spector et al.,
<xref rid="B43" ref-type="bibr">1998</xref>
; Macevoy and Epstein,
<xref rid="B73" ref-type="bibr">2011</xref>
). For the auditory system, action sounds necessarily imply the presence of some form of dynamic motion, ostensibly leading to the production of the sound pressure waves, whether or not those action sources can also be viewed. Thus, from a more general perspective of sensory processing, the ability to extract salient physical attributes such as changes in signal energy or entropy likely represents an efficient and common neuro-computational means for representing the presence of distinct objects and meaningful events in the environment. While direct comparisons with the visual system are not always straightforward (King and Nelken,
<xref rid="B60" ref-type="bibr">2009</xref>
), some potential common principles in signal processing were revealed by the present study.</p>
<p>One signal-processing computation that may generalize across sensory systems is the time-averaged mean entropy measure. Somewhat surprisingly, the mean entropy measures of environmental sounds, which showed no correlation with object-vs-scene Likert ratings (Figure
<xref ref-type="fig" rid="F3">3A</xref>
), did show a significant parametric correlation with activity in portions of the bilateral STG cortices, including the right hemisphere object-sensitive STG region. We speculate that these attributes may correlate with other perceptual dimensions, including judgments that emphasize discrimination of acoustic “textures” (Reddy et al.,
<xref rid="B96" ref-type="bibr">2009</xref>
; Overath et al.,
<xref rid="B90" ref-type="bibr">2010</xref>
; McDermott and Simoncelli,
<xref rid="B80" ref-type="bibr">2011</xref>
), as opposed to other features such as object size or object-motion attributes. Sound and visual texture perception have been proposed to involve similar types of signal-attribute computations in cortex (Warren et al.,
<xref rid="B111" ref-type="bibr">1972</xref>
; Julesz,
<xref rid="B55" ref-type="bibr">1980</xref>
; Cusack and Carlyon,
<xref rid="B19" ref-type="bibr">2003</xref>
; McDermott and Oxenham,
<xref rid="B79" ref-type="bibr">2008</xref>
; Sathian et al.,
<xref rid="B101" ref-type="bibr">2011</xref>
). Together with the above studies, the present results are consistent with implicating entropy measures as one neuro-computational signal attribute that could be used to help segment, stream, or define objects (auditory, visual, or tactile) as distinct from other objects and from ambient background scenes.</p>
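The entropy statistics discussed above can be made concrete with a short sketch. This is not the authors' actual analysis pipeline; the frame length, hop size, window, and log base below are arbitrary illustrative choices. Frame-wise Shannon entropy of the normalized power spectrum yields a profile whose time average corresponds to a mean-entropy measure and whose variation over time corresponds to an SSV-like statistic:

```python
import numpy as np

def spectral_entropy_profile(signal, frame_len=1024, hop=512):
    """Shannon entropy (bits) of the normalized power spectrum in each frame."""
    entropies = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len] * np.hanning(frame_len)
        power = np.abs(np.fft.rfft(frame)) ** 2
        p = power / power.sum()   # treat the spectrum as a probability distribution
        p = p[p > 0]              # drop empty bins to avoid log(0)
        entropies.append(-np.sum(p * np.log2(p)))
    return np.array(entropies)

# Illustration: a pure tone has a concentrated spectrum (low entropy),
# broadband noise a flat spectrum (high entropy).
fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t)
noise = np.random.default_rng(0).normal(size=fs)

h_tone = spectral_entropy_profile(tone)
h_noise = spectral_entropy_profile(noise)
print(h_tone.mean() < h_noise.mean())       # expected: True
print(h_noise.std())                        # variation over time ~ an SSV-like measure
```

Under this sketch, a sound rated as more "object-like" would be expected to show larger frame-to-frame variation in the entropy profile (the standard deviation above), while the time-averaged mean captures the texture-like dimension discussed here.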
<p>Another potential analogy between auditory and visual processing strategies relates to “stationary” motion cues. The visual system processes first-order motion attributes, such as local luminance changes or changes in motion direction, as well as more subtle second- or third-order motion cues (e.g., contrast or spatial-frequency deviations from the background, isoluminant chromatic motion), and these are thought to rely on separate pathways (Chubb and Sperling,
<xref rid="B15" ref-type="bibr">1988</xref>
; Cavanagh,
<xref rid="B13" ref-type="bibr">1992</xref>
; Huddleston et al.,
<xref rid="B50" ref-type="bibr">2008</xref>
). In the auditory system, earlier neuroimaging studies demonstrated that sound motion processing, including explicit interaural intensity or time differences, robustly activates primary auditory cortices (Griffiths et al.,
<xref rid="B37" ref-type="bibr">1994</xref>
; Mäkelä and McEvoy,
<xref rid="B75" ref-type="bibr">1996</xref>
; Murray et al.,
<xref rid="B85" ref-type="bibr">1998</xref>
; Baumgart et al.,
<xref rid="B7" ref-type="bibr">1999</xref>
; Lewis et al.,
<xref rid="B68" ref-type="bibr">2000</xref>
; Warren et al.,
<xref rid="B110" ref-type="bibr">2002</xref>
). In our action sound stimuli, binaural spatial cues were entirely absent, and acoustic motion information depicting spatial excursions was not prevalent, with the exception of a few sounds containing motion-in-depth cues (looming or receding). Thus, we speculate that the SSV measure in our collection of real-world sounds may be comparable to second- or third-order motion cues that are predominantly processed at stages hierarchically beyond, or at least distinct from, primary auditory cortices. More specifically, the SSV measures may capture physical motion features of real-world sound-sources (monaural motion cues) that could alert the auditory system to the presence of an auditory object (e.g., a drying machine or ticking clock) even though the object as a whole may not be moving about in the space of one's environment
<italic>per se</italic>
.</p>
<p>In the present study, sounds were presented in a relatively artificial acoustic environment: through ear-buds, with the participant's head held still while lying in an MRI scanner with a relatively low acoustic noise floor. Of course, the acoustic contexts in which an individual typically becomes familiar with real-world sound-sources, auditory objects, and acoustic scenes span a wide variety of noisy acoustic backgrounds. Moreover, the freedom to make frequent head movements helps the auditory system both to disambiguate the locations of different sound sources and to learn the acoustic features that might uniquely characterize the identity or category of those sources. Accordingly, we further speculate that acoustic attributes such as the SSV measures may reflect an acoustic dimensionality reduction that the auditory system can use to probabilistically detect a “stationary” sound-producing object. Such processing would be robust against streaming interference due to different background ambiences, changes in the spatial location of the source, and the variations in monaural and binaural acoustic cues that occur during normal head movements by the listener. The processing of spectral signal structure variations characteristic of auditory objects may thus share some analogy with the size- and location-invariant properties observed at intermediate visual object processing stages (e.g., the LOC regions), which are important feature-extraction stages for figure-ground segregation processing of gross-level object form (Grill-Spector et al.,
<xref rid="B43" ref-type="bibr">1998</xref>
; Doniger et al.,
<xref rid="B21" ref-type="bibr">2000</xref>
; Kourtzi and Kanwisher,
<xref rid="B62" ref-type="bibr">2000</xref>
). In sum, portions of the bilateral STG appear to incorporate SSV attributes, among various other low-level quantifiable signal attributes, which may enable the brain to efficiently distinguish salient auditory “objects” and/or events that can emerge in complex acoustic scenes.</p>
</sec>
<sec>
<title>Conflict of interest statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</sec>
</body>
<back>
<ack>
<p>We thank Drs. Robert Cox and Ziad Saad for continual development of AFNI and related software for cortical surface data analyses, and Dr. Kristin Ropella for suggestions on acoustic signal processing.</p>
</ack>
<sec>
<title>Funding</title>
<p>This work was supported by the NCRR NIH COBRE grant E15524 (to the Sensory Neuroscience Research Center of West Virginia University).</p>
</sec>
<ref-list>
<title>References</title>
<ref id="B1">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Aglioti</surname>
<given-names>S. M.</given-names>
</name>
<name>
<surname>Cesari</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Romani</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Urgesi</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Action anticipation and motor resonance in elite basketball players</article-title>
.
<source>Nat. Neurosci</source>
.
<volume>11</volume>
,
<fpage>1109</fpage>
<lpage>1116</lpage>
.
<pub-id pub-id-type="pmid">19160510</pub-id>
</mixed-citation>
</ref>
<ref id="B2">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Allison</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>McCarthy</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Nobre</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Puce</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Belger</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>1994</year>
).
<article-title>Human extrastriate visual cortex and the perception of faces, words, numbers, and colors</article-title>
.
<source>Cereb. Cortex</source>
<volume>4</volume>
,
<fpage>544</fpage>
<lpage>554</lpage>
.
<pub-id pub-id-type="doi">10.1093/cercor/4.5.544</pub-id>
<pub-id pub-id-type="pmid">7833655</pub-id>
</mixed-citation>
</ref>
<ref id="B3">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Antal</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Droz</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Gyorgyi</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Racz</surname>
<given-names>Z.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>Roughness distributions for 1/f alpha signals</article-title>
.
<source>Phys. Rev. E Stat. Nonlin. Soft Matter Phys</source>
.
<volume>65</volume>
,
<fpage>046140</fpage>
.
<pub-id pub-id-type="doi">10.1103/PhysRevE.65.046140</pub-id>
<pub-id pub-id-type="pmid">12005959</pub-id>
</mixed-citation>
</ref>
<ref id="B4">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Attias</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Schreiner</surname>
<given-names>C. E.</given-names>
</name>
</person-group>
(
<year>1997</year>
).
<article-title>Temporal low-order statistics of natural sounds</article-title>
.
<source>Adv. Neural Info. Process. Syst</source>
.
<volume>9</volume>
,
<fpage>27</fpage>
<lpage>33.</lpage>
</mixed-citation>
</ref>
<ref id="B5">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Aziz-Zadeh</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Iacoboni</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Zaidel</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Wilson</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Mazziotta</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Left hemisphere motor facilitation in response to manual action sounds</article-title>
.
<source>Eur. J. Neurosci</source>
.
<volume>19</volume>
,
<fpage>2609</fpage>
<lpage>2612</lpage>
.
<pub-id pub-id-type="doi">10.1111/j.0953-816X.2004.03348.x</pub-id>
<pub-id pub-id-type="pmid">15128415</pub-id>
</mixed-citation>
</ref>
<ref id="B6">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Barsalou</surname>
<given-names>L. W.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Grounded cognition</article-title>
.
<source>Annu. Rev. Psychol</source>
.
<volume>59</volume>
,
<fpage>617</fpage>
<lpage>645</lpage>
.
<pub-id pub-id-type="doi">10.1146/annurev.psych.59.103006.093639</pub-id>
<pub-id pub-id-type="pmid">17705682</pub-id>
</mixed-citation>
</ref>
<ref id="B7">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Baumgart</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Gaschler-Markefski</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Woldorff</surname>
<given-names>M. G.</given-names>
</name>
<name>
<surname>Heinze</surname>
<given-names>H-J.</given-names>
</name>
<name>
<surname>Scheich</surname>
<given-names>H.</given-names>
</name>
</person-group>
(
<year>1999</year>
).
<article-title>A movement-sensitive area in auditory cortex</article-title>
.
<source>Nature</source>
<volume>400</volume>
,
<fpage>724</fpage>
<lpage>725</lpage>
.
<pub-id pub-id-type="doi">10.1038/23385</pub-id>
<pub-id pub-id-type="pmid">10466721</pub-id>
</mixed-citation>
</ref>
<ref id="B8">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Beauchamp</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Lee</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Haxby</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Martin</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>Parallel visual motion processing streams for manipulable objects and human movements</article-title>
.
<source>Neuron</source>
<volume>34</volume>
,
<fpage>149</fpage>
<lpage>159</lpage>
.
<pub-id pub-id-type="doi">10.1016/S0896-6273(02)00642-6</pub-id>
<pub-id pub-id-type="pmid">11931749</pub-id>
</mixed-citation>
</ref>
<ref id="B9">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Belin</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Zatorre</surname>
<given-names>R. J.</given-names>
</name>
<name>
<surname>Hoge</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Evans</surname>
<given-names>A. C.</given-names>
</name>
<name>
<surname>Pike</surname>
<given-names>B.</given-names>
</name>
</person-group>
(
<year>1999</year>
).
<article-title>Event-related fMRI of the auditory cortex</article-title>
.
<source>Neuroimage</source>
<volume>10</volume>
,
<fpage>417</fpage>
<lpage>429</lpage>
.
<pub-id pub-id-type="doi">10.1006/nimg.1999.0480</pub-id>
<pub-id pub-id-type="pmid">10493900</pub-id>
</mixed-citation>
</ref>
<ref id="B10">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bidet-Caulet</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Voisin</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Bertrand</surname>
<given-names>O.</given-names>
</name>
<name>
<surname>Fonlupt</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Listening to a walking human activates the temporal biological motion area</article-title>
.
<source>Neuroimage</source>
<volume>28</volume>
,
<fpage>132</fpage>
<lpage>139</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.neuroimage.2005.06.018</pub-id>
<pub-id pub-id-type="pmid">16027008</pub-id>
</mixed-citation>
</ref>
<ref id="B11">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Bregman</surname>
<given-names>A. S.</given-names>
</name>
</person-group>
(
<year>1990</year>
).
<source>Auditory Scene Analysis</source>
.
<publisher-loc>Cambridge, MA</publisher-loc>
:
<publisher-name>MIT Press</publisher-name>
.</mixed-citation>
</ref>
<ref id="B12">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Caramazza</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Mahon</surname>
<given-names>B. Z.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>The organization of conceptual knowledge: the evidence from category-specific semantic deficits</article-title>
.
<source>Trends Cogn. Sci</source>
.
<volume>7</volume>
,
<fpage>354</fpage>
<lpage>361</lpage>
.
<pub-id pub-id-type="doi">10.1016/S1364-6613(03)00159-1</pub-id>
<pub-id pub-id-type="pmid">12907231</pub-id>
</mixed-citation>
</ref>
<ref id="B13">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cavanagh</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<year>1992</year>
).
<article-title>Attention-based motion perception</article-title>
.
<source>Science</source>
<volume>257</volume>
,
<fpage>1563</fpage>
<lpage>1565</lpage>
.
<pub-id pub-id-type="doi">10.1126/science.1523411</pub-id>
<pub-id pub-id-type="pmid">1523411</pub-id>
</mixed-citation>
</ref>
<ref id="B14">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chao</surname>
<given-names>L. L.</given-names>
</name>
<name>
<surname>Martin</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>2000</year>
).
<article-title>Representation of manipulable man-made objects in the dorsal stream</article-title>
.
<source>Neuroimage</source>
<volume>12</volume>
,
<fpage>478</fpage>
<lpage>484</lpage>
.
<pub-id pub-id-type="doi">10.1006/nimg.2000.0635</pub-id>
<pub-id pub-id-type="pmid">10988041</pub-id>
</mixed-citation>
</ref>
<ref id="B15">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chubb</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Sperling</surname>
<given-names>G.</given-names>
</name>
</person-group>
(
<year>1988</year>
).
<article-title>Drift-balanced random stimuli: a general basis for studying non-Fourier motion perception</article-title>
.
<source>J. Opt. Soc. Am. A</source>
<volume>5</volume>
,
<fpage>1986</fpage>
<lpage>2007</lpage>
.
<pub-id pub-id-type="pmid">3210090</pub-id>
</mixed-citation>
</ref>
<ref id="B16">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cox</surname>
<given-names>R. W.</given-names>
</name>
</person-group>
(
<year>1996</year>
).
<article-title>AFNI: software for analysis and visualization of functional magnetic resonance neuroimages</article-title>
.
<source>Comput. Biomed. Res</source>
.
<volume>29</volume>
,
<fpage>162</fpage>
<lpage>173</lpage>
.
<pub-id pub-id-type="doi">10.1006/cbmr.1996.0014</pub-id>
<pub-id pub-id-type="pmid">8812068</pub-id>
</mixed-citation>
</ref>
<ref id="B17">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cronbach</surname>
<given-names>L. J.</given-names>
</name>
</person-group>
(
<year>1951</year>
).
<article-title>Coefficient alpha and the internal structure of tests</article-title>
.
<source>Psychometrika</source>
<volume>16</volume>
,
<fpage>297</fpage>
<lpage>334</lpage>
.</mixed-citation>
</ref>
<ref id="B18">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cusack</surname>
<given-names>R.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>The intraparietal sulcus and perceptual organization</article-title>
.
<source>J. Cogn. Neurosci</source>
.
<volume>17</volume>
,
<fpage>641</fpage>
<lpage>651</lpage>
.
<pub-id pub-id-type="doi">10.1162/0898929053467541</pub-id>
<pub-id pub-id-type="pmid">15829084</pub-id>
</mixed-citation>
</ref>
<ref id="B19">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cusack</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Carlyon</surname>
<given-names>R. P.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Perceptual asymmetries in audition</article-title>
.
<source>J. Exp. Psychol. Hum. Percept. Perform</source>
.
<volume>29</volume>
,
<fpage>713</fpage>
<lpage>725</lpage>
.
<pub-id pub-id-type="pmid">12848335</pub-id>
</mixed-citation>
</ref>
<ref id="B20">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>de Lucia</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Camen</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Clarke</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Murray</surname>
<given-names>M. M.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>The role of actions in auditory object discrimination</article-title>
.
<source>Neuroimage</source>
<volume>48</volume>
,
<fpage>475</fpage>
<lpage>485</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.neuroimage.2009.06.041</pub-id>
<pub-id pub-id-type="pmid">19559091</pub-id>
</mixed-citation>
</ref>
<ref id="B21">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Doniger</surname>
<given-names>G. M.</given-names>
</name>
<name>
<surname>Foxe</surname>
<given-names>J. J.</given-names>
</name>
<name>
<surname>Murray</surname>
<given-names>M. M.</given-names>
</name>
<name>
<surname>Higgins</surname>
<given-names>B. A.</given-names>
</name>
<name>
<surname>Snodgrass</surname>
<given-names>J. G.</given-names>
</name>
<name>
<surname>Schroeder</surname>
<given-names>C. E.</given-names>
</name>
<name>
<surname>Javitt</surname>
<given-names>D. C.</given-names>
</name>
</person-group>
(
<year>2000</year>
).
<article-title>Activation timecourse of ventral visual stream object-recognition areas: high density electrical mapping of perceptual closure processes</article-title>
.
<source>J. Cogn. Neurosci</source>
.
<volume>12</volume>
,
<fpage>615</fpage>
<lpage>621</lpage>
.
<pub-id pub-id-type="pmid">10936914</pub-id>
</mixed-citation>
</ref>
<ref id="B22">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Downing</surname>
<given-names>P. E.</given-names>
</name>
<name>
<surname>Jiang</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Shuman</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Kanwisher</surname>
<given-names>N.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>A cortical area selective for visual processing of the human body</article-title>
.
<source>Science</source>
<volume>293</volume>
,
<fpage>2470</fpage>
<lpage>2473</lpage>
.
<pub-id pub-id-type="doi">10.1126/science.1063414</pub-id>
<pub-id pub-id-type="pmid">11577239</pub-id>
</mixed-citation>
</ref>
<ref id="B23">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dykstra</surname>
<given-names>A. R.</given-names>
</name>
<name>
<surname>Halgren</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Thesen</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Carlson</surname>
<given-names>C. E.</given-names>
</name>
<name>
<surname>Doyle</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Madsen</surname>
<given-names>J. R.</given-names>
</name>
<name>
<surname>Eskandar</surname>
<given-names>E. N.</given-names>
</name>
<name>
<surname>Cash</surname>
<given-names>S. S.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Widespread brain areas engaged during a classical auditory streaming task revealed by intracranial EEG</article-title>
.
<source>Front. Hum. Neurosci</source>
.
<volume>5</volume>
:
<issue>74</issue>
.
<pub-id pub-id-type="doi">10.3389/fnhum.2011.00074</pub-id>
<pub-id pub-id-type="pmid">21886615</pub-id>
</mixed-citation>
</ref>
<ref id="B21a">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Edmister</surname>
<given-names>W. B.</given-names>
</name>
<name>
<surname>Talavage</surname>
<given-names>T. M.</given-names>
</name>
<name>
<surname>Ledden</surname>
<given-names>P. J.</given-names>
</name>
<name>
<surname>Weisskoff</surname>
<given-names>R. M.</given-names>
</name>
</person-group>
(
<year>1999</year>
).
<article-title>Improved auditory cortex imaging using clustered volume acquisitions</article-title>
.
<source>Hum. Brain Mapp</source>
.
<volume>7</volume>
,
<fpage>89</fpage>
<lpage>97</lpage>
.
<pub-id pub-id-type="pmid">9950066</pub-id>
</mixed-citation>
</ref>
<ref id="B24">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Elhilali</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Shamma</surname>
<given-names>S. A.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>A cocktail party with a cortical twist: how cortical mechanisms contribute to sound segregation</article-title>
.
<source>J. Acoust. Soc. Am</source>
.
<volume>124</volume>
,
<fpage>3751</fpage>
<lpage>3771</lpage>
.
<pub-id pub-id-type="doi">10.1121/1.3001672</pub-id>
<pub-id pub-id-type="pmid">19206802</pub-id>
</mixed-citation>
</ref>
<ref id="B25">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Engel</surname>
<given-names>L. R.</given-names>
</name>
<name>
<surname>Frum</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Puce</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Walker</surname>
<given-names>N. A.</given-names>
</name>
<name>
<surname>Lewis</surname>
<given-names>J. W.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Different categories of living and non-living sound-sources activate distinct cortical networks</article-title>
.
<source>Neuroimage</source>
<volume>47</volume>
,
<fpage>1778</fpage>
<lpage>1791</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.neuroimage.2009.05.041</pub-id>
<pub-id pub-id-type="pmid">19465134</pub-id>
</mixed-citation>
</ref>
<ref id="B26">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Epstein</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Kanwisher</surname>
<given-names>N.</given-names>
</name>
</person-group>
(
<year>1998</year>
).
<article-title>A cortical representation of the local visual environment</article-title>
.
<source>Nature</source>
<volume>392</volume>
,
<fpage>598</fpage>
<lpage>601</lpage>
.
<pub-id pub-id-type="doi">10.1038/33402</pub-id>
<pub-id pub-id-type="pmid">9560155</pub-id>
</mixed-citation>
</ref>
<ref id="B27">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Epstein</surname>
<given-names>R. A.</given-names>
</name>
<name>
<surname>Higgins</surname>
<given-names>J. S.</given-names>
</name>
<name>
<surname>Jablonski</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Feiler</surname>
<given-names>A. M.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Visual scene processing in familiar and unfamiliar environments</article-title>
.
<source>J. Neurophysiol</source>
.
<volume>97</volume>
,
<fpage>3670</fpage>
<lpage>3683</lpage>
.
<pub-id pub-id-type="doi">10.1152/jn.00003.2007</pub-id>
<pub-id pub-id-type="pmid">17376855</pub-id>
</mixed-citation>
</ref>
<ref id="B28">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Epstein</surname>
<given-names>R. A.</given-names>
</name>
<name>
<surname>Morgan</surname>
<given-names>L. K.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Neural responses to visual scenes reveals inconsistencies between fMRI adaptation and multivoxel pattern analysis</article-title>
.
<source>Neuropsychologia</source>
<volume>50</volume>
,
<fpage>530</fpage>
<lpage>543</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2011.09.042</pub-id>
<pub-id pub-id-type="pmid">22001314</pub-id>
</mixed-citation>
</ref>
<ref id="B29">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Felleman</surname>
<given-names>D. J.</given-names>
</name>
<name>
<surname>van Essen</surname>
<given-names>D. C.</given-names>
</name>
</person-group>
(
<year>1991</year>
).
<article-title>Distributed hierarchical processing in the primate cerebral cortex</article-title>
.
<source>Cereb. Cortex</source>
<volume>1</volume>
,
<fpage>1</fpage>
<lpage>47</lpage>
.
<pub-id pub-id-type="doi">10.1093/cercor/1.1.1</pub-id>
<pub-id pub-id-type="pmid">1822724</pub-id>
</mixed-citation>
</ref>
<ref id="B30">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fransson</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Marrelec</surname>
<given-names>G.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>The precuneus/posterior cingulate cortex plays a pivotal role in the default mode network: evidence from a partial correlation network analysis</article-title>
.
<source>Neuroimage</source>
<volume>42</volume>
,
<fpage>1178</fpage>
<lpage>1184</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.neuroimage.2008.05.059</pub-id>
<pub-id pub-id-type="pmid">18598773</pub-id>
</mixed-citation>
</ref>
<ref id="B31">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Frith</surname>
<given-names>C. D.</given-names>
</name>
<name>
<surname>Frith</surname>
<given-names>U.</given-names>
</name>
</person-group>
(
<year>1999</year>
).
<article-title>Interacting minds–a biological basis</article-title>
.
<source>Science</source>
<volume>286</volume>
,
<fpage>1692</fpage>
<lpage>1695</lpage>
.
<pub-id pub-id-type="pmid">10576727</pub-id>
</mixed-citation>
</ref>
<ref id="B33">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fritz</surname>
<given-names>J. B.</given-names>
</name>
<name>
<surname>Elhilali</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Shamma</surname>
<given-names>S. A.</given-names>
</name>
</person-group>
(
<year>2007a</year>
).
<article-title>Adaptive changes in cortical receptive fields induced by attention to complex sounds</article-title>
.
<source>J. Neurophysiol</source>
.
<volume>98</volume>
,
<fpage>2337</fpage>
<lpage>2346</lpage>
.
<pub-id pub-id-type="doi">10.1152/jn.00552.2007</pub-id>
<pub-id pub-id-type="pmid">17699691</pub-id>
</mixed-citation>
</ref>
<ref id="B32">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fritz</surname>
<given-names>J. B.</given-names>
</name>
<name>
<surname>Elhilali</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>David</surname>
<given-names>S. V.</given-names>
</name>
<name>
<surname>Shamma</surname>
<given-names>S. A.</given-names>
</name>
</person-group>
(
<year>2007b</year>
).
<article-title>Does attention play a role in dynamic receptive field adaptation to changing acoustic salience in A1?</article-title>
<source>Hear. Res</source>
.
<volume>229</volume>
,
<fpage>186</fpage>
<lpage>203</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.heares.2007.01.009</pub-id>
<pub-id pub-id-type="pmid">17329048</pub-id>
</mixed-citation>
</ref>
<ref id="B34">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gazzola</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Aziz-Zadeh</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Keysers</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Empathy and the somatotopic auditory mirror system in humans</article-title>
.
<source>Curr. Biol</source>
.
<volume>16</volume>
,
<fpage>1824</fpage>
<lpage>1829</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.cub.2006.07.072</pub-id>
<pub-id pub-id-type="pmid">16979560</pub-id>
</mixed-citation>
</ref>
<ref id="B32a">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Glover</surname>
<given-names>G. H.</given-names>
</name>
<name>
<surname>Law</surname>
<given-names>C. S.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>Spiral-in/out BOLD fMRI for increased SNR and reduced susceptibility artifacts</article-title>
.
<source>Magn. Reson. Med</source>
.
<volume>46</volume>
,
<fpage>515</fpage>
<lpage>522</lpage>
.
<pub-id pub-id-type="doi">10.1002/mrm.1222</pub-id>
<pub-id pub-id-type="pmid">11550244</pub-id>
</mixed-citation>
</ref>
<ref id="B35">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Goll</surname>
<given-names>J. C.</given-names>
</name>
<name>
<surname>Crutch</surname>
<given-names>S. J.</given-names>
</name>
<name>
<surname>Warren</surname>
<given-names>J. D.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Central auditory disorders: toward a neuropsychology of auditory objects</article-title>
.
<source>Curr. Opin. Neurol</source>
.
<volume>23</volume>
,
<fpage>617</fpage>
<lpage>627</lpage>
.
<pub-id pub-id-type="doi">10.1097/WCO.0b013e32834027f6</pub-id>
<pub-id pub-id-type="pmid">20975559</pub-id>
</mixed-citation>
</ref>
<ref id="B36">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Greicius</surname>
<given-names>M. D.</given-names>
</name>
<name>
<surname>Krasnow</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Reiss</surname>
<given-names>A. L.</given-names>
</name>
<name>
<surname>Menon</surname>
<given-names>V.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Functional connectivity in the resting brain: a network analysis of the default mode hypothesis</article-title>
.
<source>Proc. Natl. Acad. Sci. U.S.A</source>
.
<volume>100</volume>
,
<fpage>253</fpage>
<lpage>258</lpage>
.
<pub-id pub-id-type="doi">10.1073/pnas.0135058100</pub-id>
<pub-id pub-id-type="pmid">12506194</pub-id>
</mixed-citation>
</ref>
<ref id="B37">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Griffiths</surname>
<given-names>T. D.</given-names>
</name>
<name>
<surname>Bench</surname>
<given-names>C. J.</given-names>
</name>
<name>
<surname>Frackowiak</surname>
<given-names>R. S. J.</given-names>
</name>
</person-group>
(
<year>1994</year>
).
<article-title>Human cortical areas selectively activated by apparent sound movement</article-title>
.
<source>Curr. Biol</source>
.
<volume>4</volume>
,
<fpage>892</fpage>
<lpage>895</lpage>
.
<pub-id pub-id-type="doi">10.1016/S0960-9822(00)00198-6</pub-id>
<pub-id pub-id-type="pmid">7850422</pub-id>
</mixed-citation>
</ref>
<ref id="B38">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Griffiths</surname>
<given-names>T. D.</given-names>
</name>
<name>
<surname>Kumar</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Warren</surname>
<given-names>J. D.</given-names>
</name>
<name>
<surname>Stewart</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Stephan</surname>
<given-names>K. E.</given-names>
</name>
<name>
<surname>Friston</surname>
<given-names>K. J.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Approaches to the cortical analysis of auditory objects</article-title>
.
<source>Hear. Res</source>
.
<volume>229</volume>
,
<fpage>46</fpage>
<lpage>53</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.heares.2007.01.010</pub-id>
<pub-id pub-id-type="pmid">17321704</pub-id>
</mixed-citation>
</ref>
<ref id="B39">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Griffiths</surname>
<given-names>T. D.</given-names>
</name>
<name>
<surname>Warren</surname>
<given-names>J. D.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>The planum temporale as a computational hub</article-title>
.
<source>Trends Neurosci</source>
.
<volume>25</volume>
,
<fpage>348</fpage>
<lpage>353</lpage>
.
<pub-id pub-id-type="doi">10.1016/S0166-2236(02)02191-4</pub-id>
<pub-id pub-id-type="pmid">12079762</pub-id>
</mixed-citation>
</ref>
<ref id="B40">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Griffiths</surname>
<given-names>T. D.</given-names>
</name>
<name>
<surname>Warren</surname>
<given-names>J. D.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>What is an auditory object?</article-title>
<source>Nat. Rev. Neurosci</source>
.
<volume>5</volume>
,
<fpage>887</fpage>
<lpage>892</lpage>
.
<pub-id pub-id-type="doi">10.1038/nrn1538</pub-id>
<pub-id pub-id-type="pmid">15496866</pub-id>
</mixed-citation>
</ref>
<ref id="B42">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Grill-Spector</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Kushnir</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Edelman</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Avidan</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Itzchak</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Malach</surname>
<given-names>R.</given-names>
</name>
</person-group>
(
<year>1999</year>
).
<article-title>Differential processing of objects under various viewing conditions in the human lateral occipital complex</article-title>
.
<source>Neuron</source>
<volume>24</volume>
,
<fpage>187</fpage>
<lpage>203</lpage>
.
<pub-id pub-id-type="doi">10.1016/S0896-6273(00)80832-6</pub-id>
<pub-id pub-id-type="pmid">10677037</pub-id>
</mixed-citation>
</ref>
<ref id="B43">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Grill-Spector</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Kushnir</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Hendler</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Edelman</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Itzchak</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Malach</surname>
<given-names>R.</given-names>
</name>
</person-group>
(
<year>1998</year>
).
<article-title>A sequence of object-processing stages revealed by fMRI in the human occipital lobe</article-title>
.
<source>Hum. Brain Mapp</source>
.
<volume>6</volume>
,
<fpage>316</fpage>
<lpage>328</lpage>
.
<pub-id pub-id-type="pmid">9704268</pub-id>
</mixed-citation>
</ref>
<ref id="B44">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Grön</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Wunderlich</surname>
<given-names>A. P.</given-names>
</name>
<name>
<surname>Spitzer</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Tomczak</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Riepe</surname>
<given-names>M. W.</given-names>
</name>
</person-group>
(
<year>2000</year>
).
<article-title>Brain activation during human navigation: gender-different neural networks as substrate of performance</article-title>
.
<source>Nat. Neurosci</source>
.
<volume>3</volume>
,
<fpage>404</fpage>
<lpage>408</lpage>
.
<pub-id pub-id-type="doi">10.1038/73980</pub-id>
<pub-id pub-id-type="pmid">10725932</pub-id>
</mixed-citation>
</ref>
<ref id="B45">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gygi</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Kidd</surname>
<given-names>G. R.</given-names>
</name>
<name>
<surname>Watson</surname>
<given-names>C. S.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Spectral-temporal factors in the identification of environmental sounds</article-title>
.
<source>J. Acoust. Soc. Am</source>
.
<volume>115</volume>
,
<fpage>1252</fpage>
<lpage>1265</lpage>
.
<pub-id pub-id-type="doi">10.1121/1.1635840</pub-id>
<pub-id pub-id-type="pmid">15058346</pub-id>
</mixed-citation>
</ref>
<ref id="B46">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hall</surname>
<given-names>D. A.</given-names>
</name>
<name>
<surname>Haggard</surname>
<given-names>M. P.</given-names>
</name>
<name>
<surname>Akeroyd</surname>
<given-names>M. A.</given-names>
</name>
<name>
<surname>Palmer</surname>
<given-names>A. R.</given-names>
</name>
<name>
<surname>Summerfield</surname>
<given-names>A. Q.</given-names>
</name>
<name>
<surname>Elliott</surname>
<given-names>M. R.</given-names>
</name>
<name>
<surname>Gurney</surname>
<given-names>E. M.</given-names>
</name>
<name>
<surname>Bowtell</surname>
<given-names>R. W.</given-names>
</name>
</person-group>
(
<year>1999</year>
).
<article-title>“Sparse” temporal sampling in auditory fMRI</article-title>
.
<source>Hum. Brain Mapp</source>
.
<volume>7</volume>
,
<fpage>213</fpage>
<lpage>223</lpage>
.
<pub-id pub-id-type="pmid">10194620</pub-id>
</mixed-citation>
</ref>
<ref id="B47">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hassabis</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Kumaran</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Maguire</surname>
<given-names>E. A.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Using imagination to understand the neural basis of episodic memory</article-title>
.
<source>J. Neurosci</source>
.
<volume>27</volume>
,
<fpage>14365</fpage>
<lpage>14374</lpage>
.
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.4549-07.2007</pub-id>
<pub-id pub-id-type="pmid">18160644</pub-id>
</mixed-citation>
</ref>
<ref id="B48">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hasson</surname>
<given-names>U.</given-names>
</name>
<name>
<surname>Harel</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Levy</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Malach</surname>
<given-names>R.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Large-scale mirror-symmetry organization of human occipito-temporal object areas</article-title>
.
<source>Neuron</source>
<volume>37</volume>
,
<fpage>1027</fpage>
<lpage>1041</lpage>
.
<pub-id pub-id-type="doi">10.1016/S0896-6273(03)00144-2</pub-id>
<pub-id pub-id-type="pmid">12670430</pub-id>
</mixed-citation>
</ref>
<ref id="B49">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Howard</surname>
<given-names>M. A.</given-names>
</name>
<name>
<surname>Volkov</surname>
<given-names>I. O.</given-names>
</name>
<name>
<surname>Mirsky</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Garell</surname>
<given-names>P. C.</given-names>
</name>
<name>
<surname>Noh</surname>
<given-names>M. D.</given-names>
</name>
<name>
<surname>Granner</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Damasio</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Steinschneider</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Reale</surname>
<given-names>R. A.</given-names>
</name>
<name>
<surname>Hind</surname>
<given-names>J. E.</given-names>
</name>
<name>
<surname>Brugge</surname>
<given-names>J. F.</given-names>
</name>
</person-group>
(
<year>2000</year>
).
<article-title>Auditory cortex on the human posterior superior temporal gyrus</article-title>
.
<source>J. Comp. Neurol</source>
.
<volume>416</volume>
,
<fpage>79</fpage>
<lpage>92</lpage>
.
<pub-id pub-id-type="pmid">10578103</pub-id>
</mixed-citation>
</ref>
<ref id="B50">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Huddleston</surname>
<given-names>W. E.</given-names>
</name>
<name>
<surname>Lewis</surname>
<given-names>J. W.</given-names>
</name>
<name>
<surname>Phinney</surname>
<given-names>R. E. Jr.</given-names>
</name>
<name>
<surname>DeYoe</surname>
<given-names>E. A.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Auditory and visual attention-based apparent motion share functional parallels</article-title>
.
<source>Percept. Psychophys</source>
.
<volume>70</volume>
,
<fpage>1207</fpage>
<lpage>1216</lpage>
.
<pub-id pub-id-type="doi">10.3758/PP.70.7.1207</pub-id>
<pub-id pub-id-type="pmid">18927004</pub-id>
</mixed-citation>
</ref>
<ref id="B51">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hughes</surname>
<given-names>H. C.</given-names>
</name>
<name>
<surname>Darcey</surname>
<given-names>T. M.</given-names>
</name>
<name>
<surname>Barkan</surname>
<given-names>H. I.</given-names>
</name>
<name>
<surname>Williamson</surname>
<given-names>P. D.</given-names>
</name>
<name>
<surname>Roberts</surname>
<given-names>D. W.</given-names>
</name>
<name>
<surname>Aslin</surname>
<given-names>C. H.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>Responses of human auditory association cortex to the omission of an expected acoustic event</article-title>
.
<source>Neuroimage</source>
<volume>13</volume>
,
<fpage>1073</fpage>
<lpage>1089</lpage>
.
<pub-id pub-id-type="doi">10.1006/nimg.2001.0766</pub-id>
<pub-id pub-id-type="pmid">11352613</pub-id>
</mixed-citation>
</ref>
<ref id="B52">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Husain</surname>
<given-names>F. T.</given-names>
</name>
<name>
<surname>Tagamets</surname>
<given-names>M. A.</given-names>
</name>
<name>
<surname>Fromm</surname>
<given-names>S. J.</given-names>
</name>
<name>
<surname>Braun</surname>
<given-names>A. R.</given-names>
</name>
<name>
<surname>Horwitz</surname>
<given-names>B.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Relating neuronal dynamics for auditory object processing to neuroimaging activity: a computational modeling and an fMRI study</article-title>
.
<source>Neuroimage</source>
<volume>21</volume>
,
<fpage>1701</fpage>
<lpage>1720</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.neuroimage.2003.11.012</pub-id>
<pub-id pub-id-type="pmid">15050592</pub-id>
</mixed-citation>
</ref>
<ref id="B53">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Iacoboni</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Molnar-Szakacs</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Gallese</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Buccino</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Mazziotta</surname>
<given-names>J. C.</given-names>
</name>
<name>
<surname>Rizzolatti</surname>
<given-names>G.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Grasping the intentions of others with one's own mirror neuron system</article-title>
.
<source>PLoS Biol</source>
.
<volume>3</volume>
:
<fpage>e79</fpage>
.
<pub-id pub-id-type="doi">10.1371/journal.pbio.0030079</pub-id>
<pub-id pub-id-type="pmid">15736981</pub-id>
</mixed-citation>
</ref>
<ref id="B54">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Johansson</surname>
<given-names>G.</given-names>
</name>
</person-group>
(
<year>1973</year>
).
<article-title>Visual perception of biological motion and a model for its analysis</article-title>
.
<source>Percept. Psychophys</source>
.
<volume>14</volume>
,
<fpage>201</fpage>
<lpage>211</lpage>
.</mixed-citation>
</ref>
<ref id="B55">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Julesz</surname>
<given-names>B.</given-names>
</name>
</person-group>
(
<year>1980</year>
).
<article-title>Spatial nonlinearities in the instantaneous perception of textures with identical power spectra</article-title>
.
<source>Philos. Trans. R. Soc. Lond. B Biol. Sci</source>
.
<volume>290</volume>
,
<fpage>83</fpage>
<lpage>94</lpage>
.
<pub-id pub-id-type="doi">10.1098/rstb.1980.0084</pub-id>
<pub-id pub-id-type="pmid">6106244</pub-id>
</mixed-citation>
</ref>
<ref id="B56">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kaas</surname>
<given-names>J. H.</given-names>
</name>
<name>
<surname>Hackett</surname>
<given-names>T. A.</given-names>
</name>
</person-group>
(
<year>1998</year>
).
<article-title>Subdivisions of auditory cortex and levels of processing in primates</article-title>
.
<source>Audiol. Neurootol</source>
.
<volume>3</volume>
,
<fpage>73</fpage>
<lpage>85</lpage>
.
<pub-id pub-id-type="pmid">9575378</pub-id>
</mixed-citation>
</ref>
<ref id="B57">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kaas</surname>
<given-names>J. H.</given-names>
</name>
<name>
<surname>Hackett</surname>
<given-names>T. A.</given-names>
</name>
<name>
<surname>Tramo</surname>
<given-names>M. J.</given-names>
</name>
</person-group>
(
<year>1999</year>
).
<article-title>Auditory processing in primate cerebral cortex</article-title>
.
<source>Curr. Opin. Neurobiol</source>
.
<volume>9</volume>
,
<fpage>164</fpage>
<lpage>170</lpage>
.
<pub-id pub-id-type="doi">10.1016/S0959-4388(99)80022-1</pub-id>
<pub-id pub-id-type="pmid">10322185</pub-id>
</mixed-citation>
</ref>
<ref id="B58">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kanwisher</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Chun</surname>
<given-names>M. M.</given-names>
</name>
<name>
<surname>McDermott</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Ledden</surname>
<given-names>P. J.</given-names>
</name>
</person-group>
(
<year>1996</year>
).
<article-title>Functional imaging of human visual recognition</article-title>
.
<source>Brain Res. Cogn. Brain Res</source>
.
<volume>5</volume>
,
<fpage>55</fpage>
<lpage>67</lpage>
.
<pub-id pub-id-type="pmid">9049071</pub-id>
</mixed-citation>
</ref>
<ref id="B59">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kanwisher</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>McDermott</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Chun</surname>
<given-names>M. M.</given-names>
</name>
</person-group>
(
<year>1997</year>
).
<article-title>The fusiform face area: a module in human extrastriate cortex specialized for face perception</article-title>
.
<source>J. Neurosci</source>
.
<volume>17</volume>
,
<fpage>4302</fpage>
<lpage>4311</lpage>
.
<pub-id pub-id-type="pmid">9151747</pub-id>
</mixed-citation>
</ref>
<ref id="B60">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>King</surname>
<given-names>A. J.</given-names>
</name>
<name>
<surname>Nelken</surname>
<given-names>I.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Unraveling the principles of auditory cortical processing: can we learn from the visual system?</article-title>
<source>Nat. Neurosci</source>
.
<volume>12</volume>
,
<fpage>698</fpage>
<lpage>701</lpage>
.
<pub-id pub-id-type="doi">10.1038/nn.2308</pub-id>
<pub-id pub-id-type="pmid">19471268</pub-id>
</mixed-citation>
</ref>
<ref id="B61">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kohler</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Keysers</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Umiltà</surname>
<given-names>M. A.</given-names>
</name>
<name>
<surname>Fogassi</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Gallese</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Rizzolatti</surname>
<given-names>G.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>Hearing sounds, understanding actions: action representation in mirror neurons</article-title>
.
<source>Science</source>
<volume>297</volume>
,
<fpage>846</fpage>
<lpage>848</lpage>
.
<pub-id pub-id-type="doi">10.1126/science.1070311</pub-id>
<pub-id pub-id-type="pmid">12161656</pub-id>
</mixed-citation>
</ref>
<ref id="B62">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kourtzi</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Kanwisher</surname>
<given-names>N.</given-names>
</name>
</person-group>
(
<year>2000</year>
).
<article-title>Cortical regions involved in perceiving object shape</article-title>
.
<source>J. Neurosci</source>
.
<volume>20</volume>
,
<fpage>3310</fpage>
<lpage>3318</lpage>
.
<pub-id pub-id-type="pmid">10777794</pub-id>
</mixed-citation>
</ref>
<ref id="B63">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kumar</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Stephan</surname>
<given-names>K. E.</given-names>
</name>
<name>
<surname>Warren</surname>
<given-names>J. D.</given-names>
</name>
<name>
<surname>Friston</surname>
<given-names>K. J.</given-names>
</name>
<name>
<surname>Griffiths</surname>
<given-names>T. D.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Hierarchical processing of auditory objects in humans</article-title>
.
<source>PLoS Comput. Biol</source>
.
<volume>3</volume>
:
<fpage>e100</fpage>
.
<pub-id pub-id-type="doi">10.1371/journal.pcbi.0030100</pub-id>
<pub-id pub-id-type="pmid">17542641</pub-id>
</mixed-citation>
</ref>
<ref id="B64">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Laaksonen</surname>
<given-names>J. T.</given-names>
</name>
<name>
<surname>Koskela</surname>
<given-names>J. M.</given-names>
</name>
<name>
<surname>Oja</surname>
<given-names>E.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Class distributions on SOM surfaces for feature extraction and object retrieval</article-title>
.
<source>Neural Netw</source>
.
<volume>17</volume>
,
<fpage>1121</fpage>
<lpage>1133</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.neunet.2004.07.007</pub-id>
<pub-id pub-id-type="pmid">15555856</pub-id>
</mixed-citation>
</ref>
<ref id="B65">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Leaver</surname>
<given-names>A. M.</given-names>
</name>
<name>
<surname>Rauschecker</surname>
<given-names>J. P.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Cortical representation of natural complex sounds: effects of acoustic features and auditory object category</article-title>
.
<source>J. Neurosci</source>
.
<volume>30</volume>
,
<fpage>7604</fpage>
<lpage>7612</lpage>
.
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.0296-10.2010</pub-id>
<pub-id pub-id-type="pmid">20519535</pub-id>
</mixed-citation>
</ref>
<ref id="B66">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Leech</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Holt</surname>
<given-names>L. L.</given-names>
</name>
<name>
<surname>Devlin</surname>
<given-names>J. T.</given-names>
</name>
<name>
<surname>Dick</surname>
<given-names>F.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Expertise with artificial nonspeech sounds recruits speech-sensitive cortical regions</article-title>
.
<source>J. Neurosci</source>
.
<volume>29</volume>
,
<fpage>5234</fpage>
<lpage>5239</lpage>
.
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.5758-08.2009</pub-id>
<pub-id pub-id-type="pmid">19386919</pub-id>
</mixed-citation>
</ref>
<ref id="B67">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Lewis</surname>
<given-names>J. W.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>“Audio-visual perception of everyday natural objects – hemodynamic studies in humans,”</article-title>
in
<source>Multisensory Object Perception in the Primate Brain</source>
, eds
<person-group person-group-type="editor">
<name>
<surname>Naumer</surname>
<given-names>M. J.</given-names>
</name>
<name>
<surname>Kaiser</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<publisher-name>Springer Science+Business Media, LLC</publisher-name>
),
<fpage>155</fpage>
<lpage>190</lpage>
.</mixed-citation>
</ref>
<ref id="B68">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lewis</surname>
<given-names>J. W.</given-names>
</name>
<name>
<surname>Beauchamp</surname>
<given-names>M. S.</given-names>
</name>
<name>
<surname>DeYoe</surname>
<given-names>E. A.</given-names>
</name>
</person-group>
(
<year>2000</year>
).
<article-title>A comparison of visual and auditory motion processing in human cerebral cortex</article-title>
.
<source>Cereb. Cortex</source>
<volume>10</volume>
,
<fpage>873</fpage>
<lpage>888</lpage>
.
<pub-id pub-id-type="doi">10.1093/cercor/10.9.873</pub-id>
<pub-id pub-id-type="pmid">10982748</pub-id>
</mixed-citation>
</ref>
<ref id="B69">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lewis</surname>
<given-names>J. W.</given-names>
</name>
<name>
<surname>Phinney</surname>
<given-names>R. E.</given-names>
</name>
<name>
<surname>Brefczynski-Lewis</surname>
<given-names>J. A.</given-names>
</name>
<name>
<surname>DeYoe</surname>
<given-names>E. A.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Lefties get it “right” when hearing tool sounds</article-title>
.
<source>J. Cogn. Neurosci</source>
.
<volume>18</volume>
,
<fpage>1314</fpage>
<lpage>1330</lpage>
.
<pub-id pub-id-type="doi">10.1162/jocn.2006.18.8.1314</pub-id>
<pub-id pub-id-type="pmid">16859417</pub-id>
</mixed-citation>
</ref>
<ref id="B70">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lewis</surname>
<given-names>J. W.</given-names>
</name>
<name>
<surname>Talkington</surname>
<given-names>W. J.</given-names>
</name>
<name>
<surname>Puce</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Engel</surname>
<given-names>L. R.</given-names>
</name>
<name>
<surname>Frum</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Cortical networks representing object categories and high-level attributes of familiar real-world action sounds</article-title>
.
<source>J. Cogn. Neurosci</source>
.
<volume>23</volume>
,
<fpage>2079</fpage>
<lpage>2101</lpage>
.
<pub-id pub-id-type="doi">10.1162/jocn.2010.21570</pub-id>
<pub-id pub-id-type="pmid">20812786</pub-id>
</mixed-citation>
</ref>
<ref id="B71">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lewis</surname>
<given-names>J. W.</given-names>
</name>
<name>
<surname>Talkington</surname>
<given-names>W. J.</given-names>
</name>
<name>
<surname>Walker</surname>
<given-names>N. A.</given-names>
</name>
<name>
<surname>Spirou</surname>
<given-names>G. A.</given-names>
</name>
<name>
<surname>Jajosky</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Frum</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Brefczynski-Lewis</surname>
<given-names>J. A.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Human cortical organization for processing vocalizations indicates representation of harmonic structure as a signal attribute</article-title>
.
<source>J. Neurosci</source>
.
<volume>29</volume>
,
<fpage>2283</fpage>
<lpage>2296</lpage>
.
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.4145-08.2009</pub-id>
<pub-id pub-id-type="pmid">19228981</pub-id>
</mixed-citation>
</ref>
<ref id="B72">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lewis</surname>
<given-names>J. W.</given-names>
</name>
<name>
<surname>Wightman</surname>
<given-names>F. L.</given-names>
</name>
<name>
<surname>Brefczynski</surname>
<given-names>J. A.</given-names>
</name>
<name>
<surname>Phinney</surname>
<given-names>R. E.</given-names>
</name>
<name>
<surname>Binder</surname>
<given-names>J. R.</given-names>
</name>
<name>
<surname>DeYoe</surname>
<given-names>E. A.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Human brain regions involved in recognizing environmental sounds</article-title>
.
<source>Cereb. Cortex</source>
<volume>14</volume>
,
<fpage>1008</fpage>
<lpage>1021</lpage>
.
<pub-id pub-id-type="doi">10.1093/cercor/bhh061</pub-id>
<pub-id pub-id-type="pmid">15166097</pub-id>
</mixed-citation>
</ref>
<ref id="B73">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Macevoy</surname>
<given-names>S. P.</given-names>
</name>
<name>
<surname>Epstein</surname>
<given-names>R. A.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Constructing scenes from objects in human occipitotemporal cortex</article-title>
.
<source>Nat. Neurosci</source>
.
<volume>14</volume>
,
<fpage>1323</fpage>
<lpage>1329</lpage>
.
<pub-id pub-id-type="doi">10.1038/nn.2903</pub-id>
<pub-id pub-id-type="pmid">21892156</pub-id>
</mixed-citation>
</ref>
<ref id="B74">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Maeder</surname>
<given-names>P. P.</given-names>
</name>
<name>
<surname>Meuli</surname>
<given-names>R. A.</given-names>
</name>
<name>
<surname>Adriani</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Bellmann</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Fornari</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Thiran</surname>
<given-names>J. P.</given-names>
</name>
<name>
<surname>Pittet</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Clarke</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>Distinct pathways involved in sound recognition and localization: a human fMRI study</article-title>
.
<source>Neuroimage</source>
<volume>14</volume>
,
<fpage>802</fpage>
<lpage>816</lpage>
.
<pub-id pub-id-type="doi">10.1006/nimg.2001.0888</pub-id>
<pub-id pub-id-type="pmid">11554799</pub-id>
</mixed-citation>
</ref>
<ref id="B75">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mäkelä</surname>
<given-names>J. P.</given-names>
</name>
<name>
<surname>McEvoy</surname>
<given-names>L.</given-names>
</name>
</person-group>
(
<year>1996</year>
).
<article-title>Auditory evoked fields to illusory sound source movements</article-title>
.
<source>Exp. Brain Res</source>
.
<volume>110</volume>
,
<fpage>446</fpage>
<lpage>453</lpage>
.
<pub-id pub-id-type="pmid">8871103</pub-id>
</mixed-citation>
</ref>
<ref id="B76">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Malach</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Reppas</surname>
<given-names>J. B.</given-names>
</name>
<name>
<surname>Benson</surname>
<given-names>R. R.</given-names>
</name>
<name>
<surname>Kwong</surname>
<given-names>K. K.</given-names>
</name>
<name>
<surname>Jiang</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Kennedy</surname>
<given-names>W. A.</given-names>
</name>
<name>
<surname>Ledden</surname>
<given-names>P. J.</given-names>
</name>
<name>
<surname>Brady</surname>
<given-names>T. J.</given-names>
</name>
<name>
<surname>Rosen</surname>
<given-names>B. R.</given-names>
</name>
<name>
<surname>Tootell</surname>
<given-names>R. B. H.</given-names>
</name>
</person-group>
(
<year>1995</year>
).
<article-title>Object-related activity revealed by functional magnetic resonance imaging in human occipital cortex</article-title>
.
<source>Proc. Natl. Acad. Sci. U.S.A</source>
.
<volume>92</volume>
,
<fpage>8135</fpage>
<lpage>8139</lpage>
.
<pub-id pub-id-type="pmid">7667258</pub-id>
</mixed-citation>
</ref>
<ref id="B77">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Martin</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>The representation of object concepts in the brain</article-title>
.
<source>Annu. Rev. Psychol</source>
.
<volume>58</volume>
,
<fpage>25</fpage>
<lpage>45</lpage>
.
<pub-id pub-id-type="doi">10.1146/annurev.psych.57.102904.190143</pub-id>
<pub-id pub-id-type="pmid">16968210</pub-id>
</mixed-citation>
</ref>
<ref id="B78">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>McCarthy</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Puce</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Gore</surname>
<given-names>J. C.</given-names>
</name>
<name>
<surname>Allison</surname>
<given-names>T.</given-names>
</name>
</person-group>
(
<year>1997</year>
).
<article-title>Face-specific processing in the human fusiform gyrus</article-title>
.
<source>J. Cogn. Neurosci</source>
.
<volume>9</volume>
,
<fpage>605</fpage>
<lpage>610</lpage>
.
<pub-id pub-id-type="pmid">23965119</pub-id>
</mixed-citation>
</ref>
<ref id="B79">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>McDermott</surname>
<given-names>J. H.</given-names>
</name>
<name>
<surname>Oxenham</surname>
<given-names>A. J.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Spectral completion of partially masked sounds</article-title>
.
<source>Proc. Natl. Acad. Sci. U.S.A</source>
.
<volume>105</volume>
,
<fpage>5939</fpage>
<lpage>5944</lpage>
.
<pub-id pub-id-type="doi">10.1073/pnas.0711291105</pub-id>
<pub-id pub-id-type="pmid">18391210</pub-id>
</mixed-citation>
</ref>
<ref id="B80">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>McDermott</surname>
<given-names>J. H.</given-names>
</name>
<name>
<surname>Simoncelli</surname>
<given-names>E. P.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Sound texture perception via statistics of the auditory periphery: evidence from sound synthesis</article-title>
.
<source>Neuron</source>
<volume>71</volume>
,
<fpage>926</fpage>
<lpage>940</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.neuron.2011.06.032</pub-id>
<pub-id pub-id-type="pmid">21903084</pub-id>
</mixed-citation>
</ref>
<ref id="B81">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Medvedev</surname>
<given-names>A. V.</given-names>
</name>
<name>
<surname>Chiao</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Kanwal</surname>
<given-names>J. S.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>Modeling complex tone perception: grouping harmonics with combination-sensitive neurons</article-title>
.
<source>Biol. Cybern</source>
.
<volume>86</volume>
,
<fpage>497</fpage>
<lpage>505</lpage>
.
<pub-id pub-id-type="doi">10.1007/s00422-002-0316-3</pub-id>
<pub-id pub-id-type="pmid">12111277</pub-id>
</mixed-citation>
</ref>
<ref id="B82">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Minda</surname>
<given-names>J. P.</given-names>
</name>
<name>
<surname>Ross</surname>
<given-names>B. H.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Learning categories by making predictions: an investigation of indirect category learning</article-title>
.
<source>Mem. Cognit</source>
.
<volume>32</volume>
,
<fpage>1355</fpage>
<lpage>1368</lpage>
.
<pub-id pub-id-type="pmid">15900929</pub-id>
</mixed-citation>
</ref>
<ref id="B83">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mormann</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Dubois</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Kornblith</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Milosavljevic</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Cerf</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Ison</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Tsuchiya</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Kraskov</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Quiroga</surname>
<given-names>R. Q.</given-names>
</name>
<name>
<surname>Adolphs</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Fried</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Koch</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>A category-specific response to animals in the right human amygdala</article-title>
.
<source>Nat. Neurosci</source>
.
<volume>14</volume>
,
<fpage>1247</fpage>
<lpage>1249</lpage>
.
<pub-id pub-id-type="doi">10.1038/nn.2899</pub-id>
<pub-id pub-id-type="pmid">21874014</pub-id>
</mixed-citation>
</ref>
<ref id="B84">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Morosan</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Rademacher</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Schleicher</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Amunts</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Schormann</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Zilles</surname>
<given-names>K.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>Human primary auditory cortex: cytoarchitectonic subdivisions and mapping into a spatial reference system</article-title>
.
<source>Neuroimage</source>
<volume>13</volume>
,
<fpage>684</fpage>
<lpage>701</lpage>
.
<pub-id pub-id-type="doi">10.1006/nimg.2000.0715</pub-id>
<pub-id pub-id-type="pmid">11305897</pub-id>
</mixed-citation>
</ref>
<ref id="B85">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Murray</surname>
<given-names>S. O.</given-names>
</name>
<name>
<surname>Newman</surname>
<given-names>A. J.</given-names>
</name>
<name>
<surname>Roder</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Mitchell</surname>
<given-names>T. V.</given-names>
</name>
<name>
<surname>Takahashi</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Neville</surname>
<given-names>H. J.</given-names>
</name>
</person-group>
(
<year>1998</year>
).
<article-title>Functional organization of auditory motion processing in humans using fMRI</article-title>
.
<source>Soc. Neurosci. Abstr</source>
.
<volume>24</volume>
,
<fpage>1401</fpage>
.</mixed-citation>
</ref>
<ref id="B86">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nelken</surname>
<given-names>I.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Processing of complex stimuli and natural scenes in the auditory cortex</article-title>
.
<source>Curr. Opin. Neurobiol</source>
.
<volume>14</volume>
,
<fpage>474</fpage>
<lpage>480</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.conb.2004.06.005</pub-id>
<pub-id pub-id-type="pmid">15321068</pub-id>
</mixed-citation>
</ref>
<ref id="B87">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Nunnally</surname>
<given-names>J. C.</given-names>
</name>
</person-group>
(
<year>1978</year>
).
<source>Psychometric Theory</source>
.
<publisher-loc>New York, NY</publisher-loc>
:
<publisher-name>McGraw-Hill</publisher-name>
. </mixed-citation>
</ref>
<ref id="B88">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Obleser</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Eisner</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Kotz</surname>
<given-names>S. A.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Bilateral speech comprehension reflects differential sensitivity to spectral and temporal features</article-title>
.
<source>J. Neurosci</source>
.
<volume>28</volume>
,
<fpage>8116</fpage>
<lpage>8123</lpage>
.
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.1290-08.2008</pub-id>
<pub-id pub-id-type="pmid">18685036</pub-id>
</mixed-citation>
</ref>
<ref id="B89">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Obleser</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Zimmermann</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>van Meter</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Rauschecker</surname>
<given-names>J. P.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Multiple stages of auditory speech perception reflected in event-related FMRI</article-title>
.
<source>Cereb. Cortex</source>
<volume>17</volume>
,
<fpage>2251</fpage>
<lpage>2257</lpage>
.
<pub-id pub-id-type="doi">10.1093/cercor/bhl133</pub-id>
<pub-id pub-id-type="pmid">17150986</pub-id>
</mixed-citation>
</ref>
<ref id="B90">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Overath</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Kumar</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Stewart</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>von Kriegstein</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Cusack</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Rees</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Griffiths</surname>
<given-names>T. D.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Cortical mechanisms for the segregation and representation of acoustic textures</article-title>
.
<source>J. Neurosci</source>
.
<volume>30</volume>
,
<fpage>2070</fpage>
<lpage>2076</lpage>
.
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.5378-09.2010</pub-id>
<pub-id pub-id-type="pmid">20147535</pub-id>
</mixed-citation>
</ref>
<ref id="B91">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pelphrey</surname>
<given-names>K. A.</given-names>
</name>
<name>
<surname>Morris</surname>
<given-names>J. P.</given-names>
</name>
<name>
<surname>McCarthy</surname>
<given-names>G.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Grasping the intentions of others: the perceived intentionality of an action influences activity in the superior temporal sulcus during social perception</article-title>
.
<source>J. Cogn. Neurosci</source>
.
<volume>16</volume>
,
<fpage>1706</fpage>
<lpage>1716</lpage>
.
<pub-id pub-id-type="doi">10.1162/0898929042947900</pub-id>
<pub-id pub-id-type="pmid">15701223</pub-id>
</mixed-citation>
</ref>
<ref id="B92">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rademacher</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Morosan</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Schormann</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Schleicher</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Werner</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Freund</surname>
<given-names>H. J.</given-names>
</name>
<name>
<surname>Zilles</surname>
<given-names>K.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>Probabilistic mapping and volume measurement of human primary auditory cortex</article-title>
.
<source>Neuroimage</source>
<volume>13</volume>
,
<fpage>669</fpage>
<lpage>683</lpage>
.
<pub-id pub-id-type="doi">10.1006/nimg.2000.0714</pub-id>
<pub-id pub-id-type="pmid">11305896</pub-id>
</mixed-citation>
</ref>
<ref id="B93">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Raichle</surname>
<given-names>M. E.</given-names>
</name>
<name>
<surname>MacLeod</surname>
<given-names>A. M.</given-names>
</name>
<name>
<surname>Snyder</surname>
<given-names>A. Z.</given-names>
</name>
<name>
<surname>Powers</surname>
<given-names>W. J.</given-names>
</name>
<name>
<surname>Gusnard</surname>
<given-names>D. A.</given-names>
</name>
<name>
<surname>Shulman</surname>
<given-names>G. L.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>A default mode of brain function</article-title>
.
<source>Proc. Natl. Acad. Sci. U.S.A</source>
.
<volume>98</volume>
,
<fpage>676</fpage>
<lpage>682</lpage>
.
<pub-id pub-id-type="doi">10.1073/pnas.98.2.676</pub-id>
<pub-id pub-id-type="pmid">11209064</pub-id>
</mixed-citation>
</ref>
<ref id="B94">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rauschecker</surname>
<given-names>J. P.</given-names>
</name>
<name>
<surname>Scott</surname>
<given-names>S. K.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Maps and streams in the auditory cortex: nonhuman primates illuminate human speech processing</article-title>
.
<source>Nat. Neurosci</source>
.
<volume>12</volume>
,
<fpage>718</fpage>
<lpage>724</lpage>
.
<pub-id pub-id-type="doi">10.1038/nn.2331</pub-id>
<pub-id pub-id-type="pmid">19471271</pub-id>
</mixed-citation>
</ref>
<ref id="B95">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rauschecker</surname>
<given-names>J. P.</given-names>
</name>
<name>
<surname>Tian</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Hauser</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>1995</year>
).
<article-title>Processing of complex sounds in the macaque nonprimary auditory cortex</article-title>
.
<source>Science</source>
<volume>268</volume>
,
<fpage>111</fpage>
<lpage>114</lpage>
.
<pub-id pub-id-type="doi">10.1126/science.7701330</pub-id>
<pub-id pub-id-type="pmid">7701330</pub-id>
</mixed-citation>
</ref>
<ref id="B96">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Reddy</surname>
<given-names>R. K.</given-names>
</name>
<name>
<surname>Ramachandra</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Kumar</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Singh</surname>
<given-names>N. C.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Categorization of environmental sounds</article-title>
.
<source>Biol. Cybern</source>
.
<volume>100</volume>
,
<fpage>299</fpage>
<lpage>306</lpage>
.
<pub-id pub-id-type="doi">10.1007/s00422-009-0299-4</pub-id>
<pub-id pub-id-type="pmid">19259694</pub-id>
</mixed-citation>
</ref>
<ref id="B97">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rizzolatti</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Craighero</surname>
<given-names>L.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>The mirror-neuron system</article-title>
.
<source>Annu. Rev. Neurosci</source>
.
<volume>27</volume>
,
<fpage>169</fpage>
<lpage>192</lpage>
.
<pub-id pub-id-type="doi">10.1146/annurev.neuro.27.070203.144230</pub-id>
<pub-id pub-id-type="pmid">15217330</pub-id>
</mixed-citation>
</ref>
<ref id="B98">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rizzolatti</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Luppino</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Matelli</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>1998</year>
).
<article-title>The organization of the cortical motor system: new concepts</article-title>
.
<source>Electroencephalogr. Clin. Neurophysiol</source>
.
<volume>106</volume>
,
<fpage>283</fpage>
<lpage>296</lpage>
.
<pub-id pub-id-type="pmid">9741757</pub-id>
</mixed-citation>
</ref>
<ref id="B99">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rosch</surname>
<given-names>E. H.</given-names>
</name>
</person-group>
(
<year>1973</year>
).
<article-title>Natural categories</article-title>
.
<source>Cogn. Psychol</source>
.
<volume>4</volume>
,
<fpage>328</fpage>
<lpage>350</lpage>
.</mixed-citation>
</ref>
<ref id="B100">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rutishauser</surname>
<given-names>U.</given-names>
</name>
<name>
<surname>Tudusciuc</surname>
<given-names>O.</given-names>
</name>
<name>
<surname>Neumann</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Mamelak</surname>
<given-names>A. N.</given-names>
</name>
<name>
<surname>Heller</surname>
<given-names>A. C.</given-names>
</name>
<name>
<surname>Ross</surname>
<given-names>I. B.</given-names>
</name>
<name>
<surname>Philpott</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Sutherling</surname>
<given-names>W. W.</given-names>
</name>
<name>
<surname>Adolphs</surname>
<given-names>R.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Single-unit responses selective for whole faces in the human amygdala</article-title>
.
<source>Curr. Biol</source>
.
<volume>21</volume>
,
<fpage>1654</fpage>
<lpage>1660</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.cub.2011.08.035</pub-id>
<pub-id pub-id-type="pmid">21962712</pub-id>
</mixed-citation>
</ref>
<ref id="B101">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sathian</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Lacey</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Stilla</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Gibson</surname>
<given-names>G. O.</given-names>
</name>
<name>
<surname>Deshpande</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Hu</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Laconte</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Glielmi</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Dual pathways for haptic and visual perception of spatial and texture information</article-title>
.
<source>Neuroimage</source>
<volume>57</volume>
,
<fpage>462</fpage>
<lpage>475</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.neuroimage.2011.05.001</pub-id>
<pub-id pub-id-type="pmid">21575727</pub-id>
</mixed-citation>
</ref>
<ref id="B97a">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Talairach</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Tournoux</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<year>1988</year>
).
<source>Co-Planar Stereotaxic Atlas of the Human Brain</source>
.
<publisher-loc>New York, NY</publisher-loc>
:
<publisher-name>Thieme Medical Publishers</publisher-name>
.</mixed-citation>
</ref>
<ref id="B102">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Talkington</surname>
<given-names>W. J.</given-names>
</name>
<name>
<surname>Rapuano</surname>
<given-names>K. M.</given-names>
</name>
<name>
<surname>Hitt</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Frum</surname>
<given-names>C. A.</given-names>
</name>
<name>
<surname>Lewis</surname>
<given-names>J. W.</given-names>
</name>
</person-group>
(
<year>in press</year>
).
<article-title>Humans mimicking animals: a cortical hierarchy for human vocal communication sounds</article-title>
.
<source>J. Neurosci</source>
.</mixed-citation>
</ref>
<ref id="B103">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tchernichovski</surname>
<given-names>O.</given-names>
</name>
<name>
<surname>Mitra</surname>
<given-names>P. P.</given-names>
</name>
<name>
<surname>Lints</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Nottebohm</surname>
<given-names>F.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>Dynamics of the vocal imitation process: how a zebra finch learns its song</article-title>
.
<source>Science</source>
<volume>291</volume>
,
<fpage>2564</fpage>
<lpage>2569</lpage>
.
<pub-id pub-id-type="doi">10.1126/science.1058522</pub-id>
<pub-id pub-id-type="pmid">11283361</pub-id>
</mixed-citation>
</ref>
<ref id="B104">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Teki</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Chait</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Kumar</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>von Kriegstein</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Griffiths</surname>
<given-names>T. D.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Brain bases for auditory stimulus-driven figure-ground segregation</article-title>
.
<source>J. Neurosci</source>
.
<volume>31</volume>
,
<fpage>164</fpage>
<lpage>171</lpage>
.
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.3788-10.2011</pub-id>
<pub-id pub-id-type="pmid">21209201</pub-id>
</mixed-citation>
</ref>
<ref id="B105">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tootell</surname>
<given-names>R. B.</given-names>
</name>
<name>
<surname>Mendola</surname>
<given-names>J. D.</given-names>
</name>
<name>
<surname>Hadjikhani</surname>
<given-names>N. K.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>A. K.</given-names>
</name>
<name>
<surname>Dale</surname>
<given-names>A. M.</given-names>
</name>
</person-group>
(
<year>1998</year>
).
<article-title>The representation of the ipsilateral visual field in human cerebral cortex</article-title>
.
<source>Proc. Natl. Acad. Sci. U.S.A</source>
.
<volume>95</volume>
,
<fpage>818</fpage>
<lpage>824</lpage>
.
<pub-id pub-id-type="pmid">9448246</pub-id>
</mixed-citation>
</ref>
<ref id="B106">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>van Essen</surname>
<given-names>D. C.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>A Population-Average, Landmark- and Surface-based (PALS) atlas of human cerebral cortex</article-title>
.
<source>Neuroimage</source>
<volume>28</volume>
,
<fpage>635</fpage>
<lpage>662</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.neuroimage.2005.06.058</pub-id>
<pub-id pub-id-type="pmid">16172003</pub-id>
</mixed-citation>
</ref>
<ref id="B107">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>van Essen</surname>
<given-names>D. C.</given-names>
</name>
<name>
<surname>Drury</surname>
<given-names>H. A.</given-names>
</name>
<name>
<surname>Dickson</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Harwell</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Hanlon</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Anderson</surname>
<given-names>C. H.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>An integrated software suite for surface-based analyses of cerebral cortex</article-title>
.
<source>J. Am. Med. Inform. Assoc</source>
.
<volume>8</volume>
,
<fpage>443</fpage>
<lpage>459</lpage>
.
<pub-id pub-id-type="pmid">11522765</pub-id>
</mixed-citation>
</ref>
<ref id="B108">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Vogt</surname>
<given-names>B. A.</given-names>
</name>
<name>
<surname>Finch</surname>
<given-names>D. M.</given-names>
</name>
<name>
<surname>Olson</surname>
<given-names>C. R.</given-names>
</name>
</person-group>
(
<year>1992</year>
).
<article-title>Functional heterogeneity in cingulate cortex: the anterior executive and posterior evaluative regions</article-title>
.
<source>Cereb. Cortex</source>
<volume>2</volume>
,
<fpage>435</fpage>
<lpage>443</lpage>
.
<pub-id pub-id-type="doi">10.1093/cercor/2.6.435-a</pub-id>
<pub-id pub-id-type="pmid">1477524</pub-id>
</mixed-citation>
</ref>
<ref id="B109">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Voss</surname>
<given-names>R. F.</given-names>
</name>
<name>
<surname>Clarke</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>1975</year>
).
<article-title>1/f noise in music and speech</article-title>
.
<source>Nature</source>
<volume>258</volume>
,
<fpage>317</fpage>
<lpage>318</lpage>
.</mixed-citation>
</ref>
<ref id="B110">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Warren</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Zielinski</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Green</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Rauschecker</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Griffiths</surname>
<given-names>T.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>Perception of sound-source motion by the human brain</article-title>
.
<source>Neuron</source>
<volume>34</volume>
,
<fpage>139</fpage>
<lpage>148</lpage>
.
<pub-id pub-id-type="doi">10.1016/S0896-6273(02)00637-2</pub-id>
<pub-id pub-id-type="pmid">11931748</pub-id>
</mixed-citation>
</ref>
<ref id="B111">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Warren</surname>
<given-names>R. M.</given-names>
</name>
<name>
<surname>Obusek</surname>
<given-names>C. J.</given-names>
</name>
<name>
<surname>Ackroff</surname>
<given-names>J. M.</given-names>
</name>
</person-group>
(
<year>1972</year>
).
<article-title>Auditory induction: perceptual synthesis of absent sounds</article-title>
.
<source>Science</source>
<volume>176</volume>
,
<fpage>1149</fpage>
<lpage>1151</lpage>
.
<pub-id pub-id-type="doi">10.1126/science.176.4039.1149</pub-id>
<pub-id pub-id-type="pmid">5035477</pub-id>
</mixed-citation>
</ref>
<ref id="B112">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Woods</surname>
<given-names>D. L.</given-names>
</name>
<name>
<surname>Herron</surname>
<given-names>T. J.</given-names>
</name>
<name>
<surname>Cate</surname>
<given-names>A. D.</given-names>
</name>
<name>
<surname>Yund</surname>
<given-names>E. W.</given-names>
</name>
<name>
<surname>Stecker</surname>
<given-names>G. C.</given-names>
</name>
<name>
<surname>Rinne</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Kang</surname>
<given-names>X.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Functional properties of human auditory cortical fields</article-title>
.
<source>Front. Syst. Neurosci</source>
.
<volume>4</volume>
:
<issue>155</issue>
.
<pub-id pub-id-type="doi">10.3389/fnsys.2010.00155</pub-id>
<pub-id pub-id-type="pmid">21160558</pub-id>
</mixed-citation>
</ref>
<ref id="B113">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zatorre</surname>
<given-names>R. J.</given-names>
</name>
<name>
<surname>Bouffard</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Belin</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Sensitivity to auditory object features in human temporal neocortex</article-title>
.
<source>J. Neurosci</source>
.
<volume>24</volume>
,
<fpage>3637</fpage>
<lpage>3642</lpage>
.
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.5458-03.2004</pub-id>
<pub-id pub-id-type="pmid">15071112</pub-id>
</mixed-citation>
</ref>
<ref id="B114">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zatorre</surname>
<given-names>R. J.</given-names>
</name>
<name>
<surname>Evans</surname>
<given-names>A. C.</given-names>
</name>
<name>
<surname>Meyer</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Gjedde</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>1992</year>
).
<article-title>Lateralization of phonetic and pitch discrimination in speech processing</article-title>
.
<source>Science</source>
<volume>256</volume>
,
<fpage>846</fpage>
<lpage>849</lpage>
.
<pub-id pub-id-type="doi">10.1126/science.256.5058.846</pub-id>
<pub-id pub-id-type="pmid">1589767</pub-id>
</mixed-citation>
</ref>
</ref-list>
<app-group>
<app id="A1">
<title>Appendix</title>
<p>
<table-wrap id="TA1" position="anchor">
<label>Table A1</label>
<caption>
<p>
<bold>List of sound stimuli, ordered by object-like to scene-like Likert ratings</bold>
.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" rowspan="1" colspan="1">
<bold>Environmental (57 sounds)</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>Likert rating</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>Mechanical (54 sounds)</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>Likert rating</bold>
</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>Mud bubbling</bold>
</td>
<td align="left" rowspan="1" colspan="1">
<bold>1.9</bold>
</td>
<td align="left" rowspan="1" colspan="1">
<bold>Clock ticking #2</bold>
</td>
<td align="left" rowspan="1" colspan="1">
<bold>1.1</bold>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>Water bubbling #1</bold>
</td>
<td align="left" rowspan="1" colspan="1">
<bold>1.9</bold>
</td>
<td align="left" rowspan="1" colspan="1">
<bold>Stopwatch ticking</bold>
</td>
<td align="left" rowspan="1" colspan="1">
<bold>1.1</bold>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>Water bubbling #2</bold>
</td>
<td align="left" rowspan="1" colspan="1">
<bold>2.1</bold>
</td>
<td align="left" rowspan="1" colspan="1">Clock ticking #3, grandfather (cavernous)</td>
<td align="left" rowspan="1" colspan="1">1.1</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>Fire in fire place, loud cracks</bold>
</td>
<td align="left" rowspan="1" colspan="1">
<bold>2.4</bold>
</td>
<td align="left" rowspan="1" colspan="1">
<bold>Clock ticking, medium size</bold>
</td>
<td align="left" rowspan="1" colspan="1">
<bold>1.2</bold>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>Water dripping, quietly</bold>
</td>
<td align="left" rowspan="1" colspan="1">
<bold>2.6</bold>
</td>
<td align="left" rowspan="1" colspan="1">
<bold>Egg timer, ticking</bold>
</td>
<td align="left" rowspan="1" colspan="1">
<bold>1.3</bold>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>Water dripping in cave #1</bold>
</td>
<td align="left" rowspan="1" colspan="1">
<bold>2.7</bold>
</td>
<td align="left" rowspan="1" colspan="1">
<bold>Fax machine</bold>
</td>
<td align="left" rowspan="1" colspan="1">
<bold>1.4</bold>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>Water running, stream</bold>
</td>
<td align="left" rowspan="1" colspan="1">
<bold>2.8</bold>
</td>
<td align="left" rowspan="1" colspan="1">
<bold>Printer, slow rate</bold>
</td>
<td align="left" rowspan="1" colspan="1">
<bold>1.4</bold>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Water bubbling, hot tub #2</td>
<td align="left" rowspan="1" colspan="1">3.0</td>
<td align="left" rowspan="1" colspan="1">
<bold>Scanner adjusting</bold>
</td>
<td align="left" rowspan="1" colspan="1">
<bold>1.4</bold>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Glacier break</td>
<td align="left" rowspan="1" colspan="1">3.1</td>
<td align="left" rowspan="1" colspan="1">Antique clock chiming and ticking</td>
<td align="left" rowspan="1" colspan="1">1.4</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Heavy rain</td>
<td align="left" rowspan="1" colspan="1">3.1</td>
<td align="left" rowspan="1" colspan="1">Paint can lid rolling on floor</td>
<td align="left" rowspan="1" colspan="1">1.4</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Water dripping in cave #2</td>
<td align="left" rowspan="1" colspan="1">3.1</td>
<td align="left" rowspan="1" colspan="1">Fax or copy machine adjusting</td>
<td align="left" rowspan="1" colspan="1">1.5</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Lake water wave ashore</td>
<td align="left" rowspan="1" colspan="1">3.2</td>
<td align="left" rowspan="1" colspan="1">Church bell chimes</td>
<td align="left" rowspan="1" colspan="1">1.6</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Waves, large lake</td>
<td align="left" rowspan="1" colspan="1">3.2</td>
<td align="left" rowspan="1" colspan="1">Money falling out of slot machine</td>
<td align="left" rowspan="1" colspan="1">1.7</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Fire crackling, big forest</td>
<td align="left" rowspan="1" colspan="1">3.3</td>
<td align="left" rowspan="1" colspan="1">Church bells ringing</td>
<td align="left" rowspan="1" colspan="1">1.8</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Fire in fire place</td>
<td align="left" rowspan="1" colspan="1">3.3</td>
<td align="left" rowspan="1" colspan="1">Industry generator, compressor</td>
<td align="left" rowspan="1" colspan="1">1.8</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Wind blowing #1</td>
<td align="left" rowspan="1" colspan="1">3.3</td>
<td align="left" rowspan="1" colspan="1">Printer, rotor movements</td>
<td align="left" rowspan="1" colspan="1">1.8</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Large river, flowing</td>
<td align="left" rowspan="1" colspan="1">3.4</td>
<td align="left" rowspan="1" colspan="1">Film projector</td>
<td align="left" rowspan="1" colspan="1">2.0</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Rain fall, medium hard</td>
<td align="left" rowspan="1" colspan="1">3.4</td>
<td align="left" rowspan="1" colspan="1">Clocks, several ticking</td>
<td align="left" rowspan="1" colspan="1">2.1</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Small waterfall</td>
<td align="left" rowspan="1" colspan="1">3.4</td>
<td align="left" rowspan="1" colspan="1">Fax machine</td>
<td align="left" rowspan="1" colspan="1">2.1</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Forest fire</td>
<td align="left" rowspan="1" colspan="1">3.5</td>
<td align="left" rowspan="1" colspan="1">Printer, dot matrix</td>
<td align="left" rowspan="1" colspan="1">2.1</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Fire crackling #1</td>
<td align="left" rowspan="1" colspan="1">3.6</td>
<td align="left" rowspan="1" colspan="1">Airplane, propeller</td>
<td align="left" rowspan="1" colspan="1">2.3</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Fire crackling #2</td>
<td align="left" rowspan="1" colspan="1">3.7</td>
<td align="left" rowspan="1" colspan="1">Helicopter passing</td>
<td align="left" rowspan="1" colspan="1">2.3</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">River flowing</td>
<td align="left" rowspan="1" colspan="1">3.7</td>
<td align="left" rowspan="1" colspan="1">Machinery, chugging sounds</td>
<td align="left" rowspan="1" colspan="1">2.3</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Small brush fire</td>
<td align="left" rowspan="1" colspan="1">3.8</td>
<td align="left" rowspan="1" colspan="1">Printer, office</td>
<td align="left" rowspan="1" colspan="1">2.3</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Water waves coming ashore</td>
<td align="left" rowspan="1" colspan="1">3.8</td>
<td align="left" rowspan="1" colspan="1">Helicopter #2</td>
<td align="left" rowspan="1" colspan="1">2.3</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Wind blowing, cold</td>
<td align="left" rowspan="1" colspan="1">3.8</td>
<td align="left" rowspan="1" colspan="1">Office machine, handling paper</td>
<td align="left" rowspan="1" colspan="1">2.3</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Bubbling water in hot tub</td>
<td align="left" rowspan="1" colspan="1">3.8</td>
<td align="left" rowspan="1" colspan="1">Police car with siren passing by</td>
<td align="left" rowspan="1" colspan="1">2.3</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Rocks falling/sliding</td>
<td align="left" rowspan="1" colspan="1">3.8</td>
<td align="left" rowspan="1" colspan="1">Fax machine, paper coming out</td>
<td align="left" rowspan="1" colspan="1">2.4</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Wind gusting</td>
<td align="left" rowspan="1" colspan="1">3.9</td>
<td align="left" rowspan="1" colspan="1">Office printer, printing</td>
<td align="left" rowspan="1" colspan="1">2.4</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">River medium flow</td>
<td align="left" rowspan="1" colspan="1">3.9</td>
<td align="left" rowspan="1" colspan="1">Drying machine #2</td>
<td align="left" rowspan="1" colspan="1">2.4</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Wind blowing #2</td>
<td align="left" rowspan="1" colspan="1">3.9</td>
<td align="left" rowspan="1" colspan="1">Fireworks going off</td>
<td align="left" rowspan="1" colspan="1">2.4</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Water flow, stream</td>
<td align="left" rowspan="1" colspan="1">4.0</td>
<td align="left" rowspan="1" colspan="1">Printer, feeding paper</td>
<td align="left" rowspan="1" colspan="1">2.4</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Rockslide</td>
<td align="left" rowspan="1" colspan="1">4.1</td>
<td align="left" rowspan="1" colspan="1">Airplane, propeller #2</td>
<td align="left" rowspan="1" colspan="1">2.6</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Rolling thunder</td>
<td align="left" rowspan="1" colspan="1">4.1</td>
<td align="left" rowspan="1" colspan="1">Air conditioner motor turning on</td>
<td align="left" rowspan="1" colspan="1">2.6</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Water dripping</td>
<td align="left" rowspan="1" colspan="1">4.1</td>
<td align="left" rowspan="1" colspan="1">Drying machine #1</td>
<td align="left" rowspan="1" colspan="1">2.6</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Wind blowing, low pitch</td>
<td align="left" rowspan="1" colspan="1">4.1</td>
<td align="left" rowspan="1" colspan="1">Metal puncher</td>
<td align="left" rowspan="1" colspan="1">2.7</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Wind blowing, quietly</td>
<td align="left" rowspan="1" colspan="1">4.1</td>
<td align="left" rowspan="1" colspan="1">Helicopter #3</td>
<td align="left" rowspan="1" colspan="1">2.7</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Wind, cold breeze #2</td>
<td align="left" rowspan="1" colspan="1">4.1</td>
<td align="left" rowspan="1" colspan="1">Conveyor belt moving</td>
<td align="left" rowspan="1" colspan="1">2.8</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Glacier break</td>
<td align="left" rowspan="1" colspan="1">4.2</td>
<td align="left" rowspan="1" colspan="1">Windshield wipers</td>
<td align="left" rowspan="1" colspan="1">2.8</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Wind blowing, high pitch</td>
<td align="left" rowspan="1" colspan="1">4.2</td>
<td align="left" rowspan="1" colspan="1">Airliner fly-by</td>
<td align="left" rowspan="1" colspan="1">2.9</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Wind blowing, whistling #2</td>
<td align="left" rowspan="1" colspan="1">4.2</td>
<td align="left" rowspan="1" colspan="1">Large newspaper print press, chugging</td>
<td align="left" rowspan="1" colspan="1">3.1</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Heavy wind through doorway</td>
<td align="left" rowspan="1" colspan="1">4.3</td>
<td align="left" rowspan="1" colspan="1">Mechanical conveyor moving</td>
<td align="left" rowspan="1" colspan="1">3.3</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Glacier break</td>
<td align="left" rowspan="1" colspan="1">4.3</td>
<td align="left" rowspan="1" colspan="1">Garage door opening #2</td>
<td align="left" rowspan="1" colspan="1">3.4</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Rain falling, with thunder</td>
<td align="left" rowspan="1" colspan="1">4.3</td>
<td align="left" rowspan="1" colspan="1">Pressbook, chugging sound</td>
<td align="left" rowspan="1" colspan="1">3.4</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Wind blowing #5</td>
<td align="left" rowspan="1" colspan="1">4.3</td>
<td align="left" rowspan="1" colspan="1">Train, freight passing</td>
<td align="left" rowspan="1" colspan="1">3.4</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Wind blowing #6</td>
<td align="left" rowspan="1" colspan="1">4.3</td>
<td align="left" rowspan="1" colspan="1">Garage door opening</td>
<td align="left" rowspan="1" colspan="1">3.5</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Wind blowing #7</td>
<td align="left" rowspan="1" colspan="1">4.4</td>
<td align="left" rowspan="1" colspan="1">Train squeals, brakes to a stop</td>
<td align="left" rowspan="1" colspan="1">3.5</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Avalanche</td>
<td align="left" rowspan="1" colspan="1">4.4</td>
<td align="left" rowspan="1" colspan="1">
<bold>Exhaust fan automatic turn on and blow</bold>
</td>
<td align="left" rowspan="1" colspan="1">
<bold>3.6</bold>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Glacier break</td>
<td align="left" rowspan="1" colspan="1">4.4</td>
<td align="left" rowspan="1" colspan="1">
<bold>Industry, large press</bold>
</td>
<td align="left" rowspan="1" colspan="1">
<bold>3.6</bold>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Ocean waves #1</td>
<td align="left" rowspan="1" colspan="1">4.5</td>
<td align="left" rowspan="1" colspan="1">
<bold>Jet airplane engine starting</bold>
</td>
<td align="left" rowspan="1" colspan="1">
<bold>3.6</bold>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>Ocean waves #2</bold>
</td>
<td align="left" rowspan="1" colspan="1">
<bold>4.5</bold>
</td>
<td align="left" rowspan="1" colspan="1">
<bold>Heavy machine, quiet</bold>
</td>
<td align="left" rowspan="1" colspan="1">
<bold>3.7</bold>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>Wind blowing, gusty</bold>
</td>
<td align="left" rowspan="1" colspan="1">
<bold>4.5</bold>
</td>
<td align="left" rowspan="1" colspan="1">
<bold>Train passing by</bold>
</td>
<td align="left" rowspan="1" colspan="1">
<bold>3.7</bold>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>Wind, fast, wispy</bold>
</td>
<td align="left" rowspan="1" colspan="1">
<bold>4.5</bold>
</td>
<td align="left" rowspan="1" colspan="1">
<bold>Industry, flywheel</bold>
</td>
<td align="left" rowspan="1" colspan="1">
<bold>4.2</bold>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>Heavy rainstorm with thunder</bold>
</td>
<td align="left" rowspan="1" colspan="1">
<bold>4.6</bold>
</td>
<td align="left" rowspan="1" colspan="1">
<bold>Industry, machinery</bold>
</td>
<td align="left" rowspan="1" colspan="1">
<bold>4.5</bold>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>Ocean waves #3</bold>
</td>
<td align="left" rowspan="1" colspan="1">
<bold>4.6</bold>
</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>Rain, medium hard</bold>
</td>
<td align="left" rowspan="1" colspan="1">
<bold>4.6</bold>
</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>Wind blowing, whistling #1</bold>
</td>
<td align="left" rowspan="1" colspan="1">
<bold>4.7</bold>
</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p>Bold text refers to extreme rated sounds used in Figure
<xref ref-type="fig" rid="F1">1B</xref>
.</p>
</table-wrap-foot>
</table-wrap>
</p>
</app>
</app-group>
</back>
</pmc>
<affiliations>
<list>
<country>
<li>États-Unis</li>
</country>
</list>
<tree>
<country name="États-Unis">
<noRegion>
<name sortKey="Lewis, James W" sort="Lewis, James W" uniqKey="Lewis J" first="James W." last="Lewis">James W. Lewis</name>
</noRegion>
<name sortKey="Frum, Chris A" sort="Frum, Chris A" uniqKey="Frum C" first="Chris A." last="Frum">Chris A. Frum</name>
<name sortKey="Frum, Chris A" sort="Frum, Chris A" uniqKey="Frum C" first="Chris A." last="Frum">Chris A. Frum</name>
<name sortKey="Frum, Chris A" sort="Frum, Chris A" uniqKey="Frum C" first="Chris A." last="Frum">Chris A. Frum</name>
<name sortKey="Lewis, James W" sort="Lewis, James W" uniqKey="Lewis J" first="James W." last="Lewis">James W. Lewis</name>
<name sortKey="Lewis, James W" sort="Lewis, James W" uniqKey="Lewis J" first="James W." last="Lewis">James W. Lewis</name>
<name sortKey="Talkington, William J" sort="Talkington, William J" uniqKey="Talkington W" first="William J." last="Talkington">William J. Talkington</name>
<name sortKey="Talkington, William J" sort="Talkington, William J" uniqKey="Talkington W" first="William J." last="Talkington">William J. Talkington</name>
<name sortKey="Talkington, William J" sort="Talkington, William J" uniqKey="Talkington W" first="William J." last="Talkington">William J. Talkington</name>
<name sortKey="Tallaksen, Katherine C" sort="Tallaksen, Katherine C" uniqKey="Tallaksen K" first="Katherine C." last="Tallaksen">Katherine C. Tallaksen</name>
<name sortKey="Tallaksen, Katherine C" sort="Tallaksen, Katherine C" uniqKey="Tallaksen K" first="Katherine C." last="Tallaksen">Katherine C. Tallaksen</name>
</country>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Ncbi/Merge
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 002026 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd -nk 002026 | SxmlIndent | more

To link to this page within the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Ncbi
   |étape=   Merge
   |type=    RBID
   |clé=     PMC:3348722
   |texte=   Auditory object salience: human cortical processing of non-biological action sounds and their acoustic signal attributes
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/RBID.i   -Sk "pubmed:22582038" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024