Opera Exploration Server

Please note: this site is under development!
Note: this site is generated automatically from raw corpora.
The information has therefore not been validated.

Perception of Words and Pitch Patterns in Song and Speech

Internal identifier: 000220 (Pmc/Checkpoint); previous: 000219; next: 000221

Perception of Words and Pitch Patterns in Song and Speech

Authors: Julia Merrill [Germany]; Daniela Sammler [Germany]; Marc Bangert [Germany]; Dirk Goldhahn [Germany]; Gabriele Lohmann [Germany]; Robert Turner [Germany]; Angela D. Friederici [Germany]

Source:

RBID : PMC:3307374

Abstract

This functional magnetic resonance imaging study examines shared and distinct cortical areas involved in the auditory perception of song and speech at the level of their underlying constituents: words and pitch patterns. Univariate and multivariate analyses were performed to isolate the neural correlates of the word- and pitch-based discrimination between song and speech, corrected for rhythmic differences in both. Therefore, six conditions, arranged in a subtractive hierarchy, were created: sung sentences including words, pitch and rhythm; hummed speech prosody and song melody containing only pitch patterns and rhythm; and as a control the pure musical or speech rhythm. Systematic contrasts between these balanced conditions following their hierarchical organization showed a great overlap between song and speech at all levels in the bilateral temporal lobe, but suggested a differential role of the inferior frontal gyrus (IFG) and intraparietal sulcus (IPS) in processing song and speech. While the left IFG coded for spoken words and showed predominance over the right IFG in prosodic pitch processing, an opposite lateralization was found for pitch in song. The IPS showed sensitivity to discrete pitch relations in song as opposed to the gliding pitch in speech. Finally, the superior temporal gyrus and premotor cortex coded for general differences between words and pitch patterns, irrespective of whether they were sung or spoken. Thus, song and speech share many features which are reflected in a fundamental similarity of brain areas involved in their perception. However, fine-grained acoustic differences on word and pitch level are reflected in the IPS and the lateralized activity of the IFG.


Url:
DOI: 10.3389/fpsyg.2012.00076
PubMed: 22457659
PubMed Central: 3307374





The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Perception of Words and Pitch Patterns in Song and Speech</title>
<author>
<name sortKey="Merrill, Julia" sort="Merrill, Julia" uniqKey="Merrill J" first="Julia" last="Merrill">Julia Merrill</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences</institution>
<country>Leipzig, Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Sammler, Daniela" sort="Sammler, Daniela" uniqKey="Sammler D" first="Daniela" last="Sammler">Daniela Sammler</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences</institution>
<country>Leipzig, Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Bangert, Marc" sort="Bangert, Marc" uniqKey="Bangert M" first="Marc" last="Bangert">Marc Bangert</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences</institution>
<country>Leipzig, Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>Institute of Musicians’ Medicine, Dresden University of Music</institution>
<country>Dresden, Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Goldhahn, Dirk" sort="Goldhahn, Dirk" uniqKey="Goldhahn D" first="Dirk" last="Goldhahn">Dirk Goldhahn</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences</institution>
<country>Leipzig, Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Lohmann, Gabriele" sort="Lohmann, Gabriele" uniqKey="Lohmann G" first="Gabriele" last="Lohmann">Gabriele Lohmann</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences</institution>
<country>Leipzig, Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Turner, Robert" sort="Turner, Robert" uniqKey="Turner R" first="Robert" last="Turner">Robert Turner</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences</institution>
<country>Leipzig, Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Friederici, Angela D" sort="Friederici, Angela D" uniqKey="Friederici A" first="Angela D." last="Friederici">Angela D. Friederici</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences</institution>
<country>Leipzig, Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">22457659</idno>
<idno type="pmc">3307374</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3307374</idno>
<idno type="RBID">PMC:3307374</idno>
<idno type="doi">10.3389/fpsyg.2012.00076</idno>
<date when="2012">2012</date>
<idno type="wicri:Area/Pmc/Corpus">000E39</idno>
<idno type="wicri:Area/Pmc/Curation">000E39</idno>
<idno type="wicri:Area/Pmc/Checkpoint">000220</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Perception of Words and Pitch Patterns in Song and Speech</title>
<author>
<name sortKey="Merrill, Julia" sort="Merrill, Julia" uniqKey="Merrill J" first="Julia" last="Merrill">Julia Merrill</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences</institution>
<country>Leipzig, Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Sammler, Daniela" sort="Sammler, Daniela" uniqKey="Sammler D" first="Daniela" last="Sammler">Daniela Sammler</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences</institution>
<country>Leipzig, Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Bangert, Marc" sort="Bangert, Marc" uniqKey="Bangert M" first="Marc" last="Bangert">Marc Bangert</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences</institution>
<country>Leipzig, Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>Institute of Musicians’ Medicine, Dresden University of Music</institution>
<country>Dresden, Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Goldhahn, Dirk" sort="Goldhahn, Dirk" uniqKey="Goldhahn D" first="Dirk" last="Goldhahn">Dirk Goldhahn</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences</institution>
<country>Leipzig, Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Lohmann, Gabriele" sort="Lohmann, Gabriele" uniqKey="Lohmann G" first="Gabriele" last="Lohmann">Gabriele Lohmann</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences</institution>
<country>Leipzig, Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Turner, Robert" sort="Turner, Robert" uniqKey="Turner R" first="Robert" last="Turner">Robert Turner</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences</institution>
<country>Leipzig, Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Friederici, Angela D" sort="Friederici, Angela D" uniqKey="Friederici A" first="Angela D." last="Friederici">Angela D. Friederici</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences</institution>
<country>Leipzig, Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Frontiers in Psychology</title>
<idno type="e-ISSN">1664-1078</idno>
<imprint>
<date when="2012">2012</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>This functional magnetic resonance imaging study examines shared and distinct cortical areas involved in the auditory perception of song and speech at the level of their underlying constituents: words and pitch patterns. Univariate and multivariate analyses were performed to isolate the neural correlates of the word- and pitch-based discrimination between song and speech, corrected for rhythmic differences in both. Therefore, six conditions, arranged in a subtractive hierarchy, were created: sung sentences including words, pitch and rhythm; hummed speech prosody and song melody containing only pitch patterns and rhythm; and as a control the pure musical or speech rhythm. Systematic contrasts between these balanced conditions following their hierarchical organization showed a great overlap between song and speech at all levels in the bilateral temporal lobe, but suggested a differential role of the inferior frontal gyrus (IFG) and intraparietal sulcus (IPS) in processing song and speech. While the left IFG coded for spoken words and showed predominance over the right IFG in prosodic pitch processing, an opposite lateralization was found for pitch in song. The IPS showed sensitivity to discrete pitch relations in song as opposed to the gliding pitch in speech. Finally, the superior temporal gyrus and premotor cortex coded for general differences between words and pitch patterns, irrespective of whether they were sung or spoken. Thus, song and speech share many features which are reflected in a fundamental similarity of brain areas involved in their perception. However, fine-grained acoustic differences on word and pitch level are reflected in the IPS and the lateralized activity of the IFG.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Abrams, D A" uniqKey="Abrams D">D. A. Abrams</name>
</author>
<author>
<name sortKey="Bhatara, A" uniqKey="Bhatara A">A. Bhatara</name>
</author>
<author>
<name sortKey="Ryali, S" uniqKey="Ryali S">S. Ryali</name>
</author>
<author>
<name sortKey="Balaban, E" uniqKey="Balaban E">E. Balaban</name>
</author>
<author>
<name sortKey="Levitin, D J" uniqKey="Levitin D">D. J. Levitin</name>
</author>
<author>
<name sortKey="Menon, V" uniqKey="Menon V">V. Menon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bode, S" uniqKey="Bode S">S. Bode</name>
</author>
<author>
<name sortKey="Bogler, C" uniqKey="Bogler C">C. Bogler</name>
</author>
<author>
<name sortKey="Soon, C S" uniqKey="Soon C">C. S. Soon</name>
</author>
<author>
<name sortKey="Haynes, J D" uniqKey="Haynes J">J. D. Haynes</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Boemio, A" uniqKey="Boemio A">A. Boemio</name>
</author>
<author>
<name sortKey="Fromm, S" uniqKey="Fromm S">S. Fromm</name>
</author>
<author>
<name sortKey="Braun, A" uniqKey="Braun A">A. Braun</name>
</author>
<author>
<name sortKey="Poeppel, D" uniqKey="Poeppel D">D. Poeppel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bogler, C" uniqKey="Bogler C">C. Bogler</name>
</author>
<author>
<name sortKey="Bode, S" uniqKey="Bode S">S. Bode</name>
</author>
<author>
<name sortKey="Haynes, J D" uniqKey="Haynes J">J. D. Haynes</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bookheimer, S" uniqKey="Bookheimer S">S. Bookheimer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brown, S" uniqKey="Brown S">S. Brown</name>
</author>
<author>
<name sortKey="Martinez, M J" uniqKey="Martinez M">M. J. Martinez</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brown, S" uniqKey="Brown S">S. Brown</name>
</author>
<author>
<name sortKey="Martinez, M J" uniqKey="Martinez M">M. J. Martinez</name>
</author>
<author>
<name sortKey="Parsons, L M" uniqKey="Parsons L">L. M. Parsons</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brown, S" uniqKey="Brown S">S. Brown</name>
</author>
<author>
<name sortKey="Ngan, E" uniqKey="Ngan E">E. Ngan</name>
</author>
<author>
<name sortKey="Liotti, M" uniqKey="Liotti M">M. Liotti</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brown, S" uniqKey="Brown S">S. Brown</name>
</author>
<author>
<name sortKey="Weishaar, K" uniqKey="Weishaar K">K. Weishaar</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Callan, D E" uniqKey="Callan D">D. E. Callan</name>
</author>
<author>
<name sortKey="Kawato, M" uniqKey="Kawato M">M. Kawato</name>
</author>
<author>
<name sortKey="Parsons, L" uniqKey="Parsons L">L. Parsons</name>
</author>
<author>
<name sortKey="Turner, R" uniqKey="Turner R">R. Turner</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Callan, D E" uniqKey="Callan D">D. E. Callan</name>
</author>
<author>
<name sortKey="Tsytsarev, V" uniqKey="Tsytsarev V">V. Tsytsarev</name>
</author>
<author>
<name sortKey="Hanakawa, T" uniqKey="Hanakawa T">T. Hanakawa</name>
</author>
<author>
<name sortKey="Callan, A M" uniqKey="Callan A">A. M. Callan</name>
</author>
<author>
<name sortKey="Katsuhara, M" uniqKey="Katsuhara M">M. Katsuhara</name>
</author>
<author>
<name sortKey="Fukuyama, H" uniqKey="Fukuyama H">H. Fukuyama</name>
</author>
<author>
<name sortKey="Turner, R" uniqKey="Turner R">R. Turner</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chang, E F" uniqKey="Chang E">E. F. Chang</name>
</author>
<author>
<name sortKey="Rieger, J W" uniqKey="Rieger J">J. W. Rieger</name>
</author>
<author>
<name sortKey="Johnson, K" uniqKey="Johnson K">K. Johnson</name>
</author>
<author>
<name sortKey="Berger, M S" uniqKey="Berger M">M. S. Berger</name>
</author>
<author>
<name sortKey="Barbaro, N M" uniqKey="Barbaro N">N. M. Barbaro</name>
</author>
<author>
<name sortKey="Knight, R T" uniqKey="Knight R">R. T. Knight</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Eickhoff, S" uniqKey="Eickhoff S">S. Eickhoff</name>
</author>
<author>
<name sortKey="Stephan, K E" uniqKey="Stephan K">K. E. Stephan</name>
</author>
<author>
<name sortKey="Mohlberg, H" uniqKey="Mohlberg H">H. Mohlberg</name>
</author>
<author>
<name sortKey="Grefkes, C" uniqKey="Grefkes C">C. Grefkes</name>
</author>
<author>
<name sortKey="Fink, G R" uniqKey="Fink G">G. R. Fink</name>
</author>
<author>
<name sortKey="Amunts, K" uniqKey="Amunts K">K. Amunts</name>
</author>
<author>
<name sortKey="Zilles, K" uniqKey="Zilles K">K. Zilles</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fitch, W T" uniqKey="Fitch W">W. T. Fitch</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Foster, N" uniqKey="Foster N">N. Foster</name>
</author>
<author>
<name sortKey="Zatorre, R" uniqKey="Zatorre R">R. Zatorre</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Friederici, A D" uniqKey="Friederici A">A. D. Friederici</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Friederici, A D" uniqKey="Friederici A">A. D. Friederici</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gandour, J" uniqKey="Gandour J">J. Gandour</name>
</author>
<author>
<name sortKey="Dzemidzic, M" uniqKey="Dzemidzic M">M. Dzemidzic</name>
</author>
<author>
<name sortKey="Wong, D" uniqKey="Wong D">D. Wong</name>
</author>
<author>
<name sortKey="Lowe, M" uniqKey="Lowe M">M. Lowe</name>
</author>
<author>
<name sortKey="Tong, Y" uniqKey="Tong Y">Y. Tong</name>
</author>
<author>
<name sortKey="Hsieh, L" uniqKey="Hsieh L">L. Hsieh</name>
</author>
<author>
<name sortKey="Satthamnuwong, N" uniqKey="Satthamnuwong N">N. Satthamnuwong</name>
</author>
<author>
<name sortKey="Lurito, J" uniqKey="Lurito J">J. Lurito</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gandour, J" uniqKey="Gandour J">J. Gandour</name>
</author>
<author>
<name sortKey="Tong, Y" uniqKey="Tong Y">Y. Tong</name>
</author>
<author>
<name sortKey="Wong, D" uniqKey="Wong D">D. Wong</name>
</author>
<author>
<name sortKey="Talavage, T" uniqKey="Talavage T">T. Talavage</name>
</author>
<author>
<name sortKey="Dzemidzic, M" uniqKey="Dzemidzic M">M. Dzemidzic</name>
</author>
<author>
<name sortKey="Xu, Y" uniqKey="Xu Y">Y. Xu</name>
</author>
<author>
<name sortKey="Li, X" uniqKey="Li X">X. Li</name>
</author>
<author>
<name sortKey="Lowew, M" uniqKey="Lowew M">M. Lowew</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gandour, J" uniqKey="Gandour J">J. Gandour</name>
</author>
<author>
<name sortKey="Wong, D" uniqKey="Wong D">D. Wong</name>
</author>
<author>
<name sortKey="Hsieh, L" uniqKey="Hsieh L">L. Hsieh</name>
</author>
<author>
<name sortKey="Weinzapfel, B" uniqKey="Weinzapfel B">B. Weinzapfel</name>
</author>
<author>
<name sortKey="Van Lancker, D" uniqKey="Van Lancker D">D. Van Lancker</name>
</author>
<author>
<name sortKey="Hutchins, G D" uniqKey="Hutchins G">G. D. Hutchins</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Giraud, A L" uniqKey="Giraud A">A. L. Giraud</name>
</author>
<author>
<name sortKey="Kell, C" uniqKey="Kell C">C. Kell</name>
</author>
<author>
<name sortKey="Thierfelder, C" uniqKey="Thierfelder C">C. Thierfelder</name>
</author>
<author>
<name sortKey="Sterzer, P" uniqKey="Sterzer P">P. Sterzer</name>
</author>
<author>
<name sortKey="Russ, M O" uniqKey="Russ M">M. O. Russ</name>
</author>
<author>
<name sortKey="Preibisch, C" uniqKey="Preibisch C">C. Preibisch</name>
</author>
<author>
<name sortKey="Kleinschmidt, A" uniqKey="Kleinschmidt A">A. Kleinschmidt</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Grefkes, C" uniqKey="Grefkes C">C. Grefkes</name>
</author>
<author>
<name sortKey="Fink, G R" uniqKey="Fink G">G. R. Fink</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Griffiths, T" uniqKey="Griffiths T">T. Griffiths</name>
</author>
<author>
<name sortKey="Warren, J D" uniqKey="Warren J">J. D. Warren</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gunji, A" uniqKey="Gunji A">A. Gunji</name>
</author>
<author>
<name sortKey="Ishii, R" uniqKey="Ishii R">R. Ishii</name>
</author>
<author>
<name sortKey="Chau, W" uniqKey="Chau W">W. Chau</name>
</author>
<author>
<name sortKey="Kakigi, R" uniqKey="Kakigi R">R. Kakigi</name>
</author>
<author>
<name sortKey="Pantev, C" uniqKey="Pantev C">C. Pantev</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Halpern, A R" uniqKey="Halpern A">A. R. Halpern</name>
</author>
<author>
<name sortKey="Zatorre, R J" uniqKey="Zatorre R">R. J. Zatorre</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hanke, M" uniqKey="Hanke M">M. Hanke</name>
</author>
<author>
<name sortKey="Halchenko, Y O" uniqKey="Halchenko Y">Y. O. Halchenko</name>
</author>
<author>
<name sortKey="Sederberg, P B" uniqKey="Sederberg P">P. B. Sederberg</name>
</author>
<author>
<name sortKey="Hanson, S J" uniqKey="Hanson S">S. J. Hanson</name>
</author>
<author>
<name sortKey="Haxby, J V" uniqKey="Haxby J">J. V. Haxby</name>
</author>
<author>
<name sortKey="Pollmann, S" uniqKey="Pollmann S">S. Pollmann</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Haxby, J" uniqKey="Haxby J">J. Haxby</name>
</author>
<author>
<name sortKey="Gobbini, M" uniqKey="Gobbini M">M. Gobbini</name>
</author>
<author>
<name sortKey="Furey, M" uniqKey="Furey M">M. Furey</name>
</author>
<author>
<name sortKey="Ishai, A" uniqKey="Ishai A">A. Ishai</name>
</author>
<author>
<name sortKey="Schouten, J" uniqKey="Schouten J">J. Schouten</name>
</author>
<author>
<name sortKey="Pietrini, P" uniqKey="Pietrini P">P. Pietrini</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Haynes, J D" uniqKey="Haynes J">J. D. Haynes</name>
</author>
<author>
<name sortKey="Rees, G" uniqKey="Rees G">G. Rees</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Haynes, J D" uniqKey="Haynes J">J. D. Haynes</name>
</author>
<author>
<name sortKey="Rees, G" uniqKey="Rees G">G. Rees</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hickok, G" uniqKey="Hickok G">G. Hickok</name>
</author>
<author>
<name sortKey="Buchsbaum, B" uniqKey="Buchsbaum B">B. Buchsbaum</name>
</author>
<author>
<name sortKey="Humphries, C" uniqKey="Humphries C">C. Humphries</name>
</author>
<author>
<name sortKey="Muftuler, T" uniqKey="Muftuler T">T. Muftuler</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hsieh, L" uniqKey="Hsieh L">L. Hsieh</name>
</author>
<author>
<name sortKey="Gandour, J" uniqKey="Gandour J">J. Gandour</name>
</author>
<author>
<name sortKey="Wong, D" uniqKey="Wong D">D. Wong</name>
</author>
<author>
<name sortKey="Hutchins, G D" uniqKey="Hutchins G">G. D. Hutchins</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Husain, M" uniqKey="Husain M">M. Husain</name>
</author>
<author>
<name sortKey="Nachev, P" uniqKey="Nachev P">P. Nachev</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jamison, H L" uniqKey="Jamison H">H. L. Jamison</name>
</author>
<author>
<name sortKey="Watkins, K E" uniqKey="Watkins K">K. E. Watkins</name>
</author>
<author>
<name sortKey="Bishop, D V M" uniqKey="Bishop D">D. V. M. Bishop</name>
</author>
<author>
<name sortKey="Matthews, P M" uniqKey="Matthews P">P. M. Matthews</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jeffries, K J" uniqKey="Jeffries K">K. J. Jeffries</name>
</author>
<author>
<name sortKey="Fritz, J B" uniqKey="Fritz J">J. B. Fritz</name>
</author>
<author>
<name sortKey="Braun, A R" uniqKey="Braun A">A. R. Braun</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kahnt, T" uniqKey="Kahnt T">T. Kahnt</name>
</author>
<author>
<name sortKey="Grueschow, O" uniqKey="Grueschow O">O. Grueschow</name>
</author>
<author>
<name sortKey="Speck, O" uniqKey="Speck O">O. Speck</name>
</author>
<author>
<name sortKey="Haynes, J D" uniqKey="Haynes J">J. D. Haynes</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kleber, B" uniqKey="Kleber B">B. Kleber</name>
</author>
<author>
<name sortKey="Birbaumer, N" uniqKey="Birbaumer N">N. Birbaumer</name>
</author>
<author>
<name sortKey="Veit, R" uniqKey="Veit R">R. Veit</name>
</author>
<author>
<name sortKey="Trevorrow, T" uniqKey="Trevorrow T">T. Trevorrow</name>
</author>
<author>
<name sortKey="Lotze, M" uniqKey="Lotze M">M. Lotze</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kleber, B" uniqKey="Kleber B">B. Kleber</name>
</author>
<author>
<name sortKey="Veit, R" uniqKey="Veit R">R. Veit</name>
</author>
<author>
<name sortKey="Birbaumer, N" uniqKey="Birbaumer N">N. Birbaumer</name>
</author>
<author>
<name sortKey="Gruzelier, J" uniqKey="Gruzelier J">J. Gruzelier</name>
</author>
<author>
<name sortKey="Lotze, M" uniqKey="Lotze M">M. Lotze</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Klein, J C" uniqKey="Klein J">J. C. Klein</name>
</author>
<author>
<name sortKey="Zatorre, R" uniqKey="Zatorre R">R. Zatorre</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Klein, J C" uniqKey="Klein J">J. C. Klein</name>
</author>
<author>
<name sortKey="Zatorre, R" uniqKey="Zatorre R">R. Zatorre</name>
</author>
<author>
<name sortKey="Milner, B" uniqKey="Milner B">B. Milner</name>
</author>
<author>
<name sortKey="Zhao, V" uniqKey="Zhao V">V. Zhao</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Koelsch, S" uniqKey="Koelsch S">S. Koelsch</name>
</author>
<author>
<name sortKey="Siebel, W A" uniqKey="Siebel W">W. A. Siebel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kotz, S A" uniqKey="Kotz S">S. A. Kotz</name>
</author>
<author>
<name sortKey="Meyer, M" uniqKey="Meyer M">M. Meyer</name>
</author>
<author>
<name sortKey="Alter, K" uniqKey="Alter K">K. Alter</name>
</author>
<author>
<name sortKey="Besson, M" uniqKey="Besson M">M. Besson</name>
</author>
<author>
<name sortKey="Von Cramon, Y D" uniqKey="Von Cramon Y">Y. D. von Cramon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kriegeskorte, N" uniqKey="Kriegeskorte N">N. Kriegeskorte</name>
</author>
<author>
<name sortKey="Goebel, R" uniqKey="Goebel R">R. Goebel</name>
</author>
<author>
<name sortKey="Bandettini, P" uniqKey="Bandettini P">P. Bandettini</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lerdahl, F" uniqKey="Lerdahl F">F. Lerdahl</name>
</author>
<author>
<name sortKey="Jackendoff, R" uniqKey="Jackendoff R">R. Jackendoff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Meyer, M" uniqKey="Meyer M">M. Meyer</name>
</author>
<author>
<name sortKey="Alter, K" uniqKey="Alter K">K. Alter</name>
</author>
<author>
<name sortKey="Friederici, A D" uniqKey="Friederici A">A. D. Friederici</name>
</author>
<author>
<name sortKey="Lohmann, G" uniqKey="Lohmann G">G. Lohmann</name>
</author>
<author>
<name sortKey="Von Cramon, D Y" uniqKey="Von Cramon D">D. Y. von Cramon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Meyer, M" uniqKey="Meyer M">M. Meyer</name>
</author>
<author>
<name sortKey="Steinhauer, K" uniqKey="Steinhauer K">K. Steinhauer</name>
</author>
<author>
<name sortKey="Alter, K" uniqKey="Alter K">K. Alter</name>
</author>
<author>
<name sortKey="Friederici, A D" uniqKey="Friederici A">A. D. Friederici</name>
</author>
<author>
<name sortKey="Von Cramon, D Y" uniqKey="Von Cramon D">D. Y. von Cramon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nichols, T" uniqKey="Nichols T">T. Nichols</name>
</author>
<author>
<name sortKey="Brett, M" uniqKey="Brett M">M. Brett</name>
</author>
<author>
<name sortKey="Andersson, J" uniqKey="Andersson J">J. Andersson</name>
</author>
<author>
<name sortKey="Wager, T" uniqKey="Wager T">T. Wager</name>
</author>
<author>
<name sortKey="Poline, J B" uniqKey="Poline J">J.-B. Poline</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Norman, K A" uniqKey="Norman K">K. A. Norman</name>
</author>
<author>
<name sortKey="Polyn, S M" uniqKey="Polyn S">S. M. Polyn</name>
</author>
<author>
<name sortKey="Detre, G J" uniqKey="Detre G">G. J. Detre</name>
</author>
<author>
<name sortKey="Haxby, J V" uniqKey="Haxby J">J. V. Haxby</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Obleser, J" uniqKey="Obleser J">J. Obleser</name>
</author>
<author>
<name sortKey="Eisner, F" uniqKey="Eisner F">F. Eisner</name>
</author>
<author>
<name sortKey="Kotz, S" uniqKey="Kotz S">S. Kotz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Okada, K" uniqKey="Okada K">K. Okada</name>
</author>
<author>
<name sortKey="Rong, F" uniqKey="Rong F">F. Rong</name>
</author>
<author>
<name sortKey="Venezia, J" uniqKey="Venezia J">J. Venezia</name>
</author>
<author>
<name sortKey="Matchin, W" uniqKey="Matchin W">W. Matchin</name>
</author>
<author>
<name sortKey="Hsieh, I" uniqKey="Hsieh I">I. Hsieh</name>
</author>
<author>
<name sortKey="Saberi, K" uniqKey="Saberi K">K. Saberi</name>
</author>
<author>
<name sortKey="Serences, J T" uniqKey="Serences J">J. T. Serences</name>
</author>
<author>
<name sortKey="Hickok, G" uniqKey="Hickok G">G. Hickok</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ozdemir, E" uniqKey="Ozdemir E">E. Özdemir</name>
</author>
<author>
<name sortKey="Norton, A" uniqKey="Norton A">A. Norton</name>
</author>
<author>
<name sortKey="Schlaug, G" uniqKey="Schlaug G">G. Schlaug</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Patel, A D" uniqKey="Patel A">A. D. Patel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Patterson, R D" uniqKey="Patterson R">R. D. Patterson</name>
</author>
<author>
<name sortKey="Uppenkamp, S" uniqKey="Uppenkamp S">S. Uppenkamp</name>
</author>
<author>
<name sortKey="Johnsrude, I S" uniqKey="Johnsrude I">I. S. Johnsrude</name>
</author>
<author>
<name sortKey="Griffiths, T D" uniqKey="Griffiths T">T. D. Griffiths</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Perry, D W" uniqKey="Perry D">D. W. Perry</name>
</author>
<author>
<name sortKey="Zatorre, R J" uniqKey="Zatorre R">R. J. Zatorre</name>
</author>
<author>
<name sortKey="Petrides, M" uniqKey="Petrides M">M. Petrides</name>
</author>
<author>
<name sortKey="Alivisatos, B" uniqKey="Alivisatos B">B. Alivisatos</name>
</author>
<author>
<name sortKey="Meyer, E" uniqKey="Meyer E">E. Meyer</name>
</author>
<author>
<name sortKey="Evans, A C" uniqKey="Evans A">A. C. Evans</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Petacchi, A" uniqKey="Petacchi A">A. Petacchi</name>
</author>
<author>
<name sortKey="Laird, A R" uniqKey="Laird A">A. R. Laird</name>
</author>
<author>
<name sortKey="Fox, P T" uniqKey="Fox P">P. T. Fox</name>
</author>
<author>
<name sortKey="Bower, J M" uniqKey="Bower J">J. M. Bower</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Plante, E" uniqKey="Plante E">E. Plante</name>
</author>
<author>
<name sortKey="Creusere, M" uniqKey="Creusere M">M. Creusere</name>
</author>
<author>
<name sortKey="Sabin, C" uniqKey="Sabin C">C. Sabin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Poeppel, D" uniqKey="Poeppel D">D. Poeppel</name>
</author>
<author>
<name sortKey="Guillemin, A" uniqKey="Guillemin A">A. Guillemin</name>
</author>
<author>
<name sortKey="Thompson, J" uniqKey="Thompson J">J. Thompson</name>
</author>
<author>
<name sortKey="Fritz, J" uniqKey="Fritz J">J. Fritz</name>
</author>
<author>
<name sortKey="Bavelier, D" uniqKey="Bavelier D">D. Bavelier</name>
</author>
<author>
<name sortKey="Braun, A R" uniqKey="Braun A">A. R. Braun</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Riecker, A" uniqKey="Riecker A">A. Riecker</name>
</author>
<author>
<name sortKey="Wildgruber, D" uniqKey="Wildgruber D">D. Wildgruber</name>
</author>
<author>
<name sortKey="Dogil, G" uniqKey="Dogil G">G. Dogil</name>
</author>
<author>
<name sortKey="Grodd, W" uniqKey="Grodd W">W. Grodd</name>
</author>
<author>
<name sortKey="Ackermann, H" uniqKey="Ackermann H">H. Ackermann</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Saito, Y" uniqKey="Saito Y">Y. Saito</name>
</author>
<author>
<name sortKey="Ishii, K" uniqKey="Ishii K">K. Ishii</name>
</author>
<author>
<name sortKey="Yagi, K" uniqKey="Yagi K">K. Yagi</name>
</author>
<author>
<name sortKey="Tatsumi, I F" uniqKey="Tatsumi I">I. F. Tatsumi</name>
</author>
<author>
<name sortKey="Mizusawa, H" uniqKey="Mizusawa H">H. Mizusawa</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sammler, D" uniqKey="Sammler D">D. Sammler</name>
</author>
<author>
<name sortKey="Baird, A" uniqKey="Baird A">A. Baird</name>
</author>
<author>
<name sortKey="Valabregue, R" uniqKey="Valabregue R">R. Valabrègue</name>
</author>
<author>
<name sortKey="Clement, S" uniqKey="Clement S">S. Clement</name>
</author>
<author>
<name sortKey="Dupont, S" uniqKey="Dupont S">S. Dupont</name>
</author>
<author>
<name sortKey="Belin, P" uniqKey="Belin P">P. Belin</name>
</author>
<author>
<name sortKey="Samson, S" uniqKey="Samson S">S. Samson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Samson, F" uniqKey="Samson F">F. Samson</name>
</author>
<author>
<name sortKey="Zeffiro, T A" uniqKey="Zeffiro T">T. A. Zeffiro</name>
</author>
<author>
<name sortKey="Toussaint, A" uniqKey="Toussaint A">A. Toussaint</name>
</author>
<author>
<name sortKey="Belin, P" uniqKey="Belin P">P. Belin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schmahmann, J D" uniqKey="Schmahmann J">J. D. Schmahmann</name>
</author>
<author>
<name sortKey="Doyon, J" uniqKey="Doyon J">J. Doyon</name>
</author>
<author>
<name sortKey="Toga, A W" uniqKey="Toga A">A. W. Toga</name>
</author>
<author>
<name sortKey="Petrides, M" uniqKey="Petrides M">M. Petrides</name>
</author>
<author>
<name sortKey="Evans, A C" uniqKey="Evans A">A. C. Evans</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schmithorst, V J" uniqKey="Schmithorst V">V. J. Schmithorst</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schon, D" uniqKey="Schon D">D. Schön</name>
</author>
<author>
<name sortKey="Gordon, R" uniqKey="Gordon R">R. Gordon</name>
</author>
<author>
<name sortKey="Campagne, A" uniqKey="Campagne A">A. Campagne</name>
</author>
<author>
<name sortKey="Magne, C" uniqKey="Magne C">C. Magne</name>
</author>
<author>
<name sortKey="Astesano, C J L A" uniqKey="Astesano C">C. J.-L. A. Astesano</name>
</author>
<author>
<name sortKey="Besson, M" uniqKey="Besson M">M. Besson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schonwiesner, M" uniqKey="Schonwiesner M">M. Schönwiesner</name>
</author>
<author>
<name sortKey="Zatorre, R J" uniqKey="Zatorre R">R. J. Zatorre</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Seidner, W" uniqKey="Seidner W">W. Seidner</name>
</author>
<author>
<name sortKey="Wendler, J" uniqKey="Wendler J">J. Wendler</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stoodley, C J" uniqKey="Stoodley C">C. J. Stoodley</name>
</author>
<author>
<name sortKey="Schmahmann, J D" uniqKey="Schmahmann J">J. D. Schmahmann</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sundberg, J" uniqKey="Sundberg J">J. Sundberg</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tillmann, B" uniqKey="Tillmann B">B. Tillmann</name>
</author>
<author>
<name sortKey="Koelsch, S" uniqKey="Koelsch S">S. Koelsch</name>
</author>
<author>
<name sortKey="Escoffier, N" uniqKey="Escoffier N">N. Escoffier</name>
</author>
<author>
<name sortKey="Bigand, E" uniqKey="Bigand E">E. Bigand</name>
</author>
<author>
<name sortKey="Lalitte, P" uniqKey="Lalitte P">P. Lalitte</name>
</author>
<author>
<name sortKey="Friederici, A D" uniqKey="Friederici A">A. D. Friederici</name>
</author>
<author>
<name sortKey="Von Cramon, D Y" uniqKey="Von Cramon D">D. Y. von Cramon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tusche, A" uniqKey="Tusche A">A. Tusche</name>
</author>
<author>
<name sortKey="Bode, S" uniqKey="Bode S">S. Bode</name>
</author>
<author>
<name sortKey="Haynes, J D" uniqKey="Haynes J">J. D. Haynes</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Warren, J D" uniqKey="Warren J">J. D. Warren</name>
</author>
<author>
<name sortKey="Jennings, A R" uniqKey="Jennings A">A. R. Jennings</name>
</author>
<author>
<name sortKey="Griffiths, T D" uniqKey="Griffiths T">T. D. Griffiths</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Westbury, C F" uniqKey="Westbury C">C. F. Westbury</name>
</author>
<author>
<name sortKey="Zatorre, R J" uniqKey="Zatorre R">R. J. Zatorre</name>
</author>
<author>
<name sortKey="Evans, A C" uniqKey="Evans A">A. C. Evans</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wildgruber, D" uniqKey="Wildgruber D">D. Wildgruber</name>
</author>
<author>
<name sortKey="Ackermann, H" uniqKey="Ackermann H">H. Ackermann</name>
</author>
<author>
<name sortKey="Klose, U" uniqKey="Klose U">U. Klose</name>
</author>
<author>
<name sortKey="Kardatzki, B" uniqKey="Kardatzki B">B. Kardatzki</name>
</author>
<author>
<name sortKey="Grodd, W" uniqKey="Grodd W">W. Grodd</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wildgruber, D" uniqKey="Wildgruber D">D. Wildgruber</name>
</author>
<author>
<name sortKey="Pihan, H" uniqKey="Pihan H">H. Pihan</name>
</author>
<author>
<name sortKey="Ackermann, M" uniqKey="Ackermann M">M. Ackermann</name>
</author>
<author>
<name sortKey="Erb, M" uniqKey="Erb M">M. Erb</name>
</author>
<author>
<name sortKey="Grodd, W" uniqKey="Grodd W">W. Grodd</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wilson, S M" uniqKey="Wilson S">S. M. Wilson</name>
</author>
<author>
<name sortKey="Saygin, A P" uniqKey="Saygin A">A. P. Saygin</name>
</author>
<author>
<name sortKey="Sereno, M" uniqKey="Sereno M">M. Sereno</name>
</author>
<author>
<name sortKey="Iacoboni, M" uniqKey="Iacoboni M">M. Iacoboni</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zatorre, R" uniqKey="Zatorre R">R. Zatorre</name>
</author>
<author>
<name sortKey="Evans, A C" uniqKey="Evans A">A. C. Evans</name>
</author>
<author>
<name sortKey="Meyer, E" uniqKey="Meyer E">E. Meyer</name>
</author>
<author>
<name sortKey="Gjedde, A" uniqKey="Gjedde A">A. Gjedde</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zatorre, R" uniqKey="Zatorre R">R. Zatorre</name>
</author>
<author>
<name sortKey="Halpern, A R" uniqKey="Halpern A">A. R. Halpern</name>
</author>
<author>
<name sortKey="Bouffard, M" uniqKey="Bouffard M">M. Bouffard</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zatorre, R J" uniqKey="Zatorre R">R. J. Zatorre</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zatorre, R J" uniqKey="Zatorre R">R. J. Zatorre</name>
</author>
<author>
<name sortKey="Belin, P" uniqKey="Belin P">P. Belin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zatorre, R J" uniqKey="Zatorre R">R. J. Zatorre</name>
</author>
<author>
<name sortKey="Belin, P" uniqKey="Belin P">P. Belin</name>
</author>
<author>
<name sortKey="Penhune, V B" uniqKey="Penhune V">V. B. Penhune</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zatorre, R J" uniqKey="Zatorre R">R. J. Zatorre</name>
</author>
<author>
<name sortKey="Evans, A C" uniqKey="Evans A">A. C. Evans</name>
</author>
<author>
<name sortKey="Meyer, E" uniqKey="Meyer E">E. Meyer</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Front Psychol</journal-id>
<journal-id journal-id-type="publisher-id">Front. Psychology</journal-id>
<journal-title-group>
<journal-title>Frontiers in Psychology</journal-title>
</journal-title-group>
<issn pub-type="epub">1664-1078</issn>
<publisher>
<publisher-name>Frontiers Research Foundation</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">22457659</article-id>
<article-id pub-id-type="pmc">3307374</article-id>
<article-id pub-id-type="doi">10.3389/fpsyg.2012.00076</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Psychology</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Perception of Words and Pitch Patterns in Song and Speech</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Merrill</surname>
<given-names>Julia</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Sammler</surname>
<given-names>Daniela</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Bangert</surname>
<given-names>Marc</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Goldhahn</surname>
<given-names>Dirk</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Lohmann</surname>
<given-names>Gabriele</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Turner</surname>
<given-names>Robert</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Friederici</surname>
<given-names>Angela D.</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="author-notes" rid="fn001">*</xref>
</contrib>
</contrib-group>
<aff id="aff1">
<sup>1</sup>
<institution>Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences</institution>
<country>Leipzig, Germany</country>
</aff>
<aff id="aff2">
<sup>2</sup>
<institution>Institute of Musicians’ Medicine, Dresden University of Music</institution>
<country>Dresden, Germany</country>
</aff>
<author-notes>
<fn fn-type="edited-by">
<p>Edited by: Pascal Belin, University of Glasgow, UK</p>
</fn>
<fn fn-type="edited-by">
<p>Reviewed by: Matthew H. Davis, MRC Cognition and Brain Sciences Unit, UK; Attila Andics, Semmelweis University, Hungary; Giancarlo Valente, Maastricht University, Netherlands</p>
</fn>
<corresp id="fn001">*Correspondence: Angela D. Friederici, Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1a, 04103 Leipzig, Germany. e-mail:
<email>angelafr@cbs.mpg.de</email>
</corresp>
<fn fn-type="other" id="fn002">
<p>This article was submitted to Frontiers in Auditory Cognitive Neuroscience, a specialty of Frontiers in Psychology.</p>
</fn>
</author-notes>
<pub-date pub-type="epub">
<day>19</day>
<month>3</month>
<year>2012</year>
</pub-date>
<pub-date pub-type="collection">
<year>2012</year>
</pub-date>
<volume>3</volume>
<elocation-id>76</elocation-id>
<history>
<date date-type="received">
<day>11</day>
<month>11</month>
<year>2011</year>
</date>
<date date-type="accepted">
<day>01</day>
<month>3</month>
<year>2012</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright © 2012 Merrill, Sammler, Bangert, Goldhahn, Lohmann, Turner and Friederici.</copyright-statement>
<copyright-year>2012</copyright-year>
<license license-type="open-access" xlink:href="http://www.frontiersin.org/licenseagreement">
<license-p>This is an open-access article distributed under the terms of the
<uri xlink:type="simple" xlink:href="http://creativecommons.org/licenses/by-nc/3.0/">Creative Commons Attribution Non Commercial License</uri>
, which permits non-commercial use, distribution, and reproduction in other forums, provided the original authors and source are credited.</license-p>
</license>
</permissions>
<abstract>
<p>This functional magnetic resonance imaging study examines shared and distinct cortical areas involved in the auditory perception of song and speech at the level of their underlying constituents: words and pitch patterns. Univariate and multivariate analyses were performed to isolate the neural correlates of the word- and pitch-based discrimination between song and speech, corrected for rhythmic differences in both. To this end, six conditions, arranged in a subtractive hierarchy, were created: sung sentences including words, pitch, and rhythm; hummed speech prosody and song melody containing only pitch patterns and rhythm; and as a control the pure musical or speech rhythm. Systematic contrasts between these balanced conditions following their hierarchical organization showed substantial overlap between song and speech at all levels in the bilateral temporal lobe, but suggested a differential role of the inferior frontal gyrus (IFG) and intraparietal sulcus (IPS) in processing song and speech. While the left IFG coded for spoken words and showed predominance over the right IFG in prosodic pitch processing, an opposite lateralization was found for pitch in song. The IPS showed sensitivity to discrete pitch relations in song as opposed to the gliding pitch in speech. Finally, the superior temporal gyrus and premotor cortex coded for general differences between words and pitch patterns, irrespective of whether they were sung or spoken. Thus, song and speech share many features, which is reflected in a fundamental similarity of brain areas involved in their perception. However, fine-grained acoustic differences at the word and pitch levels are reflected in the IPS and the lateralized activity of the IFG.</p>
</abstract>
<kwd-group>
<kwd>song</kwd>
<kwd>speech</kwd>
<kwd>prosody</kwd>
<kwd>melody</kwd>
<kwd>pitch</kwd>
<kwd>words</kwd>
<kwd>fMRI</kwd>
<kwd>MVPA</kwd>
</kwd-group>
<counts>
<fig-count count="4"></fig-count>
<table-count count="3"></table-count>
<equation-count count="0"></equation-count>
<ref-count count="80"></ref-count>
<page-count count="13"></page-count>
<word-count count="10404"></word-count>
</counts>
</article-meta>
</front>
<body>
<sec>
<title>Introduction</title>
<p>Nobody would ever confuse a dialog and an aria in an opera such as Mozart’s “The Magic Flute,” just as everybody would be able to tell whether the lyrics of the national anthem were spoken or sung. What makes the difference between song and speech, and how do our brains code for it?</p>
<p>Song and speech are multi-faceted stimuli that are similar in many features and at the same time different in others. For example, both sung and spoken utterances express meaning through words and thus share the phonology, phonotactics, syntax, and semantics of the communicated language (Brown et al.,
<xref ref-type="bibr" rid="B7">2006</xref>
). However, words in sung and spoken language exhibit important differences in fine-grained acoustic aspects: articulation of the same words is often more precise and vowel duration considerably longer in sung compared to spoken language (Seidner and Wendler,
<xref ref-type="bibr" rid="B65">1978</xref>
). Furthermore, the formant structure of the vowels is often modified by singing style and technique, as for example reflected in a Singer’s Formant in professional singing (Sundberg,
<xref ref-type="bibr" rid="B67">1970</xref>
).</p>
<p>Both song and speech have an underlying melody or pitch pattern, but these vary in some details. Song melody depends on the rule-based (syntactic) arrangement of 12 discrete pitches per octave into scales, as described by music theory (cf. Lerdahl and Jackendoff,
<xref ref-type="bibr" rid="B43">1983</xref>
). The melody underlying a spoken utterance is called prosody and may indicate a speaker’s emotional state (emotional prosody), determine the category of a sentence, such as question or statement, and aid language comprehension in terms of accentuation and boundary marking (linguistic prosody). In contrast to a sung melody, a natural spoken utterance carries a pattern of gliding rather than discrete pitches, which are not related to scales but vary continuously (for an overview, see Patel,
<xref ref-type="bibr" rid="B51">2008</xref>
).</p>
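This distinction between discrete and gliding pitch can be made concrete: measuring how far each pitch lies from the nearest equal-tempered semitone separates scale-based melody from continuously varying prosody. A minimal sketch (the f0 values are invented for illustration, and `cents_off_grid` is our own helper, not part of the study):

```python
import numpy as np

A4 = 440.0  # reference tuning in Hz

def cents_off_grid(f0_hz):
    """Deviation (in cents) of each pitch from the nearest
    equal-tempered semitone, i.e., a 12-pitches-per-octave grid."""
    semitones = 12 * np.log2(np.asarray(f0_hz) / A4)
    return 100 * (semitones - np.round(semitones))

# Sung pitches tend to sit close to scale steps (here G3, A3, B3, C4) ...
sung = [196.0, 220.0, 246.9, 261.6]
# ... while spoken f0 glides continuously between them.
spoken = [205.3, 214.8, 236.0, 251.7]

print(np.abs(cents_off_grid(sung)).mean())    # well under a few cents
print(np.abs(cents_off_grid(spoken)).mean())  # tens of cents off the grid
```

A pitch tracker applied to real recordings would yield noisier values, but the same grid-deviation statistic distinguishes the two classes of pitch patterns.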
<p>Altogether, song and speech, although similar in many aspects, differ in a number of acoustic parameters that our brains may capture and analyze to determine whether a stimulus is sung or spoken. The present study sets out to explore the neurocognitive architecture underlying the perception of song and speech at the level of their underlying constituents – words and pitch patterns.</p>
<p>Previous functional magnetic resonance imaging (fMRI) studies on the neural correlates of singing and speaking focused predominantly on differences between song and speech production (overt, covert, or imagined; Wildgruber et al.,
<xref ref-type="bibr" rid="B72">1996</xref>
; Riecker et al.,
<xref ref-type="bibr" rid="B57">2002</xref>
; Jeffries et al.,
<xref ref-type="bibr" rid="B34">2003</xref>
; Özdemir et al.,
<xref ref-type="bibr" rid="B50">2006</xref>
; Gunji et al.,
<xref ref-type="bibr" rid="B24">2007</xref>
) or compared production with perception (Callan et al.,
<xref ref-type="bibr" rid="B11">2006</xref>
; Saito et al.,
<xref ref-type="bibr" rid="B58">2006</xref>
) whereas pure perception studies are rare (Sammler et al.,
<xref ref-type="bibr" rid="B59">2010</xref>
; Schön et al.,
<xref ref-type="bibr" rid="B63">2010</xref>
). Two main experimental approaches have been used in this field: either syllable singing of folksongs or known instrumental music was contrasted with the recitation of highly automated word strings (e.g., names of the months; Wildgruber et al.,
<xref ref-type="bibr" rid="B72">1996</xref>
; Riecker et al.,
<xref ref-type="bibr" rid="B57">2002</xref>
), or well-known sung folksongs were contrasted with the spoken lyrics of the same song (Jeffries et al.,
<xref ref-type="bibr" rid="B34">2003</xref>
; Callan et al.,
<xref ref-type="bibr" rid="B11">2006</xref>
; Saito et al.,
<xref ref-type="bibr" rid="B58">2006</xref>
; Gunji et al.,
<xref ref-type="bibr" rid="B24">2007</xref>
).</p>
<p>Despite the above-mentioned methodological diversity, most of the production and perception studies report a general lateralization effect for speech to the left and for song to the right hemisphere. For example, Callan et al. (
<xref ref-type="bibr" rid="B11">2006</xref>
) compared listening to sung (SNG) and spoken (SPK) versions of well-known Japanese songs and found significantly stronger activation of the right anterior superior temporal gyrus (STG) for SNG and a more strongly left-lateralized activity pattern for SPK. These findings led the authors to suggest that the right or left lateralization could act as a neural determiner for melody or speech processing, respectively. Schön et al. (
<xref ref-type="bibr" rid="B63">2010</xref>
) extended this view by suggesting that, within song, linguistic (i.e., word) and musical (i.e., pitch) parameters show a differential hemispheric specialization. Their participants listened to pairs of spoken words, sung words, and “vocalize” (i.e., singing on a syllable) while performing a same/different task. Brain activation patterns related to the processing of musical aspects of song, isolated by contrasting sung vs. spoken words, showed more extended activations in the right temporal lobe, whereas the processing of linguistic aspects (such as phonology, syntax, and semantics), determined by contrasting song vs. vocalize, showed a predominance in the left temporal lobe.</p>
<p>Thus, both production and perception data seem to suggest a predominant role of the right hemisphere in the processing of song due to pronounced musical features of the stimulus and a stronger left hemisphere involvement in speech due to focused linguistic processing. Notably, the most recent studies (Callan et al.,
<xref ref-type="bibr" rid="B11">2006</xref>
; Schön et al.,
<xref ref-type="bibr" rid="B63">2010</xref>
) allude to the possibility that different aspects of spoken and sung language lead to different lateralization patterns, calling for an experiment that carefully dissects these aspects in order to draw a conclusive picture of the neural distinction between song and speech perception.</p>
<p>Due to the restricted number and particular choice of experimental conditions, previous studies (Callan et al.,
<xref ref-type="bibr" rid="B11">2006</xref>
; Schön et al.,
<xref ref-type="bibr" rid="B63">2010</xref>
) did not allow for fully separating out the influence of words, pitch patterns, or other (uncontrolled) acoustic parameters on the differential coding for sung and spoken language in the brain.</p>
<p>Particularly, when it comes to the comparison of pitch patterns between song and speech, it must be taken into account that the melodies in song and speech (most obvious when they are produced on sentence level) do not only differ in their pitch contour, but have also different underlying rhythm patterns. Rhythm differences in song and speech concern mainly the periodicity, i.e., the metric conception. Meter describes the grouping of beats and their accentuation. Temporal periodicity in musical meter is much stricter than in speech and the regular periodicities of music allow meter to serve as a mental framework for sound perception. As pointed out by Patel (2008, p. 194) “there is no evidence that speech has a regular beat, or has meter in the sense of multiple periodicities.” Brown and Weishaar (
<xref ref-type="bibr" rid="B9">2010</xref>
) described the differences in terms of a “metric conception” for song as opposed to a “heterometric conception” for speech.</p>
<p>Consequently, the influence of the differential rhythm patterns must be parceled out (for example by adding a respective control condition) in order to draw firm conclusions on melody and prosody processing – which has not been done so far. This is also of specific relevance because the left and right hemispheres are known to have a relative preference for temporal (rhythm) and spectral (pitch) information, respectively (Zatorre,
<xref ref-type="bibr" rid="B77">2001</xref>
; Jamison et al.,
<xref ref-type="bibr" rid="B33">2006</xref>
; Obleser et al.,
<xref ref-type="bibr" rid="B48">2008</xref>
).</p>
<p>Furthermore, the methodological approaches of the reported fMRI studies were limited to univariate analyses (UVA), which mostly subtract two conditions and provide information about which extended brain regions have a greater mean magnitude of activation for one stimulus relative to another. This activation-based method relies on the assumption that a functional region extends over a number of voxels and usually applies spatial smoothing to increase statistical power (Haynes and Rees,
<xref ref-type="bibr" rid="B28">2005</xref>
,
<xref ref-type="bibr" rid="B29">2006</xref>
; Kriegeskorte et al.,
<xref ref-type="bibr" rid="B42">2006</xref>
).</p>
<p>Recent methodological developments in neuroimaging have brought up multivariate pattern analysis (MVPA; Haxby et al.,
<xref ref-type="bibr" rid="B27">2001</xref>
; Norman et al.,
<xref ref-type="bibr" rid="B47">2006</xref>
), which not only takes into account activation differences in single voxels but also analyses the information distributed across multiple voxels. In addition to regions that react more strongly to one condition than to another, as in UVA, MVPA can thus also identify brain areas in which a fine spatial pattern of activation across several voxels discriminates between experimental conditions (Kriegeskorte et al.,
<xref ref-type="bibr" rid="B42">2006</xref>
). Notably, this allows identifying the differential involvement of the same brain area in two conditions that would be cancelled out in conventional univariate subtraction methods (Okada et al.,
<xref ref-type="bibr" rid="B49">2010</xref>
).</p>
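The key difference between the two approaches can be illustrated with a toy example (our own sketch on simulated data; a nearest-centroid classifier with leave-one-out cross-validation stands in for whatever classifier an actual MVPA pipeline would use, and all names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: 40 trials x 27 voxels (a 3x3x3 "searchlight" neighborhood).
# The two conditions have the same mean activation level (so a univariate
# subtraction finds nothing) but opposite fine-grained spatial patterns.
n_trials, n_voxels = 40, 27
pattern = rng.normal(size=n_voxels)
pattern -= pattern.mean()                  # zero-mean: no net activation change
labels = np.repeat([0, 1], n_trials // 2)
signs = np.where(labels == 0, 1.0, -1.0)
data = signs[:, None] * pattern[None, :] + rng.normal(scale=0.8, size=(n_trials, n_voxels))

def loo_nearest_centroid(data, labels):
    """Leave-one-out classification of multivoxel patterns."""
    correct = 0
    for i in range(len(labels)):
        train = np.ones(len(labels), bool)
        train[i] = False                   # hold out trial i
        c0 = data[train & (labels == 0)].mean(axis=0)
        c1 = data[train & (labels == 1)].mean(axis=0)
        pred = int(np.linalg.norm(data[i] - c1) < np.linalg.norm(data[i] - c0))
        correct += pred == labels[i]
    return correct / len(labels)

acc = loo_nearest_centroid(data, labels)
mean_diff = abs(data[labels == 0].mean() - data[labels == 1].mean())
print(f"univariate mean difference: {mean_diff:.3f}")   # near zero
print(f"multivariate decoding accuracy: {acc:.2f}")     # well above chance (0.5)
```

The univariate contrast cancels out because the pattern has no net amplitude difference, while the multivoxel classifier recovers the condition labels reliably.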
<p>Univariate analyses and MVPA approaches complement each other in that weak extended activation differences will be boosted by the spatial smoothing employed by the UVA, whereas the MVPA will highlight non-directional differential activation patterns between two conditions. Consequently, the combination of the two methods should define neural networks in a more complete way than each of these methods alone. Note that a considerable overlap of the UVA and MVPA results is not unusual given that the similarity or difference of activation patterns is partly also determined by their spatial average activity level (for studies that explicitly isolate and compare multivariate and univariate contributions to functional brain mapping, see Kriegeskorte et al.,
<xref ref-type="bibr" rid="B42">2006</xref>
; Okada et al.,
<xref ref-type="bibr" rid="B49">2010</xref>
; Abrams et al.,
<xref ref-type="bibr" rid="B1">2011</xref>
).</p>
<p>The present study used UVA as well as MVPA in a hierarchical paradigm to isolate the neural correlates of the word- and pitch-based discrimination between song and speech, corrected for the rhythmic differences mentioned above. Song and speech stimuli were constructed so as to contain, first, all three features (words, pitch, and rhythm) of a full sung or spoken sentence; second, only the pitch and rhythm patterns; and third, as a control for pitch processing, only the rhythm (see Figure
<xref ref-type="fig" rid="F1">1</xref>
). To assure maximal comparability, these three levels were derived from one another, spoken and sung material was kept parallel, task demands were kept as minimal as possible, and the study focused purely on perception. The hierarchical structure of the paradigm allowed us to (i) subtract each level from the one above it to obtain brain areas involved only in word (first minus second level) and pitch (second minus third level) processing in either song or speech and (ii) compare these activation patterns.</p>
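The subtractive logic of the design can be summarized schematically (an illustrative sketch of which constituent each contrast isolates, not study code):

```python
# Each level of the hierarchy adds one constituent on top of the level below.
RHYTHM, PITCH, WORDS = "rhythm", "pitch", "words"

levels = {
    "wpr": {WORDS, PITCH, RHYTHM},   # full sung/spoken sentences
    "pr":  {PITCH, RHYTHM},          # hummed prosody / melody
    "r":   {RHYTHM},                 # rhythm-only control
}

# Subtracting adjacent levels leaves exactly one constituent:
word_contrast = levels["wpr"] - levels["pr"]    # first minus second level
pitch_contrast = levels["pr"] - levels["r"]     # second minus third level

print(word_contrast)   # {'words'}
print(pitch_contrast)  # {'pitch'}
```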
<fig id="F1" position="float">
<label>Figure 1</label>
<caption>
<p>
<bold>(A)</bold>
Experimental design. Six conditions in a subtractive hierarchy on three levels: first level: SPKwpr and SNGwpr (containing words, pitch pattern, and rhythm), second level: SPKpr and SNGpr (containing pitch pattern and rhythm), third level: SPKr and SNGr (containing rhythm).
<bold>(B)</bold>
Stimulus example.
<bold>(C)</bold>
Timeline of passive listening trial and task trial.</p>
</caption>
<graphic xlink:href="fpsyg-03-00076-g001"></graphic>
</fig>
<p>We hypothesized first that words (or text and lyrics) in both song and speech may recruit left frontal and temporal regions where lexical semantics and syntax are processed (for a review, see Bookheimer,
<xref ref-type="bibr" rid="B5">2002</xref>
; Friederici,
<xref ref-type="bibr" rid="B16">2002</xref>
,
<xref ref-type="bibr" rid="B17">2011</xref>
). Second, the neural activation of prosody in speech and melody in song may be driven by its acoustic, pitch-related properties that are known to evoke a relative predominance of right hemispheric involvement (Zatorre,
<xref ref-type="bibr" rid="B77">2001</xref>
; Jamison et al.,
<xref ref-type="bibr" rid="B33">2006</xref>
; Obleser et al.,
<xref ref-type="bibr" rid="B48">2008</xref>
). Furthermore, we expected differences with respect to gliding and discrete pitches to be reflected in particular brain signatures.</p>
</sec>
<sec sec-type="materials|methods" id="s1">
<title>Materials and Methods</title>
<sec>
<title>Participants</title>
<p>Twenty-one healthy German native speakers (14 male, mean age 24.2 years, SD: 2.4 years) participated in the study. None of the participants were professional musicians, nor had any of them learned to play a musical instrument for more than 2 years. All participants reported having normal hearing. Informed consent according to the Declaration of Helsinki was obtained from each participant prior to the experiment, which was approved by the local Ethics Committee.</p>
</sec>
<sec>
<title>Materials</title>
<p>The paradigm consisted of six conditions (with 36 stimuli each) arranged in a subtractive hierarchy: spoken (SPKwpr) and sung sentences (SNGwpr) containing words, pitch patterns, and rhythm; hummed speech prosody (SPKpr) and song melody (SNGpr) containing only pitch patterns and rhythm; and the pure speech or musical rhythm (SPKr and SNGr; see Figure
<xref ref-type="fig" rid="F1">1</xref>
A; sample stimuli will be provided on request).</p>
<p>The sentences for the “wpr” stimuli were six different statements, with a constant length of twelve syllables across all conditions. The text content (lyrics) was carefully selected to be (a) semantically plausible in both song and propositional speech (it would, for instance, be implausible to sing about taking out the trash) and (b) rhythmically compatible with the underlying melody in both its regular and irregular stress patterns (a stressed or prominent point in the melody never coincided with an unstressed word or syllable; see Figure
<xref ref-type="fig" rid="F1">1</xref>
B).</p>
<p>The six melodies for the sung (SNG) stimuli were composed according to the rules of western tonal music, in related major and minor keys, duple and triple meters, and with and without upbeat depending on the sentences. The lyric/tone relation was mostly syllabic. The melodies had to be highly distinguishable in key, rhythm and meter to make the task feasible (see below).</p>
<p>Melodies and lyrics were both unfamiliar to avoid activations due to long-term memory processes, automatic linguistic (lyric) priming, and task cueing.</p>
<p>The spoken and sung (wpr) and hummed (pr and r) stimuli were recorded by a trained female vocalist who was instructed to avoid the singer’s formant and ornaments such as vibrato in the sung stimuli, and to speak the spoken stimuli with emotionally neutral prosody and without rhythmic stress, in order to keep them as natural as possible.</p>
<p>For the rhythm (r) conditions, a hummed tone (G3) was recorded and cut to 170 ms with 20 ms fade-in and fade-out. Sequences of hummed tones were created by placing the tone onset on the vowel onset of each syllable of the original sung and spoken material, using Adobe Audition 3 (Adobe Systems). To ensure that the hummed stimuli (pr and r) matched the spoken and sung sentences (wpr) exactly in timing and pitch, they were adjusted using Celemony Melodyne Studio X (Celemony Software). All stimuli were cut to 3700 ms, normalized, and compressed using Adobe Audition 3 (Adobe Systems).</p>
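<p>The rhythm-sequence construction can be sketched programmatically. The following is a minimal illustration, not the actual pipeline (the stimuli were edited in Adobe Audition and Melodyne): a synthesized sine tone stands in for the recorded hummed G3 (≈196 Hz), and the sampling rate and vowel-onset times are assumed values.</p>

```python
import numpy as np

SR = 44100  # assumed sampling rate (Hz)

def make_tone(freq=196.0, dur=0.170, fade=0.020, sr=SR):
    """Stand-in for the 170 ms hummed G3 with 20 ms linear fade in/out."""
    t = np.arange(int(dur * sr)) / sr
    tone = np.sin(2 * np.pi * freq * t)
    n_fade = int(fade * sr)
    env = np.ones_like(tone)
    env[:n_fade] = np.linspace(0.0, 1.0, n_fade)   # 20 ms fade in
    env[-n_fade:] = np.linspace(1.0, 0.0, n_fade)  # 20 ms fade out
    return tone * env

def rhythm_sequence(vowel_onsets_s, total_dur=3.7, sr=SR):
    """Place one tone at each vowel onset of the original recording."""
    out = np.zeros(int(total_dur * sr))
    tone = make_tone()
    for onset in vowel_onsets_s:
        i = int(onset * sr)
        j = min(i + len(tone), len(out))
        out[i:j] += tone[: j - i]
    return out
```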
</sec>
<sec>
<title>Procedure</title>
<p>Across the experiment, each of the 36 stimuli was presented 6 times in a pseudo-random order (see below), interleaved with 20 baseline trials (no sound played) and 36 task trials (requiring a response), resulting in 272 stimulus presentations in total. To avoid adaptation effects, the pseudo-randomized stimulus list never allowed two consecutive trials to present the identical stimulus, stimuli with the same melody or text, or stimuli from the same hierarchical level (wpr, pr, r).</p>
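<p>Such adjacency constraints can be implemented with a simple rejection-sampling shuffle. This is only an illustrative sketch of a constrained randomization (the dictionary keys "item" and "level" are hypothetical labels, not taken from the study):</p>

```python
import random

def pseudo_randomize(trials, max_tries=10000, seed=0):
    """Shuffle until no two consecutive trials share an item or a level.

    Each trial is a dict such as {"item": 3, "level": "wpr"}; both keys
    are hypothetical labels for this sketch.
    """
    rng = random.Random(seed)
    for _ in range(max_tries):
        order = trials[:]
        rng.shuffle(order)
        if all(a["item"] != b["item"] and a["level"] != b["level"]
               for a, b in zip(order, order[1:])):
            return order
    raise RuntimeError("no valid ordering found")
```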
<p>The duration of the experiment was 34 min. For stimulus presentation and recording of behavioral responses, the software Presentation 13.0 (Neurobehavioral Systems, Inc., San Francisco, CA, USA) was used.</p>
<p>The participants were instructed to listen passively to the sounds, without being informed about the kind of stimuli (song or speech, melody or rhythm). To ensure the participants’ attention, 36 task trials required a same/different judgment on the stimulus of the preceding trial. The stimulus of the task trial (e.g., SNGwpr) was always taken from a different hierarchical level than the preceding stimulus (e.g., SNGr), and participants were required to indicate via button press whether the two stimuli were derived from the same original sentence or song. Prior to the experiment, participants received short training to ensure quick and accurate responses.</p>
<p>The timeline of a single passive listening trial (for sounds and silence) is depicted in Figure
<xref ref-type="fig" rid="F1">1</xref>
C: The duration of a passive listening trial was 7500 ms, during which the presentation of the stimulus (3700 ms; prompted by “+”) with a jittered onset delay of 0, 500, 1000, 1500, or 2000 ms was followed either by “…” or “!” shown for the remaining trial duration between 1800 and 3800 ms. The three dots (“…”) indicated that no task would follow. The exclamation mark (“!”) informed the listeners that instead, a task trial would follow, i.e., that they had to compare the next stimulus with the stimulus they had just heard.</p>
<p>The timeline of a task trial was analogous to a passive listening trial except for the last prompt, a “?” indicating the time to respond via button press (see Figure
<xref ref-type="fig" rid="F1">1</xref>
C). Trials were presented in a fast event-related design. Task trials did not enter data analysis.</p>
</sec>
<sec>
<title>Scanning</title>
<p>Functional magnetic resonance imaging was performed on a 3T Siemens Trio Tim scanner (Erlangen, Germany) at the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig. An anatomical T1-weighted 2D image (TR 1300 ms, TE 7.4 ms, flip angle 90°) was acquired with 36 transversal slices. During the subsequent functional scan, one series of 816 BOLD images was acquired continuously using a gradient echo-planar imaging sequence (TR 2500 ms, TE 30 ms, flip angle 90°, matrix 64 × 64). Thirty-six interleaved axial slices (3 mm × 3 mm × 3 mm voxel size, 1 mm interslice gap) were collected to cover the whole brain and the cerebellum. We verified that participants could hear the stimuli well in the scanner.</p>
</sec>
<sec>
<title>Data analysis</title>
<sec>
<title>Univariate analysis</title>
<p>Functional magnetic resonance imaging data were analyzed using SPM8 (Wellcome Department of Imaging Neuroscience). Images were realigned, unwarped using a fieldmap scan, spatially normalized into the MNI stereotactic space, and smoothed using a 6 mm FWHM Gaussian kernel. Low-frequency drifts were removed using a temporal high-pass filter with a cut-off of 128 s.</p>
<p>A general linear model using six regressors of interest (one for each of the six conditions) was estimated in each participant. Regressors were modeled using a boxcar function convolved with a hemodynamic response function to create predictor variables for analysis.</p>
<p>The no-stimulus (silent) trials served as an implicit baseline. Contrasts of all six conditions against the baseline were then submitted to a second-level within-subject analysis of variance. Specific contrasts were assessed to identify brain areas involved in word and pitch processing in spoken and sung stimuli in the human brain.</p>
<p>For word processing, the activations for the hummed stimuli were subtracted from the full spoken and sung stimuli separately for song and speech (SPKwpr–SPKpr and SNGwpr–SNGpr). To obtain differences in word processing between song and speech, these results were compared, i.e. [(SPKwpr–SPKpr)–(SNGwpr–SNGpr)] and [(SNGwpr–SNGpr)–(SPKwpr–SPKpr)].</p>
<p>To identify brain areas involved in the pure pitch processing in song and speech, the activation for the rhythm condition was subtracted from the pitch–rhythm condition (SPKpr–SPKr and SNGpr–SNGr) and compared, i.e. [(SPKpr–SPKr)–(SNGpr–SNGr)] and [(SNGpr–SNGr)–(SPKpr–SPKr)].</p>
<p>To identify brain areas that are commonly activated by the different parameters of speech and song, additional conjunction analyses were conducted for words, i.e. [(SPKwpr–SPKpr) ∩ (SNGwpr–SNGpr)] as well as pitch patterns, i.e. [(SPKpr–SPKr) ∩ (SNGpr–SNGr)] using the principle of the minimum statistic compared to the conjunction null (Nichols et al.,
<xref ref-type="bibr" rid="B46">2005</xref>
).</p>
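<p>The minimum-statistic conjunction compared to the conjunction null declares a voxel jointly active only if the weaker of the two contrasts survives the threshold. A minimal voxelwise sketch (the threshold value below is an arbitrary example, not one from the study):</p>

```python
import numpy as np

def conjunction_min(t_map_a, t_map_b, t_thresh):
    """Conjunction-null test (minimum statistic): a voxel is jointly
    significant only if the smaller of its two t values exceeds the
    threshold."""
    return np.minimum(t_map_a, t_map_b) > t_thresh
```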
</sec>
<sec>
<title>Multivariate pattern analysis</title>
<p>The MVPA was carried out using SPM8 (Wellcome Department of Imaging Neuroscience) and PyMVPA 0.4 (Hanke et al.,
<xref ref-type="bibr" rid="B26">2009</xref>
). Images were motion corrected before a temporal high-pass filter with a cut-off of 128 s was applied to remove low-frequency drifts. At this stage, no spatial smoothing and no normalization into MNI stereotactic space were performed, in order to preserve the fine spatial activity patterns. The contrasts of interest were the same as in the univariate analysis. MVPA was performed using a linear support vector machine (libsvm C-SVC; Chih-Chung Chang and Chih-Jen Lin). For every trial of each condition, one image was selected as input for the MVPA. To account for the hemodynamic delay, an image corresponding to 7 s after stimulus onset was obtained by linear interpolation of the fMRI time series. Data were divided into five subsets, each containing seven images per condition, to allow for cross validation. Each subset was independently
<italic>z</italic>
-scored relative to baseline condition. We used a searchlight approach (Kriegeskorte et al.,
<xref ref-type="bibr" rid="B42">2006</xref>
) with a radius of 6 mm to map brain regions which were differentially activated during both conditions of interest. This resulted in accuracy maps of the whole brain. The resulting images were spatially normalized into the MNI stereotactic space, and smoothed using a 6 mm FWHM Gaussian kernel.</p>
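<p>The per-sphere computation of the searchlight can be sketched as cross-validated linear classification. The study used PyMVPA with libsvm; the sketch below substitutes scikit-learn for illustration (an assumption, not the original code) and computes a single sphere’s decoding accuracy from two condition-wise pattern matrices:</p>

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def searchlight_accuracy(X_a, X_b, n_folds=5):
    """Cross-validated decoding accuracy for one searchlight sphere.

    X_a, X_b: (n_images, n_voxels) patterns of the two conditions,
    assumed to be already z-scored per cross-validation subset.
    """
    X = np.vstack([X_a, X_b])
    y = np.r_[np.zeros(len(X_a)), np.ones(len(X_b))]
    clf = SVC(kernel="linear", C=1.0)  # linear SVM, as with libsvm C-SVC
    return cross_val_score(clf, X, y, cv=n_folds).mean()
```

<p>Repeating this computation for a 6 mm sphere centered on every voxel yields the whole-brain accuracy map.</p>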
<p>Accuracy maps of all subjects were then submitted to a second-level group analysis comparing the mean accuracy for each voxel to chance level (50%) by means of one-sample
<italic>t</italic>
-tests.</p>
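<p>This group statistic amounts to a voxelwise one-sample t-test of the subjects’ accuracy maps against the 50% chance level; a minimal sketch (the array shape and the one-tailed direction are assumptions for illustration):</p>

```python
import numpy as np
from scipy import stats

def group_ttest_vs_chance(acc_maps, chance=0.5):
    """Voxelwise one-sample t-test of accuracy maps against chance.

    acc_maps: (n_subjects, n_voxels) searchlight accuracies.
    Returns t values and one-tailed p values (above-chance decoding).
    """
    t, p_two = stats.ttest_1samp(acc_maps, popmean=chance, axis=0)
    p_one = np.where(t > 0, p_two / 2.0, 1.0 - p_two / 2.0)
    return t, p_one
```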
<p>In general, the analysis of multivariate data still poses methodological challenges, specifically regarding the best way of performing group statistics.
<italic>T</italic>
-tests on accuracy maps are common practice (Haxby et al.,
<xref ref-type="bibr" rid="B27">2001</xref>
; Tusche et al.,
<xref ref-type="bibr" rid="B69">2010</xref>
; Bogler et al.,
<xref ref-type="bibr" rid="B4">2011</xref>
; Kahnt et al.,
<xref ref-type="bibr" rid="B35">2011</xref>
; Bode et al.,
<xref ref-type="bibr" rid="B2">2012</xref>
) although accuracies are not necessarily normally distributed. Non-parametric tests, and permutation tests in particular, have a sounder theoretical justification but remain computationally expensive.</p>
<p>All reported group SPM statistics for the univariate and the multivariate analyses were thresholded at
<italic>p</italic>
(cluster-size corrected) <0.05 in combination with
<italic>p</italic>
(voxel-level uncorrected) <0.001. The extent of activation is indicated by the number of suprathreshold voxels per cluster.</p>
<p>Localization of brain areas was done with reference to the Juelich Histological Atlas, Harvard-Oxford (Sub)Cortical Structural Atlas and activity within the cerebellum was determined with reference to the atlas of Schmahmann et al. (
<xref ref-type="bibr" rid="B61">2000</xref>
).</p>
</sec>
<sec>
<title>Region of interest analysis</title>
<p>To test for the lateralization of effects and specify differences between song and speech in the inferior frontal gyrus (IFG) and intraparietal sulcus (IPS), regions of interest (ROIs) were defined. According to the main activation peaks found in the whole brain analysis, ROIs for left and right BA 47 were taken from the Brodmann Map using the template implemented in MRIcron
<xref ref-type="fn" rid="fn1">
<sup>1</sup>
</xref>
. ROIs for the left and right IPS (hIP3) were taken from the SPM-implemented anatomy toolbox (Eickhoff et al.,
<xref ref-type="bibr" rid="B13">2005</xref>
). Contrast values from the uni- (beta values) and multivariate (accuracy values) analyses were extracted for each participant in each ROI by means of MarsBar
<xref ref-type="fn" rid="fn2">
<sup>2</sup>
</xref>
. Within-subject analyses of variance (ANOVA) and paired-sample
<italic>t</italic>
-tests were performed for each ROI using PASW Statistics 18.0. Normal distribution of the accuracies was verified in all ROIs using Kolmogorov–Smirnov tests (
<italic>p</italic>
’s > 0.643).</p>
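<p>The ROI comparison reduces to paired-sample tests on per-subject values. A sketch of the lateralization test follows; the Kolmogorov–Smirnov check mirrors the normality verification reported above but is applied here to the paired differences, which is a simplifying assumption of this illustration:</p>

```python
import numpy as np
from scipy import stats

def roi_lateralization(left_vals, right_vals):
    """Paired t-test on per-subject ROI values (e.g. BA 47 accuracies)."""
    diffs = np.asarray(left_vals) - np.asarray(right_vals)
    # KS test of the z-scored differences against a standard normal,
    # as a rough normality check before the parametric test.
    ks_p = stats.kstest(stats.zscore(diffs), "norm").pvalue
    t, p = stats.ttest_rel(left_vals, right_vals)
    return t, p, ks_p
```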
</sec>
</sec>
</sec>
<sec>
<title>Results</title>
<sec>
<title>Words in song and speech</title>
<sec>
<title>Univariate analysis</title>
<p>The contrasts of spoken words over prosodic pitch–rhythm patterns (SPKwpr–SPKpr) and sung words over musical pitch–rhythm patterns (SNGwpr–SNGpr) showed similar core regions of activation (with more extended cluster activations for the sung stimuli) in the superior temporal gyrus and sulcus (STG/STS) bilaterally and, for SNGwpr–SNGpr, additionally in the left medial geniculate body (see Table
<xref ref-type="table" rid="T1">1</xref>
; Figure
<xref ref-type="fig" rid="F2">2</xref>
, top row for details).</p>
<table-wrap id="T1" position="float">
<label>Table 1</label>
<caption>
<p>
<bold>Brain areas involved in the processing of words in song and speech</bold>
.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th colspan="9" align="left" rowspan="1">Words</th>
</tr>
<tr>
<th align="left" rowspan="1" colspan="1">Region</th>
<th align="left" rowspan="1" colspan="1">BA</th>
<th align="left" rowspan="1" colspan="1">Hem</th>
<th align="left" rowspan="1" colspan="1">Cluster extent</th>
<th colspan="3" align="center" rowspan="1">MNI coordinates
<hr></hr>
</th>
<th align="left" rowspan="1" colspan="1">
<italic>Z</italic>
value</th>
<th align="left" rowspan="1" colspan="1">Cluster
<italic>p</italic>
(cor)</th>
</tr>
<tr>
<th align="left" rowspan="1" colspan="1"></th>
<th align="left" rowspan="1" colspan="1"></th>
<th align="left" rowspan="1" colspan="1"></th>
<th align="left" rowspan="1" colspan="1"></th>
<th align="left" rowspan="1" colspan="1">
<italic>x</italic>
</th>
<th align="left" rowspan="1" colspan="1">
<italic>y</italic>
</th>
<th align="left" rowspan="1" colspan="1">
<italic>z</italic>
</th>
<th colspan="1" align="left" rowspan="1"></th>
</tr>
</thead>
<tbody>
<tr>
<td colspan="9" align="left" style="background-color:Darkgray;" rowspan="1">
<bold>SPEECH</bold>
</td>
</tr>
<tr>
<td colspan="9" align="left" rowspan="1">
<bold>SPKwpr > SPKpr (UVA)</bold>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">STG/STS</td>
<td align="left" rowspan="1" colspan="1">22</td>
<td align="left" rowspan="1" colspan="1">L</td>
<td align="left" rowspan="1" colspan="1">1124</td>
<td align="left" rowspan="1" colspan="1">−36</td>
<td align="left" rowspan="1" colspan="1">−31</td>
<td align="left" rowspan="1" colspan="1">10</td>
<td align="left" rowspan="1" colspan="1">Inf</td>
<td align="left" rowspan="1" colspan="1">0.000</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">STG/STS</td>
<td align="left" rowspan="1" colspan="1">22</td>
<td align="left" rowspan="1" colspan="1">R</td>
<td align="left" rowspan="1" colspan="1">757</td>
<td align="left" rowspan="1" colspan="1">42</td>
<td align="left" rowspan="1" colspan="1">−25</td>
<td align="left" rowspan="1" colspan="1">10</td>
<td align="left" rowspan="1" colspan="1">Inf</td>
<td align="left" rowspan="1" colspan="1">0.000</td>
</tr>
<tr>
<td colspan="9" align="left" rowspan="1">
<bold>SPKwpr vs. SPKpr (MVPA)</bold>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">STG/STS</td>
<td align="left" rowspan="1" colspan="1">22</td>
<td align="left" rowspan="1" colspan="1">L</td>
<td align="left" rowspan="1" colspan="1">3624</td>
<td align="left" rowspan="1" colspan="1">−66</td>
<td align="left" rowspan="1" colspan="1">−16</td>
<td align="left" rowspan="1" colspan="1">4</td>
<td align="left" rowspan="1" colspan="1">7.65</td>
<td align="left" rowspan="1" colspan="1">0.000</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Premotor cortex</td>
<td align="left" rowspan="1" colspan="1">6</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">−45</td>
<td align="left" rowspan="1" colspan="1">−4</td>
<td align="left" rowspan="1" colspan="1">49</td>
<td align="left" rowspan="1" colspan="1">4.66</td>
<td align="left" rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Cerebellum crus I</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">158</td>
<td align="left" rowspan="1" colspan="1">−45</td>
<td align="left" rowspan="1" colspan="1">−67</td>
<td align="left" rowspan="1" colspan="1">−20</td>
<td align="left" rowspan="1" colspan="1">5.71</td>
<td align="left" rowspan="1" colspan="1">0.000</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Cerebellum VI lobule</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">−27</td>
<td align="left" rowspan="1" colspan="1">−58</td>
<td align="left" rowspan="1" colspan="1">−23</td>
<td align="left" rowspan="1" colspan="1">4.33</td>
<td align="left" rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">IFG</td>
<td align="left" rowspan="1" colspan="1">47</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">146</td>
<td align="left" rowspan="1" colspan="1">−42</td>
<td align="left" rowspan="1" colspan="1">26</td>
<td align="left" rowspan="1" colspan="1">−11</td>
<td align="left" rowspan="1" colspan="1">4.18</td>
<td align="left" rowspan="1" colspan="1">0.000</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Visual cortex V1</td>
<td align="left" rowspan="1" colspan="1">17</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">87</td>
<td align="left" rowspan="1" colspan="1">−15</td>
<td align="left" rowspan="1" colspan="1">−109</td>
<td align="left" rowspan="1" colspan="1">4</td>
<td align="left" rowspan="1" colspan="1">4.13</td>
<td align="left" rowspan="1" colspan="1">0.010</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">STG/STS</td>
<td align="left" rowspan="1" colspan="1">22</td>
<td align="left" rowspan="1" colspan="1">R</td>
<td align="left" rowspan="1" colspan="1">3581</td>
<td align="left" rowspan="1" colspan="1">66</td>
<td align="left" rowspan="1" colspan="1">−7</td>
<td align="left" rowspan="1" colspan="1">1</td>
<td align="left" rowspan="1" colspan="1">7.37</td>
<td align="left" rowspan="1" colspan="1">0.000</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Primary motor cortex</td>
<td align="left" rowspan="1" colspan="1">4a</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">54</td>
<td align="left" rowspan="1" colspan="1">−7</td>
<td align="left" rowspan="1" colspan="1">43</td>
<td align="left" rowspan="1" colspan="1">5.45</td>
<td align="left" rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Supplementary motor area</td>
<td align="left" rowspan="1" colspan="1">6</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">352</td>
<td align="left" rowspan="1" colspan="1">0</td>
<td align="left" rowspan="1" colspan="1">2</td>
<td align="left" rowspan="1" colspan="1">61</td>
<td align="left" rowspan="1" colspan="1">4.95</td>
<td align="left" rowspan="1" colspan="1">0.000</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Superior parietal lobule</td>
<td align="left" rowspan="1" colspan="1">7</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">115</td>
<td align="left" rowspan="1" colspan="1">3</td>
<td align="left" rowspan="1" colspan="1">−70</td>
<td align="left" rowspan="1" colspan="1">25</td>
<td align="left" rowspan="1" colspan="1">3.94</td>
<td align="left" rowspan="1" colspan="1">0.002</td>
</tr>
<tr>
<td colspan="9" align="left" style="background-color:Darkgray;" rowspan="1">
<bold>SONG</bold>
</td>
</tr>
<tr>
<td colspan="9" align="left" rowspan="1">
<bold>SNGwpr > SNGpr (UVA)</bold>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">STG/STS</td>
<td align="left" rowspan="1" colspan="1">22</td>
<td align="left" rowspan="1" colspan="1">L</td>
<td align="left" rowspan="1" colspan="1">1486</td>
<td align="left" rowspan="1" colspan="1">−60</td>
<td align="left" rowspan="1" colspan="1">−10</td>
<td align="left" rowspan="1" colspan="1">−2</td>
<td align="left" rowspan="1" colspan="1">Inf</td>
<td align="left" rowspan="1" colspan="1">0.000</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Thalamus (medial geniculate body)</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">194</td>
<td align="left" rowspan="1" colspan="1">−12</td>
<td align="left" rowspan="1" colspan="1">−28</td>
<td align="left" rowspan="1" colspan="1">−2</td>
<td align="left" rowspan="1" colspan="1">5.62</td>
<td align="left" rowspan="1" colspan="1">0.000</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">STG/STS</td>
<td align="left" rowspan="1" colspan="1">22</td>
<td align="left" rowspan="1" colspan="1">R</td>
<td align="left" rowspan="1" colspan="1">1112</td>
<td align="left" rowspan="1" colspan="1">42</td>
<td align="left" rowspan="1" colspan="1">−25</td>
<td align="left" rowspan="1" colspan="1">10</td>
<td align="left" rowspan="1" colspan="1">Inf</td>
<td align="left" rowspan="1" colspan="1">0.000</td>
</tr>
<tr>
<td colspan="9" align="left" rowspan="1">
<bold>SNGwpr vs. SNGpr (MVPA)</bold>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">STG/STS</td>
<td align="left" rowspan="1" colspan="1">22</td>
<td align="left" rowspan="1" colspan="1">L</td>
<td align="left" rowspan="1" colspan="1">2663</td>
<td align="left" rowspan="1" colspan="1">−57</td>
<td align="left" rowspan="1" colspan="1">−13</td>
<td align="left" rowspan="1" colspan="1">4</td>
<td align="left" rowspan="1" colspan="1">7.67</td>
<td align="left" rowspan="1" colspan="1">0.000</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Premotor cortex</td>
<td align="left" rowspan="1" colspan="1">6</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">−45</td>
<td align="left" rowspan="1" colspan="1">−10</td>
<td align="left" rowspan="1" colspan="1">49</td>
<td align="left" rowspan="1" colspan="1">4.34</td>
<td align="left" rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">STG/STS</td>
<td align="left" rowspan="1" colspan="1">22</td>
<td align="left" rowspan="1" colspan="1">R</td>
<td align="left" rowspan="1" colspan="1">2486</td>
<td align="left" rowspan="1" colspan="1">51</td>
<td align="left" rowspan="1" colspan="1">−7</td>
<td align="left" rowspan="1" colspan="1">7</td>
<td align="left" rowspan="1" colspan="1">7.82</td>
<td align="left" rowspan="1" colspan="1">0.000</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">IFG</td>
<td align="left" rowspan="1" colspan="1">47</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">45</td>
<td align="left" rowspan="1" colspan="1">20</td>
<td align="left" rowspan="1" colspan="1">−11</td>
<td align="left" rowspan="1" colspan="1">4.37</td>
<td align="left" rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Frontal operculum</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">36</td>
<td align="left" rowspan="1" colspan="1">23</td>
<td align="left" rowspan="1" colspan="1">4</td>
<td align="left" rowspan="1" colspan="1">4.00</td>
<td align="left" rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Primary somatosensory cortex</td>
<td align="left" rowspan="1" colspan="1">1</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">164</td>
<td align="left" rowspan="1" colspan="1">57</td>
<td align="left" rowspan="1" colspan="1">−7</td>
<td align="left" rowspan="1" colspan="1">40</td>
<td align="left" rowspan="1" colspan="1">5.47</td>
<td align="left" rowspan="1" colspan="1">0.000</td>
</tr>
<tr>
<td colspan="9" align="left" style="background-color:Darkgray;" rowspan="1">
<bold>CONJUNCTION SPKwpr–SPKpr ∩ SNGwpr–SNGpr</bold>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">STG/STS (planum temporale)</td>
<td align="left" rowspan="1" colspan="1">22</td>
<td align="left" rowspan="1" colspan="1">L</td>
<td align="left" rowspan="1" colspan="1">1065</td>
<td align="left" rowspan="1" colspan="1">−36</td>
<td align="left" rowspan="1" colspan="1">−31</td>
<td align="left" rowspan="1" colspan="1">10</td>
<td align="left" rowspan="1" colspan="1">Inf</td>
<td align="left" rowspan="1" colspan="1">0.000</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">STG/STS (planum temporale)</td>
<td align="left" rowspan="1" colspan="1">22</td>
<td align="left" rowspan="1" colspan="1">R</td>
<td align="left" rowspan="1" colspan="1">700</td>
<td align="left" rowspan="1" colspan="1">42</td>
<td align="left" rowspan="1" colspan="1">−25</td>
<td align="left" rowspan="1" colspan="1">10</td>
<td align="left" rowspan="1" colspan="1">Inf</td>
<td align="left" rowspan="1" colspan="1">0.000</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p>
<italic>All
<italic>p</italic>
(cluster-size corrected) <0.05 in combination with
<italic>p</italic>
(uncorrected) <0.001</italic>
.</p>
</table-wrap-foot>
</table-wrap>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption>
<p>
<bold>Brain regions that distinguish between words and pitch–rhythm patterns in song and speech [first vs. second level: SPKwpr vs. SPKpr and SNGwpr vs. SNGpr;
<italic>p</italic>
(cluster-size corrected) <0.05 in combination with
<italic>p</italic>
(uncorrected) <0.001]</bold>
. Bargraphs depict beta values (UVA) and accuracy values (MVPA) of the shown contrasts extracted from left and right BA 47. Significant differences between conditions are indicated by an asterisk (*
<italic>p</italic>
 < 0.05). Colour scales on the right indicate
<italic>t</italic>
-values for each row. IFG, inferior frontal gyrus.</p>
</caption>
<graphic xlink:href="fpsyg-03-00076-g002"></graphic>
</fig>
<p>The overlap of these activations was nearly complete, as evidenced by a conjunction analysis and by the absence of significant differences in the direct comparison of the two contrasts, i.e. [(SPKwpr–SPKpr)–(SNGwpr–SNGpr)] and [(SNGwpr–SNGpr)–(SPKwpr–SPKpr)].</p>
</sec>
<sec>
<title>Multivariate pattern analysis</title>
<p>The MVPA revealed brain regions that distinguish significantly between words and pitch–rhythm patterns for both song (SNGwpr vs. SNGpr) and speech (SPKwpr vs. SPKpr) in the STG/STS and premotor cortex bilaterally (extending into the motor and somatosensory cortex; see Table
<xref ref-type="table" rid="T1">1</xref>
for details). For speech, in the SPKwpr vs. SPKpr contrast, additional information patterns were found in the supplementary motor area (SMA), the cerebellum, the pars orbitalis of the left IFG (BA 47), the right superior parietal lobule (BA 7), and the visual cortex (BA 17). For song, the SNGwpr vs. SNGpr contrast showed additional peaks in the pars orbitalis of the right IFG (BA 47) and the adjacent frontal operculum (see Figure
<xref ref-type="fig" rid="F2">2</xref>
, bottom row).</p>
<p>Interestingly, the results were suggestive of a different lateralization of IFG involvement in spoken and sung words. To further explore this observation, accuracy values were extracted from anatomically defined ROIs in the left and right BA 47 (see
<xref ref-type="sec" rid="s1">Materials and Methods</xref>
) and subjected to an ANOVA for repeated measures with the factors hemisphere (left/right) and modality (speech/song). This analysis showed a significant interaction of hemisphere × modality [
<italic>F</italic>
(1,20) = 5.049,
<italic>p</italic>
 < 0.036], indicating that the left and right BA 47 were differentially involved in discriminating words from pitch in song and speech. Subsequent
<italic>t</italic>
-tests for paired samples revealed that in song, right BA 47 showed predominance over left BA 47 [
<italic>t</italic>
(20) = −2.485,
<italic>p</italic>
 < 0.022], whereas the nominally opposite lateralization in speech fell short of significance (
<italic>p</italic>
 > 0.05). Moreover, left BA 47 showed predominance for word-pitch discrimination in speech compared to song [
<italic>t</italic>
(20) = 2.453,
<italic>p</italic>
 < 0.023; see bar graphs in Figure
<xref ref-type="fig" rid="F2">2</xref>
].</p>
</sec>
</sec>
<sec>
<title>Pitch patterns in song and speech</title>
<sec>
<title>Univariate analysis</title>
<p>Activation for processing pitch information was revealed in the contrast of prosodic pitch–rhythm patterns vs. prosodic rhythm patterns (SPKpr–SPKr) for speech and in the contrast musical pitch–rhythm patterns vs. musical rhythm patterns (SNGpr–SNGr) for song (Table
<xref ref-type="table" rid="T2">2</xref>
; Figure
<xref ref-type="fig" rid="F3">3</xref>
, top row). Note that these contrasts allow for investigating pitch in song and speech corrected for differential rhythm patterns. Both showed activations in the STG/STS bilaterally and in the premotor cortex bilaterally. For speech, the prosodic pitch patterns (SPKpr–SPKr) showed further activations in the pars orbitalis of the left IFG (BA 47) and the SMA.</p>
<table-wrap id="T2" position="float">
<label>Table 2</label>
<caption>
<p>
<bold>Brain areas involved in the processing of pitch patterns in song and speech</bold>
.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th colspan="9" align="left" rowspan="1">Pitch</th>
</tr>
<tr>
<th align="left" rowspan="1" colspan="1">Region</th>
<th align="left" rowspan="1" colspan="1">BA</th>
<th align="left" rowspan="1" colspan="1">Hem</th>
<th align="left" rowspan="1" colspan="1">Cluster extent</th>
<th colspan="3" align="center" rowspan="1">MNI coordinates
<hr></hr>
</th>
<th align="left" rowspan="1" colspan="1">
<italic>Z</italic>
value</th>
<th align="left" rowspan="1" colspan="1">Cluster
<italic>p</italic>
(cor)</th>
</tr>
<tr>
<th align="left" rowspan="1" colspan="1"></th>
<th align="left" rowspan="1" colspan="1"></th>
<th align="left" rowspan="1" colspan="1"></th>
<th align="left" rowspan="1" colspan="1"></th>
<th align="left" rowspan="1" colspan="1">
<italic>x</italic>
</th>
<th align="left" rowspan="1" colspan="1">
<italic>y</italic>
</th>
<th align="left" rowspan="1" colspan="1">
<italic>z</italic>
</th>
<th colspan="1" align="left" rowspan="1"></th>
</tr>
</thead>
<tbody>
<tr>
<td colspan="9" align="left" style="background-color:Darkgray;" rowspan="1">
<bold>SPEECH</bold>
</td>
</tr>
<tr>
<td colspan="9" align="left" rowspan="1">
<bold>SPKpr > SPKr (UVA)</bold>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">STG/STS</td>
<td align="left" rowspan="1" colspan="1">22</td>
<td align="left" rowspan="1" colspan="1">L</td>
<td align="left" rowspan="1" colspan="1">802</td>
<td align="left" rowspan="1" colspan="1">−54</td>
<td align="left" rowspan="1" colspan="1">−4</td>
<td align="left" rowspan="1" colspan="1">−2</td>
<td align="left" rowspan="1" colspan="1">Inf</td>
<td align="left" rowspan="1" colspan="1">0.000</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Premotor cortex</td>
<td align="left" rowspan="1" colspan="1">6</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">99</td>
<td align="left" rowspan="1" colspan="1">−54</td>
<td align="left" rowspan="1" colspan="1">−7</td>
<td align="left" rowspan="1" colspan="1">49</td>
<td align="left" rowspan="1" colspan="1">5.23</td>
<td align="left" rowspan="1" colspan="1">0.007</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">IFG</td>
<td align="left" rowspan="1" colspan="1">47</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">86</td>
<td align="left" rowspan="1" colspan="1">−36</td>
<td align="left" rowspan="1" colspan="1">32</td>
<td align="left" rowspan="1" colspan="1">−5</td>
<td align="left" rowspan="1" colspan="1">4.74</td>
<td align="left" rowspan="1" colspan="1">0.014</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">STG/STS</td>
<td align="left" rowspan="1" colspan="1">22</td>
<td align="left" rowspan="1" colspan="1">R</td>
<td align="left" rowspan="1" colspan="1">993</td>
<td align="left" rowspan="1" colspan="1">60</td>
<td align="left" rowspan="1" colspan="1">2</td>
<td align="left" rowspan="1" colspan="1">−5</td>
<td align="left" rowspan="1" colspan="1">Inf</td>
<td align="left" rowspan="1" colspan="1">0.000</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Premotor cortex</td>
<td align="left" rowspan="1" colspan="1">6</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">101</td>
<td align="left" rowspan="1" colspan="1">54</td>
<td align="left" rowspan="1" colspan="1">2</td>
<td align="left" rowspan="1" colspan="1">43</td>
<td align="left" rowspan="1" colspan="1">7.03</td>
<td align="left" rowspan="1" colspan="1">0.007</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Supplementary motor area</td>
<td align="left" rowspan="1" colspan="1">6</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">60</td>
<td align="left" rowspan="1" colspan="1">3</td>
<td align="left" rowspan="1" colspan="1">5</td>
<td align="left" rowspan="1" colspan="1">64</td>
<td align="left" rowspan="1" colspan="1">4.45</td>
<td align="left" rowspan="1" colspan="1">0.054</td>
</tr>
<tr>
<td colspan="9" align="left" rowspan="1">
<bold>SPKpr vs. SPKr (MVPA)</bold>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">STG/STS</td>
<td align="left" rowspan="1" colspan="1">22</td>
<td align="left" rowspan="1" colspan="1">L</td>
<td align="left" rowspan="1" colspan="1">1664</td>
<td align="left" rowspan="1" colspan="1">−57</td>
<td align="left" rowspan="1" colspan="1">−10</td>
<td align="left" rowspan="1" colspan="1">−2</td>
<td align="left" rowspan="1" colspan="1">6.75</td>
<td align="left" rowspan="1" colspan="1">0.000</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">IFG</td>
<td align="left" rowspan="1" colspan="1">45</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">−48</td>
<td align="left" rowspan="1" colspan="1">23</td>
<td align="left" rowspan="1" colspan="1">7</td>
<td align="left" rowspan="1" colspan="1">4.06</td>
<td align="left" rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Primary somatosensory cortex</td>
<td align="left" rowspan="1" colspan="1">2</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">181</td>
<td align="left" rowspan="1" colspan="1">−51</td>
<td align="left" rowspan="1" colspan="1">−19</td>
<td align="left" rowspan="1" colspan="1">46</td>
<td align="left" rowspan="1" colspan="1">4.82</td>
<td align="left" rowspan="1" colspan="1">0.000</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">STG/STS</td>
<td align="left" rowspan="1" colspan="1">22</td>
<td align="left" rowspan="1" colspan="1">R</td>
<td align="left" rowspan="1" colspan="1">1512</td>
<td align="left" rowspan="1" colspan="1">63</td>
<td align="left" rowspan="1" colspan="1">−19</td>
<td align="left" rowspan="1" colspan="1">−5</td>
<td align="left" rowspan="1" colspan="1">6.93</td>
<td align="left" rowspan="1" colspan="1">0.000</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Premotor cortex</td>
<td align="left" rowspan="1" colspan="1">6</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">152</td>
<td align="left" rowspan="1" colspan="1">54</td>
<td align="left" rowspan="1" colspan="1">2</td>
<td align="left" rowspan="1" colspan="1">46</td>
<td align="left" rowspan="1" colspan="1">4.99</td>
<td align="left" rowspan="1" colspan="1">0.000</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Supplementary motor area</td>
<td align="left" rowspan="1" colspan="1">6</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">75</td>
<td align="left" rowspan="1" colspan="1">0</td>
<td align="left" rowspan="1" colspan="1">2</td>
<td align="left" rowspan="1" colspan="1">67</td>
<td align="left" rowspan="1" colspan="1">4.20</td>
<td align="left" rowspan="1" colspan="1">0.022</td>
</tr>
<tr>
<td colspan="9" align="left" style="background-color:Darkgray;" rowspan="1">
<bold>SONG</bold>
</td>
</tr>
<tr>
<td colspan="9" align="left" rowspan="1">
<bold>SNGpr > SNGr (UVA)</bold>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">STG/STS</td>
<td align="left" rowspan="1" colspan="1">22</td>
<td align="left" rowspan="1" colspan="1">L</td>
<td align="left" rowspan="1" colspan="1">866</td>
<td align="left" rowspan="1" colspan="1">−54</td>
<td align="left" rowspan="1" colspan="1">−4</td>
<td align="left" rowspan="1" colspan="1">−2</td>
<td align="left" rowspan="1" colspan="1">Inf</td>
<td align="left" rowspan="1" colspan="1">0.000</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Anterior cingulate cortex</td>
<td align="left" rowspan="1" colspan="1">24</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">171</td>
<td align="left" rowspan="1" colspan="1">−3</td>
<td align="left" rowspan="1" colspan="1">−7</td>
<td align="left" rowspan="1" colspan="1">43</td>
<td align="left" rowspan="1" colspan="1">5.19</td>
<td align="left" rowspan="1" colspan="1">0.000</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Premotor cortex</td>
<td align="left" rowspan="1" colspan="1">6</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">110</td>
<td align="left" rowspan="1" colspan="1">−51</td>
<td align="left" rowspan="1" colspan="1">−7</td>
<td align="left" rowspan="1" colspan="1">52</td>
<td align="left" rowspan="1" colspan="1">4.64</td>
<td align="left" rowspan="1" colspan="1">0.004</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Lat. occip. cortex (sup. division)</td>
<td align="left" rowspan="1" colspan="1">18</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">105</td>
<td align="left" rowspan="1" colspan="1">−27</td>
<td align="left" rowspan="1" colspan="1">−82</td>
<td align="left" rowspan="1" colspan="1">19</td>
<td align="left" rowspan="1" colspan="1">4.16</td>
<td align="left" rowspan="1" colspan="1">0.006</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Parietal lobe. precuneus. WM</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">88</td>
<td align="left" rowspan="1" colspan="1">−21</td>
<td align="left" rowspan="1" colspan="1">−43</td>
<td align="left" rowspan="1" colspan="1">37</td>
<td align="left" rowspan="1" colspan="1">4.54</td>
<td align="left" rowspan="1" colspan="1">0.013</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Anterior intraparietal sulcus</td>
<td align="left" rowspan="1" colspan="1">(hIP3)</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">−24</td>
<td align="left" rowspan="1" colspan="1">−61</td>
<td align="left" rowspan="1" colspan="1">49</td>
<td align="left" rowspan="1" colspan="1">3.40</td>
<td align="left" rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Cerebellum VI lobule</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">1451</td>
<td align="left" rowspan="1" colspan="1">−27</td>
<td align="left" rowspan="1" colspan="1">−61</td>
<td align="left" rowspan="1" colspan="1">−26</td>
<td align="left" rowspan="1" colspan="1">6.50</td>
<td align="left" rowspan="1" colspan="1">0.000</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Cerebellum VI lobule</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">R</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">18</td>
<td align="left" rowspan="1" colspan="1">−70</td>
<td align="left" rowspan="1" colspan="1">−6</td>
<td align="left" rowspan="1" colspan="1">5.53</td>
<td align="left" rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">STG/STS</td>
<td align="left" rowspan="1" colspan="1">22</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">1690</td>
<td align="left" rowspan="1" colspan="1">54</td>
<td align="left" rowspan="1" colspan="1">−7</td>
<td align="left" rowspan="1" colspan="1">1</td>
<td align="left" rowspan="1" colspan="1">Inf</td>
<td align="left" rowspan="1" colspan="1">0.000</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Caudate nucleus</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">18</td>
<td align="left" rowspan="1" colspan="1">8</td>
<td align="left" rowspan="1" colspan="1">10</td>
<td align="left" rowspan="1" colspan="1">4.75</td>
<td align="left" rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">IFG</td>
<td align="left" rowspan="1" colspan="1">47</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">48</td>
<td align="left" rowspan="1" colspan="1">26</td>
<td align="left" rowspan="1" colspan="1">−5</td>
<td align="left" rowspan="1" colspan="1">4.09</td>
<td align="left" rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Visual cortex V1</td>
<td align="left" rowspan="1" colspan="1">17</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">113</td>
<td align="left" rowspan="1" colspan="1">3</td>
<td align="left" rowspan="1" colspan="1">−88</td>
<td align="left" rowspan="1" colspan="1">−2</td>
<td align="left" rowspan="1" colspan="1">3.86</td>
<td align="left" rowspan="1" colspan="1">0.004</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Premotor cortex</td>
<td align="left" rowspan="1" colspan="1">6</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">90</td>
<td align="left" rowspan="1" colspan="1">54</td>
<td align="left" rowspan="1" colspan="1">2</td>
<td align="left" rowspan="1" colspan="1">43</td>
<td align="left" rowspan="1" colspan="1">6.55</td>
<td align="left" rowspan="1" colspan="1">0.011</td>
</tr>
<tr>
<td colspan="9" align="left" rowspan="1">
<bold>SNGpr vs. SNGr (MVPA)</bold>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">STG/STS</td>
<td align="left" rowspan="1" colspan="1">22</td>
<td align="left" rowspan="1" colspan="1">L</td>
<td align="left" rowspan="1" colspan="1">2223</td>
<td align="left" rowspan="1" colspan="1">−57</td>
<td align="left" rowspan="1" colspan="1">−4</td>
<td align="left" rowspan="1" colspan="1">−5</td>
<td align="left" rowspan="1" colspan="1">7.26</td>
<td align="left" rowspan="1" colspan="1">0.000</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Primary motor cortex</td>
<td align="left" rowspan="1" colspan="1">4a</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">219</td>
<td align="left" rowspan="1" colspan="1">−48</td>
<td align="left" rowspan="1" colspan="1">−10</td>
<td align="left" rowspan="1" colspan="1">46</td>
<td align="left" rowspan="1" colspan="1">4.62</td>
<td align="left" rowspan="1" colspan="1">0.000</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Anterior cingulate cortex</td>
<td align="left" rowspan="1" colspan="1">24</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">152</td>
<td align="left" rowspan="1" colspan="1">−3</td>
<td align="left" rowspan="1" colspan="1">−10</td>
<td align="left" rowspan="1" colspan="1">40</td>
<td align="left" rowspan="1" colspan="1">4.69</td>
<td align="left" rowspan="1" colspan="1">0.000</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">STG/STS</td>
<td align="left" rowspan="1" colspan="1">22</td>
<td align="left" rowspan="1" colspan="1">R</td>
<td align="left" rowspan="1" colspan="1">2622</td>
<td align="left" rowspan="1" colspan="1">57</td>
<td align="left" rowspan="1" colspan="1">−4</td>
<td align="left" rowspan="1" colspan="1">1</td>
<td align="left" rowspan="1" colspan="1">6.93</td>
<td align="left" rowspan="1" colspan="1">0.000</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Premotor cortex</td>
<td align="left" rowspan="1" colspan="1">6</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">54</td>
<td align="left" rowspan="1" colspan="1">−4</td>
<td align="left" rowspan="1" colspan="1">40</td>
<td align="left" rowspan="1" colspan="1">4.37</td>
<td align="left" rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Anterior intraparietal sulcus</td>
<td align="left" rowspan="1" colspan="1">(hIP3)</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">129</td>
<td align="left" rowspan="1" colspan="1">33</td>
<td align="left" rowspan="1" colspan="1">−49</td>
<td align="left" rowspan="1" colspan="1">52</td>
<td align="left" rowspan="1" colspan="1">4.27</td>
<td align="left" rowspan="1" colspan="1">0.001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Supplementary motor area</td>
<td align="left" rowspan="1" colspan="1">6</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">82</td>
<td align="left" rowspan="1" colspan="1">0</td>
<td align="left" rowspan="1" colspan="1">2</td>
<td align="left" rowspan="1" colspan="1">61</td>
<td align="left" rowspan="1" colspan="1">3.94</td>
<td align="left" rowspan="1" colspan="1">0.013</td>
</tr>
<tr>
<td colspan="9" align="left" style="background-color:Darkgray;" rowspan="1">
<bold>CONJUNCTION SPKpr–SPKr ∩ SNGpr–SNGr</bold>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">STG/STS (planum polare)</td>
<td align="left" rowspan="1" colspan="1">22</td>
<td align="left" rowspan="1" colspan="1">L</td>
<td align="left" rowspan="1" colspan="1">571</td>
<td align="left" rowspan="1" colspan="1">−54</td>
<td align="left" rowspan="1" colspan="1">−4</td>
<td align="left" rowspan="1" colspan="1">−2</td>
<td align="left" rowspan="1" colspan="1">Inf</td>
<td align="left" rowspan="1" colspan="1">0.000</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Premotor cortex (BA 6)</td>
<td align="left" rowspan="1" colspan="1">6</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">76</td>
<td align="left" rowspan="1" colspan="1">−51</td>
<td align="left" rowspan="1" colspan="1">−7</td>
<td align="left" rowspan="1" colspan="1">52</td>
<td align="left" rowspan="1" colspan="1">7.66</td>
<td align="left" rowspan="1" colspan="1">0.023</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">STG/STS (planum polare)</td>
<td align="left" rowspan="1" colspan="1">22</td>
<td align="left" rowspan="1" colspan="1">R</td>
<td align="left" rowspan="1" colspan="1">713</td>
<td align="left" rowspan="1" colspan="1">60</td>
<td align="left" rowspan="1" colspan="1">2</td>
<td align="left" rowspan="1" colspan="1">−2</td>
<td align="left" rowspan="1" colspan="1">7.66</td>
<td align="left" rowspan="1" colspan="1">0.000</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Premotor cortex (BA 6)</td>
<td align="left" rowspan="1" colspan="1">6</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">77</td>
<td align="left" rowspan="1" colspan="1">54</td>
<td align="left" rowspan="1" colspan="1">2</td>
<td align="left" rowspan="1" colspan="1">43</td>
<td align="left" rowspan="1" colspan="1">6.55</td>
<td align="left" rowspan="1" colspan="1">0.022</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p>
<italic>All
<italic>p</italic>
(cluster-size corrected) <0.05 in combination with
<italic>p</italic>
(uncorrected) <0.001</italic>
.</p>
</table-wrap-foot>
</table-wrap>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption>
<p>
<bold>Brain regions that distinguish between pitch–rhythm patterns and rhythm in song and speech [second vs. third level: SPKpr vs. SPKr and SNGpr vs. SNGr;
<italic>p</italic>
(cluster-size corrected) <0.05 in combination with
<italic>p</italic>
(uncorrected) <0.001]</bold>
. Bar graphs depict beta values (UVA) and accuracy values (MVPA) of the shown contrasts, extracted from the left and right BA 47 and the IPS. Significant results of the ROI analysis are indicated by an asterisk (*
<italic>p</italic>
 < 0.05). Colour scales on the right indicate
<italic>t</italic>
-values for each row. IFG, inferior frontal gyrus; IPS, intraparietal sulcus.</p>
</caption>
<graphic xlink:href="fpsyg-03-00076-g003"></graphic>
</fig>
<p>The musical pitch patterns (SNGpr–SNGr) showed further activations in the pars orbitalis of the right IFG (BA 47), the cerebellum bilaterally, the left anterior cingulate cortex (ACC), the left lateral occipital cortex, the midline of the visual cortex, the right caudate nucleus, as well as a cluster in the parietal lobe with peaks in the left precuneus and the anterior IPS (see Table
<xref ref-type="table" rid="T2">2</xref>
; Figure
<xref ref-type="fig" rid="F3">3</xref>
, top row).</p>
<p>A conjunction analysis of both contrasts showed shared activations in the STG/STS (planum polare) and in the premotor cortex bilaterally. Although the IFG, cerebellum, and IPS were differentially involved, as listed above, these differences between pitch-related processes in song and speech did not reach statistical significance in the whole-brain analysis.</p>
<p>Again, the results were suggestive of a differential lateralization of IFG activity during pitch processing in speech and song. To test this, an ANOVA with the repeated measures factors hemisphere (left/right) and modality (speech/song) as well as
<italic>t</italic>
-tests for paired samples (comparing the hemispheres within each modality) were conducted on the beta values of the contrast images extracted from ROIs in the left and right BA 47 (see
<xref ref-type="sec" rid="s1">Materials and Methods</xref>
). This analysis showed a significant interaction of hemisphere × modality [
<italic>F</italic>
(1,20) = 5.185,
<italic>p</italic>
 < 0.034], indicating that the left and right BA 47 were differentially involved in the processing of pitch patterns in speech and song. Subsequent
<italic>t</italic>
-tests showed that while left BA 47 was more strongly involved during spoken pitch processing than right BA 47 [
<italic>t</italic>
(20) = 2.837,
<italic>p</italic>
 < 0.01], no such lateralization was found for sung pitch [
<italic>t</italic>
(20),
<italic>p</italic>
 > 0.9]. Furthermore, involvement of right BA 47 was marginally stronger during pitch processing in song compared to speech [
<italic>t</italic>
(20) = −2.032,
<italic>p</italic>
 < 0.056], whereas no such difference was found for left BA 47.</p>
<p>Considering the growing evidence that the IPS is involved in the processing of pitch in music (Zatorre et al.,
<xref ref-type="bibr" rid="B80">1994</xref>
,
<xref ref-type="bibr" rid="B76">2009</xref>
; Foster and Zatorre,
<xref ref-type="bibr" rid="B15">2010</xref>
; Klein and Zatorre,
<xref ref-type="bibr" rid="B38">2011</xref>
) and as the IPS was only activated in the sung pitch contrast (SNGpr–SNGr) and not in the spoken pitch contrast (SPKpr–SPKr), an additional ROI analysis was performed to further explore differences between sung and spoken pitch. To this end, contrast values were extracted from anatomically defined ROIs in the left and right IPS (see
<xref ref-type="sec" rid="s1">Materials and Methods</xref>
) and subjected to an ANOVA for repeated measures with the factors hemisphere (left/right) and modality (speech/song). This analysis showed a significant main effect of modality [
<italic>F</italic>
(1,20) = 5.565,
<italic>p</italic>
 < 0.029] and no significant interaction of hemisphere × modality [
<italic>F</italic>
(1,20) = 1.421,
<italic>p</italic>
 > 0.3], indicating that both the left and the right IPS were more strongly activated by sung than by spoken pitch patterns.</p>
</sec>
<sec>
<title>Multivariate pattern analysis</title>
<p>The MVPA revealed brain regions that distinguish between pitch–rhythm patterns and rhythm patterns for both song and speech in the bilateral STG/STS, the bilateral premotor cortex (extending into motor and somatosensory cortex), and the SMA. For the SPKpr vs. SPKr comparison, a peak in the left IFG (BA 45) was found (see Figure
<xref ref-type="fig" rid="F3">3</xref>
, bottom row). For SNGpr vs. SNGr, additional clusters were found in the left anterior cingulate gyrus and the left anterior IPS. Converging with the UVA results, the ROI analysis on the extracted contrast values revealed that the bilateral IPS was more involved in processing pitch relations in song than in speech, as shown by a significant main effect of modality [
<italic>F</italic>
(1,20) = 7.471,
<italic>p</italic>
 < 0.013] and no significant interaction of hemisphere × modality [
<italic>F</italic>
(1,20) = 0.456,
<italic>p</italic>
 > 0.5].</p>
</sec>
</sec>
<sec>
<title>Word and pitch processing in vocal stimuli</title>
<p>To further explore whether there are brain regions that show stronger activation for words than for pitch patterns and vice versa, irrespective of whether presented as song or speech, two additional contrasts were defined (wpr–pr and pr–r) and compared (see Table
<xref ref-type="table" rid="T3">3</xref>
; Figure
<xref ref-type="fig" rid="F4">4</xref>
). The comparison of word and pitch processing [(wpr–pr)–(pr–r)] showed a stronger activation for words in the planum temporale (PT) bilaterally, and the left insula. The reverse comparison [(pr–r)–(wpr–pr)] showed activations for pitch in the planum polare of the STG bilaterally, the pars orbitalis of the right IFG (BA 47), the right premotor cortex, right SMA, left cerebellum, the left caudate and putamen, and the left parietal operculum.</p>
<table-wrap id="T3" position="float">
<label>Table 3</label>
<caption>
<p>
<bold>Brain areas involved in the processing of words and pitch in vocal stimuli</bold>
.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th colspan="9" align="left" rowspan="1">Words and pitch</th>
</tr>
<tr>
<th align="left" rowspan="1" colspan="1">Region</th>
<th align="left" rowspan="1" colspan="1">BA</th>
<th align="left" rowspan="1" colspan="1">hem</th>
<th align="left" rowspan="1" colspan="1">Cluster extent</th>
<th colspan="3" align="center" rowspan="1">MNI coordinates
<hr></hr>
</th>
<th align="left" rowspan="1" colspan="1">
<italic>Z</italic>
value</th>
<th align="left" rowspan="1" colspan="1">Cluster
<italic>p</italic>
(cor)</th>
</tr>
<tr>
<th align="left" rowspan="1" colspan="1"></th>
<th align="left" rowspan="1" colspan="1"></th>
<th align="left" rowspan="1" colspan="1"></th>
<th align="left" rowspan="1" colspan="1"></th>
<th align="left" rowspan="1" colspan="1">
<italic>x</italic>
</th>
<th align="left" rowspan="1" colspan="1">
<italic>y</italic>
</th>
<th align="left" rowspan="1" colspan="1">
<italic>z</italic>
</th>
<th colspan="1" align="left" rowspan="1"></th>
</tr>
</thead>
<tbody>
<tr>
<td colspan="9" align="left" style="background-color:Darkgray;" rowspan="1">
<bold>WORDS > PITCH</bold>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">STG (planum temporale)</td>
<td align="left" rowspan="1" colspan="1">41</td>
<td align="left" rowspan="1" colspan="1">L</td>
<td align="left" rowspan="1" colspan="1">435</td>
<td align="left" rowspan="1" colspan="1">−39</td>
<td align="left" rowspan="1" colspan="1">−34</td>
<td align="left" rowspan="1" colspan="1">10</td>
<td align="left" rowspan="1" colspan="1">6.92</td>
<td align="left" rowspan="1" colspan="1">0.000</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Insula Ig2</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">−39</td>
<td align="left" rowspan="1" colspan="1">−22</td>
<td align="left" rowspan="1" colspan="1">4</td>
<td align="left" rowspan="1" colspan="1">6.49</td>
<td align="left" rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">STG (planum temporale)</td>
<td align="left" rowspan="1" colspan="1">41</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">−45</td>
<td align="left" rowspan="1" colspan="1">−31</td>
<td align="left" rowspan="1" colspan="1">4</td>
<td align="left" rowspan="1" colspan="1">5.54</td>
<td align="left" rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">STG (planum temporale)</td>
<td align="left" rowspan="1" colspan="1">41</td>
<td align="left" rowspan="1" colspan="1">R</td>
<td align="left" rowspan="1" colspan="1">148</td>
<td align="left" rowspan="1" colspan="1">42</td>
<td align="left" rowspan="1" colspan="1">−25</td>
<td align="left" rowspan="1" colspan="1">10</td>
<td align="left" rowspan="1" colspan="1">6.98</td>
<td align="left" rowspan="1" colspan="1">0.001</td>
</tr>
<tr>
<td colspan="9" align="left" style="background-color:Darkgray;" rowspan="1">
<bold>PITCH > WORDS</bold>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">STG (planum polare)</td>
<td align="left" rowspan="1" colspan="1">22</td>
<td align="left" rowspan="1" colspan="1">L</td>
<td align="left" rowspan="1" colspan="1">84</td>
<td align="left" rowspan="1" colspan="1">−51</td>
<td align="left" rowspan="1" colspan="1">−4</td>
<td align="left" rowspan="1" colspan="1">−2</td>
<td align="left" rowspan="1" colspan="1">5.24</td>
<td align="left" rowspan="1" colspan="1">0.015</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Parietal operculum OP4</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">−51</td>
<td align="left" rowspan="1" colspan="1">−1</td>
<td align="left" rowspan="1" colspan="1">13</td>
<td align="left" rowspan="1" colspan="1">3.47</td>
<td align="left" rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Cerebellum VI lobule</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">95</td>
<td align="left" rowspan="1" colspan="1">−30</td>
<td align="left" rowspan="1" colspan="1">−61</td>
<td align="left" rowspan="1" colspan="1">−26</td>
<td align="left" rowspan="1" colspan="1">4.97</td>
<td align="left" rowspan="1" colspan="1">0.009</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Caudate nucleus</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">128</td>
<td align="left" rowspan="1" colspan="1">−18</td>
<td align="left" rowspan="1" colspan="1">−7</td>
<td align="left" rowspan="1" colspan="1">19</td>
<td align="left" rowspan="1" colspan="1">4.12</td>
<td align="left" rowspan="1" colspan="1">0.002</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Putamen</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">−21</td>
<td align="left" rowspan="1" colspan="1">−1</td>
<td align="left" rowspan="1" colspan="1">13</td>
<td align="left" rowspan="1" colspan="1">4.11</td>
<td align="left" rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Premotor cortex</td>
<td align="left" rowspan="1" colspan="1">6</td>
<td align="left" rowspan="1" colspan="1">R</td>
<td align="left" rowspan="1" colspan="1">83</td>
<td align="left" rowspan="1" colspan="1">54</td>
<td align="left" rowspan="1" colspan="1">5</td>
<td align="left" rowspan="1" colspan="1">43</td>
<td align="left" rowspan="1" colspan="1">6.56</td>
<td align="left" rowspan="1" colspan="1">0.016</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Supplementary motor area</td>
<td align="left" rowspan="1" colspan="1">6</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">237</td>
<td align="left" rowspan="1" colspan="1">6</td>
<td align="left" rowspan="1" colspan="1">5</td>
<td align="left" rowspan="1" colspan="1">64</td>
<td align="left" rowspan="1" colspan="1">6.07</td>
<td align="left" rowspan="1" colspan="1">0.000</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">STG (planum polare)</td>
<td align="left" rowspan="1" colspan="1">22</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">368</td>
<td align="left" rowspan="1" colspan="1">51</td>
<td align="left" rowspan="1" colspan="1">5</td>
<td align="left" rowspan="1" colspan="1">−5</td>
<td align="left" rowspan="1" colspan="1">5.36</td>
<td align="left" rowspan="1" colspan="1">0.000</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">IFG</td>
<td align="left" rowspan="1" colspan="1">47</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">48</td>
<td align="left" rowspan="1" colspan="1">26</td>
<td align="left" rowspan="1" colspan="1">−5</td>
<td align="left" rowspan="1" colspan="1">4.64</td>
<td align="left" rowspan="1" colspan="1"></td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p>
<italic>All
<italic>p</italic>
(cluster-size corrected) <0.05 in combination with
<italic>p</italic>
(uncorrected) <0.001</italic>
.</p>
</table-wrap-foot>
</table-wrap>
<fig id="F4" position="float">
<label>Figure 4</label>
<caption>
<p>
<bold>Comparison of word and pitch processing in vocal stimuli</bold>
. Words-pitch (red) [(wpr–pr)–(pr–r)], pitch-words (blue) [(pr–r)–(wpr–pr)][
<italic>p</italic>
(cluster-size corrected) <0.05 in combination with
<italic>p</italic>
(uncorrected) <0.001].</p>
</caption>
<graphic xlink:href="fpsyg-03-00076-g004"></graphic>
</fig>
</sec>
</sec>
<sec>
<title>Discussion</title>
<p>The goal of the present study was to clarify how the human brain responds to different parameters in song and speech, and to what extent the neural discrimination relies on phonological and vocalization differences between spoken and sung words and on the discrete and gliding pitches of song melody and speech prosody. Based on UVA and MVPA of the functional brain activity, three main results were obtained: Firstly, song and speech recruited a largely overlapping bilateral temporo-frontal network in which the STG and the premotor cortex were found to code for differences between words and pitch independent of song and speech. Secondly, the left IFG coded for spoken words and showed dominance over the right IFG for pitch in speech, whereas an opposite lateralization was found for pitch in song. Thirdly, the IPS responded more strongly to discrete pitch relations in song than to pitch in speech.</p>
<p>We will discuss the neuroanatomical findings and their functional significance in more detail below.</p>
<sec>
<title>Inferior frontal gyrus</title>
<p>The IFG was involved with a differential hemispheric preponderance depending on whether words or melodies were presented in song or speech. The results suggest that the left IFG shows relative predominance in differentiating words and melodies in speech (compared to song), whereas the right IFG (compared to the left) shows predominance in discriminating words from melodies in song. (This effect was found in the MVPA only, demonstrating the higher sensitivity of MVPA to the differential fine-scale coding of information.) The left IFG involvement in speech most likely reflects the focused processing of segmental linguistic information, such as lexical semantics and syntax (for a review, see Bookheimer,
<xref ref-type="bibr" rid="B5">2002</xref>
; Friederici,
<xref ref-type="bibr" rid="B16">2002</xref>
) to decode the message of the heard sentence. The right IFG involvement in song might be due to the specific way sung words are vocalized – as for example characterized by a lengthening of vowels. The right hemisphere is known to process auditory information at broader time scales than the left hemisphere (Giraud et al.,
<xref ref-type="bibr" rid="B21">2004</xref>
; Poeppel et al.,
<xref ref-type="bibr" rid="B56">2004</xref>
; Boemio et al.,
<xref ref-type="bibr" rid="B3">2005</xref>
). This may explain why the right IFG showed specific sensitivity to sung words. Alternatively, due to the non-directional nature of MVPA results, the right frontal involvement may also reflect the predominant processing of pitch in song. Although our right IFG result stands in apparent contrast to the left IFG activations observed in a UVA for sung words over vocalise by Schön et al. (
<xref ref-type="bibr" rid="B63">2010</xref>
), this discrepancy may be due to the different analysis methods and stimulus materials employed. Single words, when sung as in Schön et al. (
<xref ref-type="bibr" rid="B63">2010</xref>
), may draw more attention to segmental information (e.g., meaning) and thus lead to stronger left-hemispheric involvement than sung sentences (as used in the present study).</p>
<p>The processing of prosodic pitch patterns involved the left IFG (more than the right IFG), whereas melodic pitch patterns activated the right IFG (more than prosodic pitch patterns). The right IFG activation in melody processing is in line with previous results in music (Zatorre et al.,
<xref ref-type="bibr" rid="B80">1994</xref>
; Koelsch and Siebel,
<xref ref-type="bibr" rid="B40">2005</xref>
; Schmithorst,
<xref ref-type="bibr" rid="B62">2005</xref>
; Tillmann et al.,
<xref ref-type="bibr" rid="B68">2006</xref>
). Furthermore, this result, along with the overall stronger involvement of the right IFG in pitch compared to word processing (Figure
<xref ref-type="fig" rid="F4">4</xref>
), is in keeping with the preference of the right hemisphere for processing spectral (as opposed to temporal) stimulus properties (Zatorre and Belin,
<xref ref-type="bibr" rid="B78">2001</xref>
; Zatorre et al.,
<xref ref-type="bibr" rid="B79">2002</xref>
; Jamison et al.,
<xref ref-type="bibr" rid="B33">2006</xref>
; Obleser et al.,
<xref ref-type="bibr" rid="B48">2008</xref>
).</p>
<p>The left-hemispheric predominance for prosodic pitch is most likely driven by the language-relatedness of the stimuli, superseding the right-hemispheric competence for processing spectral information. The lateralization of prosodic processing has been a matter of debate, with functional neuroimaging evidence for both a left (Gandour et al.,
<xref ref-type="bibr" rid="B20">2000</xref>
,
<xref ref-type="bibr" rid="B18">2003</xref>
; Hsieh et al.,
<xref ref-type="bibr" rid="B31">2001</xref>
; Klein et al.,
<xref ref-type="bibr" rid="B39">2001</xref>
) and a right-hemispheric predominance (Meyer et al.,
<xref ref-type="bibr" rid="B44">2002</xref>
,
<xref ref-type="bibr" rid="B45">2004</xref>
; Plante et al.,
<xref ref-type="bibr" rid="B55">2002</xref>
; Wildgruber et al.,
<xref ref-type="bibr" rid="B73">2002</xref>
; Gandour et al.,
<xref ref-type="bibr" rid="B18">2003</xref>
). Recent views suggest that the lateralization can be modulated by the function of pitch in language and task demands (Plante et al.,
<xref ref-type="bibr" rid="B55">2002</xref>
; Kotz et al.,
<xref ref-type="bibr" rid="B41">2003</xref>
; Gandour et al.,
<xref ref-type="bibr" rid="B19">2004</xref>
). For example, Gandour et al. (
<xref ref-type="bibr" rid="B19">2004</xref>
) found that pitch in tonal languages was processed in left-lateralized areas when associated with semantic meaning (in native tonal language speakers) and in right-lateralized areas when analyzed by lower-level acoustic/auditory processes (in English speakers who were unaware of the semantic content).</p>
<p>Furthermore, Kotz et al. (
<xref ref-type="bibr" rid="B41">2003</xref>
) found that randomly switching between prosodic (i.e., filtered) and normal speech in an event-related paradigm led to an overall left-hemispheric predominance for processing emotional prosody, which might be due to the carry-over of a “speech mode” of auditory processing to filtered speech triggered by the normal speech trials. In line with these findings, our participants may have associated the prosodic pitch patterns with normal speech in order to perform the task, leading to an involvement of language-related areas in the left IFG.</p>
<p>On a more abstract level, the combined results on speech prosody and musical melody suggest that the lateralization of pitch patterns in the brain may be determined by their function (speech- or song-related) and not their form (being pitch modulations in both speech and song; Friederici,
<xref ref-type="bibr" rid="B17">2011</xref>
).</p>
</sec>
<sec>
<title>Intraparietal sulcus</title>
<p>The left and right IPS were found to play a significant role in processing musical rather than prosodic pitch. The IPS has been discussed with respect to a number of functions. It is known to be specialized in spatial processing, integrating visual, tactile, auditory, and/or motor information (for a review, see Grefkes and Fink,
<xref ref-type="bibr" rid="B22">2005</xref>
). It also seems to be involved in non-spatial operations, such as manipulating working memory contents and maintaining or controlling attention (Husain and Nachev,
<xref ref-type="bibr" rid="B32">2007</xref>
).</p>
<p>Related to the present study, the role of the IPS in pitch processing has attracted increasing attention. In an early study, Zatorre et al. (
<xref ref-type="bibr" rid="B80">1994</xref>
) found bilateral activation in the inferior parietal lobe for a pitch judgment task (pitch processing) and suggested that a recoding of pitch information might take place during the performance of that task. More recent studies extended this interpretation, suggesting that the IPS is involved in the more general processing of pitch intervals and the transformation of auditory information. This idea is supported by the findings of Zatorre et al. (
<xref ref-type="bibr" rid="B76">2009</xref>
) showing an IPS involvement in the mental reversal of imagined melodies, the encoding of relative pitch by comparing transposed with simple melodies (Foster and Zatorre,
<xref ref-type="bibr" rid="B15">2010</xref>
), as well as the categorical perception of major and minor chords (Klein and Zatorre,
<xref ref-type="bibr" rid="B38">2011</xref>
).</p>
<p>While these results suggest that the IPS involvement for pitch patterns in song reflects the processing of different interval types or relative pitch
<italic>per se</italic>
, it remains to be explained why no similar activation was found in speech (i.e., comparing prosody against its underlying rhythm). It could be argued that the IPS is particularly involved in the processing of the discrete pitches and fixed intervals typical of song, and not when perceiving gliding pitches and continuous pitch shifts as in speech. Indeed, to the best of our knowledge, no study on prosodic processing has ever reported IPS activations, possibly highlighting the IPS as one brain area that discriminates between discrete and gliding pitch as a core difference between song and speech (Fitch,
<xref ref-type="bibr" rid="B14">2006</xref>
; Patel,
<xref ref-type="bibr" rid="B51">2008</xref>
). Further evidence for this hypothesis needs to be collected in future studies.</p>
</sec>
<sec>
<title>Superior temporal cortex</title>
<p>The temporal lobe exhibited significant overlap between the processing of song and speech at all stimulus levels. Interestingly, however, words and pitch (irrespective of whether presented as speech or song) showed different activation patterns in the temporal lobe. Beyond the antero-lateral STG that was jointly activated by words and pitch, activation for words extended further ventrally and posteriorly relative to Heschl’s gyrus, whereas activation for pitch patterns spread medially and anteriorly.</p>
<p>These results are in line with processing streams for pitch described in the literature. For example, Patterson et al. (
<xref ref-type="bibr" rid="B52">2002</xref>
) described a hierarchy of pitch processing in the temporal lobe. As the processing of auditory sounds proceeded from no pitch (noise) via fixed pitch towards melody, the centre of activity moved antero-laterally away from primary auditory cortex, reflecting the representation of increasingly complex pitch patterns, such as the ones employed in the present study.</p>
<p>Likewise, posterior temporal brain areas, in particular the PT, have been specifically described in the fine-grained analysis of spectro-temporally complex stimuli (Griffiths and Warren,
<xref ref-type="bibr" rid="B23">2002</xref>
; Warren et al.,
<xref ref-type="bibr" rid="B70">2005</xref>
; Schönwiesner and Zatorre,
<xref ref-type="bibr" rid="B64">2008</xref>
; Samson et al.,
<xref ref-type="bibr" rid="B60">2011</xref>
) and phonological processing in human speech (Chang et al.,
<xref ref-type="bibr" rid="B12">2010</xref>
). Accordingly, the fact that the PT in our study (location confirmed according to Westbury et al.,
<xref ref-type="bibr" rid="B71">1999</xref>
) showed stronger activation in the contrast of words over pitch for both song and speech may be due to the greater spectro-temporal complexity of the “word” stimuli (grounded in, e.g., the fast-changing variety of high-band formants in the speech sounds) compared to the hummed “pitch” stimuli.</p>
</sec>
<sec>
<title>(Pre)motor areas</title>
<p>A number of brain areas classically associated with motor control, i.e., BA 2, 4, and 6, the SMA, ACC, caudate nucleus, and putamen, consistently showed activation in our study. This is in line with previous work showing that premotor and motor areas are activated not only in vocal production, but also in passive perception (Callan et al.,
<xref ref-type="bibr" rid="B11">2006</xref>
; Saito et al.,
<xref ref-type="bibr" rid="B58">2006</xref>
; Sammler et al.,
<xref ref-type="bibr" rid="B59">2010</xref>
; Schön et al.,
<xref ref-type="bibr" rid="B63">2010</xref>
), the discrimination of acoustic stimuli (Zatorre et al.,
<xref ref-type="bibr" rid="B75">1992</xref>
; Brown and Martinez,
<xref ref-type="bibr" rid="B6">2007</xref>
), processes for sub-vocal rehearsal and low-level vocal motor control (ACC; Perry et al.,
<xref ref-type="bibr" rid="B53">1999</xref>
), vocal imagery (SMA; Halpern and Zatorre,
<xref ref-type="bibr" rid="B25">1999</xref>
), or more generally auditory-to-articulatory mapping (PMC; Hickok et al.,
<xref ref-type="bibr" rid="B30">2003</xref>
; Wilson et al.,
<xref ref-type="bibr" rid="B74">2004</xref>
; Brown et al.,
<xref ref-type="bibr" rid="B8">2008</xref>
; Kleber et al.,
<xref ref-type="bibr" rid="B37">2010</xref>
). Indeed, our participants reported that they had tried to speak or sing along with the stimuli in their heads and, thus, most likely recruited a subset of the above-mentioned processes.</p>
<p>In keeping with this, the precentral activation observed in the present study is close to the larynx-phonation area (LPA) identified by Brown et al. (
<xref ref-type="bibr" rid="B8">2008</xref>
) that is thought to mediate both vocalization and audition.</p>
</sec>
<sec>
<title>Other areas</title>
<sec>
<title>Cerebellum</title>
<p>We also found effects in the cerebellum, another area associated with motor control (for an overview, see Stoodley and Schmahmann,
<xref ref-type="bibr" rid="B66">2009</xref>
). Apart from that, the cerebellar discrimination between spoken words and prosodic pitch patterns (left crus I/lobule VI) as well as between musical pitch patterns and musical rhythm (bilaterally, widely distributed, with peaks in lobule VI) fits with the cerebellum’s multiple roles in language tasks (bilateral lobule VI; Stoodley and Schmahmann,
<xref ref-type="bibr" rid="B66">2009</xref>
), sensory auditory processing (especially the left lateral crus I; Petacchi et al.,
<xref ref-type="bibr" rid="B54">2005</xref>
), and motor articulation and perception, including the instantiation of internal models of vocal tract articulation (lobule VI; for an overview, see Callan et al.,
<xref ref-type="bibr" rid="B10">2007</xref>
).</p>
</sec>
<sec>
<title>Visual cortex/occipital lobe</title>
<p>Activations observed in the visual cortex (BA 17, 18) seemed to be connected with processing pitch or melodic information. Previous findings support this idea, as similar regions were activated during pitch processing (Zatorre et al.,
<xref ref-type="bibr" rid="B75">1992</xref>
), listening to melodies (Zatorre et al.,
<xref ref-type="bibr" rid="B80">1994</xref>
; Foster and Zatorre,
<xref ref-type="bibr" rid="B15">2010</xref>
), and singing production (Perry et al.,
<xref ref-type="bibr" rid="B53">1999</xref>
; Kleber et al.,
<xref ref-type="bibr" rid="B36">2007</xref>
). Note that visual prompts did not seem to be responsible, as in Perry et al. (
<xref ref-type="bibr" rid="B53">1999</xref>
), for example, participants had their eyes closed, and in the current study participants followed the same visual prompts in all conditions. Following Perry et al. (
<xref ref-type="bibr" rid="B53">1999</xref>
) and Foster and Zatorre (
<xref ref-type="bibr" rid="B15">2010</xref>
), this activation might be due to mental visual imagery.</p>
</sec>
</sec>
</sec>
<sec>
<title>Conclusion</title>
<p>In summary, the subtractive hierarchy used in the study provided a further step in uncovering brain areas involved in the perception of song and speech. Apart from a considerable overlap of song- and speech-related brain areas, the IFG and IPS were identified as candidate structures involved in discriminating words and pitch patterns in song and speech. While the left IFG coded for spoken words and showed predominance over the right IFG in pitch processing in speech, the right IFG showed predominance over the left for pitch processing in song.</p>
<p>Furthermore, the IPS emerged as a core area for the processing of musical (i.e., discrete) pitches and intervals as opposed to gliding pitch in speech.</p>
<p>Overall, the data show that subtle differences in stimulus characteristics between speech and song can be dissected and are reflected in differential brain activity, on top of a considerable overlap.</p>
</sec>
<sec>
<title>Conflict of Interest Statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</body>
<back>
<ack>
<p>We would like to thank the MPI fMRI staff for technical help for data acquisition, Karsten Müller, Jens Brauer, Carsten Bogler, Johannes Stelzer, and Stefan Kiebel for methodological support, Sven Gutekunst for programming the presentation, Kerstin Flake for help with figures, and Jonas Obleser for helpful discussion. This work was supported by a grant from the German Ministry of Education and Research (BMBF; Grant 01GW0773).</p>
</ack>
<fn-group>
<fn id="fn1">
<p>
<sup>1</sup>
<uri xlink:type="simple" xlink:href="http://www.mccauslandcenter.sc.edu/mricro/mricron/">http://www.mccauslandcenter.sc.edu/mricro/mricron/</uri>
</p>
</fn>
<fn id="fn2">
<p>
<sup>2</sup>
<uri xlink:type="simple" xlink:href="http://marsbar.sourceforge.net">http://marsbar.sourceforge.net</uri>
</p>
</fn>
</fn-group>
<ref-list>
<title>References</title>
<ref id="B1">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Abrams</surname>
<given-names>D. A.</given-names>
</name>
<name>
<surname>Bhatara</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Ryali</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Balaban</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Levitin</surname>
<given-names>D. J.</given-names>
</name>
<name>
<surname>Menon</surname>
<given-names>V.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Decoding temporal structure in music and speech relies on shared brain resources but elicits different fine-scale spatial patterns</article-title>
.
<source>Cereb. Cortex</source>
<volume>21</volume>
,
<fpage>1507</fpage>
<lpage>1518</lpage>
<pub-id pub-id-type="doi">10.1093/cercor/bhq198</pub-id>
<pub-id pub-id-type="pmid">21071617</pub-id>
</mixed-citation>
</ref>
<ref id="B2">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bode</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Bogler</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Soon</surname>
<given-names>C. S.</given-names>
</name>
<name>
<surname>Haynes</surname>
<given-names>J. D.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>The neural encoding of guesses in the human brain</article-title>
.
<source>Neuroimage</source>
<volume>59</volume>
,
<fpage>1924</fpage>
<lpage>1931</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuroimage.2011.08.106</pub-id>
<pub-id pub-id-type="pmid">21933719</pub-id>
</mixed-citation>
</ref>
<ref id="B3">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Boemio</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Fromm</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Braun</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Poeppel</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Hierarchical and asymmetric temporal sensitivity in human auditory cortex</article-title>
.
<source>Nat. Neurosci.</source>
<volume>8</volume>
,
<fpage>389</fpage>
<lpage>395</lpage>
<pub-id pub-id-type="doi">10.1038/nn1409</pub-id>
<pub-id pub-id-type="pmid">15723061</pub-id>
</mixed-citation>
</ref>
<ref id="B4">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bogler</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Bode</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Haynes</surname>
<given-names>J. D.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Decoding successive computational stages of saliency processing</article-title>
.
<source>Curr. Biol.</source>
<volume>21</volume>
,
<fpage>1667</fpage>
<lpage>1671</lpage>
<pub-id pub-id-type="doi">10.1016/j.cub.2011.08.039</pub-id>
<pub-id pub-id-type="pmid">21962709</pub-id>
</mixed-citation>
</ref>
<ref id="B5">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bookheimer</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>Functional MRI of language: new approaches to understanding the cortical organization of semantic processing</article-title>
.
<source>Annu. Rev. Neurosci.</source>
<volume>25</volume>
,
<fpage>151</fpage>
<lpage>188</lpage>
<pub-id pub-id-type="doi">10.1146/annurev.neuro.25.112701.142946</pub-id>
<pub-id pub-id-type="pmid">12052907</pub-id>
</mixed-citation>
</ref>
<ref id="B6">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Brown</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Martinez</surname>
<given-names>M. J.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Activation of premotor vocal areas during musical discrimination</article-title>
.
<source>Brain Cogn.</source>
<volume>63</volume>
,
<fpage>59</fpage>
<lpage>69</lpage>
<pub-id pub-id-type="doi">10.1016/j.bandc.2006.08.006</pub-id>
<pub-id pub-id-type="pmid">17027134</pub-id>
</mixed-citation>
</ref>
<ref id="B7">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Brown</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Martinez</surname>
<given-names>M. J.</given-names>
</name>
<name>
<surname>Parsons</surname>
<given-names>L. M.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Music and language side by side in the brain: a PET study of the generation of melodies and sentences</article-title>
.
<source>Eur. J. Neurosci.</source>
<volume>23</volume>
,
<fpage>2791</fpage>
<lpage>2803</lpage>
<pub-id pub-id-type="doi">10.1111/j.1460-9568.2006.04785.x</pub-id>
<pub-id pub-id-type="pmid">16817882</pub-id>
</mixed-citation>
</ref>
<ref id="B8">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Brown</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Ngan</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Liotti</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>A larynx area in the human motor cortex</article-title>
.
<source>Cereb. Cortex</source>
<volume>18</volume>
,
<fpage>837</fpage>
<lpage>845</lpage>
<pub-id pub-id-type="doi">10.1093/cercor/bhm131</pub-id>
<pub-id pub-id-type="pmid">17652461</pub-id>
</mixed-citation>
</ref>
<ref id="B9">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Brown</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Weishaar</surname>
<given-names>K.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Speech is heterometric: the changing rhythms of speech</article-title>
.
<source>Speech Prosody</source>
<volume>100074</volume>
,
<fpage>1</fpage>
<lpage>4</lpage>
</mixed-citation>
</ref>
<ref id="B10">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Callan</surname>
<given-names>D. E.</given-names>
</name>
<name>
<surname>Kawato</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Parsons</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Turner</surname>
<given-names>R.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Speech and song: the role of the cerebellum</article-title>
.
<source>Cerebellum</source>
<volume>6</volume>
,
<fpage>321</fpage>
<lpage>327</lpage>
<pub-id pub-id-type="doi">10.1080/14734220601187733</pub-id>
</mixed-citation>
</ref>
<ref id="B11">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Callan</surname>
<given-names>D. E.</given-names>
</name>
<name>
<surname>Tsytsarev</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Hanakawa</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Callan</surname>
<given-names>A. M.</given-names>
</name>
<name>
<surname>Katsuhara</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Fukuyama</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Turner</surname>
<given-names>R.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Song and speech: brain regions involved with perception and covert production</article-title>
.
<source>Neuroimage</source>
<volume>31</volume>
,
<fpage>1327</fpage>
<lpage>1342</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuroimage.2006.01.036</pub-id>
<pub-id pub-id-type="pmid">16546406</pub-id>
</mixed-citation>
</ref>
<ref id="B12">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chang</surname>
<given-names>E. F.</given-names>
</name>
<name>
<surname>Rieger</surname>
<given-names>J. W.</given-names>
</name>
<name>
<surname>Johnson</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Berger</surname>
<given-names>M. S.</given-names>
</name>
<name>
<surname>Barbaro</surname>
<given-names>N. M.</given-names>
</name>
<name>
<surname>Knight</surname>
<given-names>R. T.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Categorical speech representation in human superior temporal gyrus</article-title>
.
<source>Nat. Neurosci.</source>
<volume>13</volume>
,
<fpage>1428</fpage>
<lpage>1433</lpage>
<pub-id pub-id-type="doi">10.1038/nn.2621</pub-id>
<pub-id pub-id-type="pmid">20890293</pub-id>
</mixed-citation>
</ref>
<ref id="B13">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Eickhoff</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Stephan</surname>
<given-names>K. E.</given-names>
</name>
<name>
<surname>Mohlberg</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Grefkes</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Fink</surname>
<given-names>G. R.</given-names>
</name>
<name>
<surname>Amunts</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Zilles</surname>
<given-names>K.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>A new SPM toolbox for combining probabilistic cytoarchitectonic maps and functional imaging data</article-title>
.
<source>Neuroimage</source>
<volume>25</volume>
,
<fpage>1325</fpage>
<lpage>1335</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuroimage.2004.12.034</pub-id>
<pub-id pub-id-type="pmid">15850749</pub-id>
</mixed-citation>
</ref>
<ref id="B14">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fitch</surname>
<given-names>W. T.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>The biology and evolution of music: a comparative perspective</article-title>
.
<source>Cognition</source>
<volume>100</volume>
,
<fpage>173</fpage>
<lpage>215</lpage>
<pub-id pub-id-type="doi">10.1016/j.cognition.2005.11.009</pub-id>
<pub-id pub-id-type="pmid">16412411</pub-id>
</mixed-citation>
</ref>
<ref id="B15">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Foster</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Zatorre</surname>
<given-names>R.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>A role for the intraparietal sulcus in transforming musical pitch information</article-title>
.
<source>Cereb. Cortex</source>
<volume>20</volume>
,
<fpage>1350</fpage>
<lpage>1359</lpage>
<pub-id pub-id-type="doi">10.1093/cercor/bhp199</pub-id>
<pub-id pub-id-type="pmid">19789184</pub-id>
</mixed-citation>
</ref>
<ref id="B16">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Friederici</surname>
<given-names>A. D.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>Towards a neural basis of auditory sentence processing</article-title>
.
<source>Trends Cogn. Sci. (Regul. Ed.)</source>
<volume>6</volume>
,
<fpage>78</fpage>
<lpage>84</lpage>
<pub-id pub-id-type="doi">10.1016/S1364-6613(00)01839-8</pub-id>
<pub-id pub-id-type="pmid">15866191</pub-id>
</mixed-citation>
</ref>
<ref id="B17">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Friederici</surname>
<given-names>A. D.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>The brain basis of language: from structure to function</article-title>
.
<source>Physiol. Rev.</source>
<volume>91</volume>
,
<fpage>1357</fpage>
<lpage>1392</lpage>
<pub-id pub-id-type="doi">10.1152/physrev.00006.2011</pub-id>
<pub-id pub-id-type="pmid">22013214</pub-id>
</mixed-citation>
</ref>
<ref id="B18">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gandour</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Dzemidzic</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Wong</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Lowe</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Tong</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Hsieh</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Satthamnuwong</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Lurito</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Temporal integration of speech prosody is shaped by language experience: an fMRI study</article-title>
.
<source>Brain Lang.</source>
<volume>84</volume>
,
<fpage>318</fpage>
<lpage>336</lpage>
<pub-id pub-id-type="doi">10.1016/S0093-934X(02)00505-9</pub-id>
<pub-id pub-id-type="pmid">12662974</pub-id>
</mixed-citation>
</ref>
<ref id="B19">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gandour</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Tong</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Wong</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Talavage</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Dzemidzic</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Xu</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Lowew</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Hemispheric roles in the perception of speech prosody</article-title>
.
<source>Neuroimage</source>
<volume>23</volume>
,
<fpage>344</fpage>
<lpage>357</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuroimage.2004.06.004</pub-id>
<pub-id pub-id-type="pmid">15325382</pub-id>
</mixed-citation>
</ref>
<ref id="B20">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gandour</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Wong</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Hsieh</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Weinzapfel</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Van Lancker</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Hutchins</surname>
<given-names>G. D.</given-names>
</name>
</person-group>
(
<year>2000</year>
).
<article-title>A crosslinguistic PET study of tone perception</article-title>
.
<source>J. Cogn. Neurosci.</source>
<volume>12</volume>
,
<fpage>207</fpage>
<lpage>222</lpage>
<pub-id pub-id-type="doi">10.1162/089892900561841</pub-id>
<pub-id pub-id-type="pmid">10769317</pub-id>
</mixed-citation>
</ref>
<ref id="B21">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Giraud</surname>
<given-names>A. L.</given-names>
</name>
<name>
<surname>Kell</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Thierfelder</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Sterzer</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Russ</surname>
<given-names>M. O.</given-names>
</name>
<name>
<surname>Preibisch</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Kleinschmidt</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Contributions of sensory input, auditory search and verbal comprehension to cortical activity during speech processing</article-title>
.
<source>Cereb. Cortex</source>
<volume>14</volume>
,
<fpage>247</fpage>
<lpage>255</lpage>
<pub-id pub-id-type="doi">10.1093/cercor/bhg124</pub-id>
<pub-id pub-id-type="pmid">14754865</pub-id>
</mixed-citation>
</ref>
<ref id="B22">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Grefkes</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Fink</surname>
<given-names>G. R.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>The functional organization of the intraparietal sulcus in humans and monkeys</article-title>
.
<source>J. Anat.</source>
<volume>207</volume>
,
<fpage>3</fpage>
<lpage>17</lpage>
<pub-id pub-id-type="doi">10.1111/j.1469-7580.2005.00426.x</pub-id>
<pub-id pub-id-type="pmid">16011542</pub-id>
</mixed-citation>
</ref>
<ref id="B23">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Griffiths</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Warren</surname>
<given-names>J. D.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>The planum temporale as a computational hub</article-title>
.
<source>Trends Neurosci.</source>
<volume>25</volume>
,
<fpage>348</fpage>
<lpage>353</lpage>
<pub-id pub-id-type="doi">10.1016/S0166-2236(02)02191-4</pub-id>
<pub-id pub-id-type="pmid">12079762</pub-id>
</mixed-citation>
</ref>
<ref id="B24">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gunji</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Ishii</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Chau</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Kakigi</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Pantev</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Rhythmic brain activities related to singing in humans</article-title>
.
<source>Neuroimage</source>
<volume>34</volume>
,
<fpage>426</fpage>
<lpage>434</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuroimage.2006.07.018</pub-id>
<pub-id pub-id-type="pmid">17049276</pub-id>
</mixed-citation>
</ref>
<ref id="B25">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Halpern</surname>
<given-names>A. R.</given-names>
</name>
<name>
<surname>Zatorre</surname>
<given-names>R. J.</given-names>
</name>
</person-group>
(
<year>1999</year>
).
<article-title>When that tune runs through your head: a PET investigation of auditory imagery for familiar melodies</article-title>
.
<source>Cereb. Cortex</source>
<volume>9</volume>
,
<fpage>697</fpage>
<lpage>704</lpage>
<pub-id pub-id-type="doi">10.1093/cercor/9.7.697</pub-id>
<pub-id pub-id-type="pmid">10554992</pub-id>
</mixed-citation>
</ref>
<ref id="B26">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hanke</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Halchenko</surname>
<given-names>Y. O.</given-names>
</name>
<name>
<surname>Sederberg</surname>
<given-names>P. B.</given-names>
</name>
<name>
<surname>Hanson</surname>
<given-names>S. J.</given-names>
</name>
<name>
<surname>Haxby</surname>
<given-names>J. V.</given-names>
</name>
<name>
<surname>Pollmann</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>PyMVPA: a Python toolbox for multivariate pattern analysis of fMRI data</article-title>
.
<source>Neuroinformatics</source>
<volume>7</volume>
,
<fpage>37</fpage>
<lpage>53</lpage>
<pub-id pub-id-type="doi">10.1007/s12021-008-9041-y</pub-id>
<pub-id pub-id-type="pmid">19184561</pub-id>
</mixed-citation>
</ref>
<ref id="B27">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Haxby</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Gobbini</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Furey</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Ishai</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Schouten</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Pietrini</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>Distributed and overlapping representations of faces and objects in ventral temporal cortex</article-title>
.
<source>Science</source>
<volume>293</volume>
,
<fpage>2425</fpage>
<lpage>2430</lpage>
<pub-id pub-id-type="doi">10.1126/science.1063736</pub-id>
<pub-id pub-id-type="pmid">11577229</pub-id>
</mixed-citation>
</ref>
<ref id="B28">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Haynes</surname>
<given-names>J. D.</given-names>
</name>
<name>
<surname>Rees</surname>
<given-names>G.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Predicting the orientation of invisible stimuli from activity in human primary visual cortex</article-title>
.
<source>Nat. Neurosci.</source>
<volume>8</volume>
,
<fpage>686</fpage>
<lpage>691</lpage>
<pub-id pub-id-type="doi">10.1038/nn1445</pub-id>
<pub-id pub-id-type="pmid">15852013</pub-id>
</mixed-citation>
</ref>
<ref id="B29">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Haynes</surname>
<given-names>J. D.</given-names>
</name>
<name>
<surname>Rees</surname>
<given-names>G.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Decoding mental states from brain activity in humans</article-title>
.
<source>Nat. Rev. Neurosci.</source>
<volume>7</volume>
,
<fpage>523</fpage>
<lpage>534</lpage>
<pub-id pub-id-type="doi">10.1038/nrn1931</pub-id>
<pub-id pub-id-type="pmid">16791142</pub-id>
</mixed-citation>
</ref>
<ref id="B30">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hickok</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Buchsbaum</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Humphries</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Muftuler</surname>
<given-names>T.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Auditory-motor interaction revealed by fMRI: speech, music, and working memory in area Spt</article-title>
.
<source>J. Cogn. Neurosci.</source>
<volume>15</volume>
,
<fpage>673</fpage>
<lpage>682</lpage>
<pub-id pub-id-type="doi">10.1162/089892903322307393</pub-id>
<pub-id pub-id-type="pmid">12965041</pub-id>
</mixed-citation>
</ref>
<ref id="B31">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hsieh</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Gandour</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Wong</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Hutchins</surname>
<given-names>G. D.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>Functional heterogeneity of inferior frontal gyrus is shaped by linguistic experience</article-title>
.
<source>Brain Lang.</source>
<volume>76</volume>
,
<fpage>227</fpage>
<lpage>252</lpage>
<pub-id pub-id-type="doi">10.1006/brln.2000.2382</pub-id>
<pub-id pub-id-type="pmid">11247643</pub-id>
</mixed-citation>
</ref>
<ref id="B32">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Husain</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Nachev</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Space and the parietal cortex</article-title>
.
<source>Trends Cogn. Sci. (Regul. Ed.)</source>
<volume>11</volume>
,
<fpage>30</fpage>
<lpage>36</lpage>
<pub-id pub-id-type="doi">10.1016/j.tics.2006.10.011</pub-id>
<pub-id pub-id-type="pmid">17134935</pub-id>
</mixed-citation>
</ref>
<ref id="B33">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jamison</surname>
<given-names>H. L.</given-names>
</name>
<name>
<surname>Watkins</surname>
<given-names>K. E.</given-names>
</name>
<name>
<surname>Bishop</surname>
<given-names>D. V. M.</given-names>
</name>
<name>
<surname>Matthews</surname>
<given-names>P. M.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Hemispheric specialization for processing auditory nonspeech stimuli</article-title>
.
<source>Cereb. Cortex</source>
<volume>16</volume>
,
<fpage>1266</fpage>
<lpage>1275</lpage>
<pub-id pub-id-type="doi">10.1093/cercor/bhj068</pub-id>
<pub-id pub-id-type="pmid">16280465</pub-id>
</mixed-citation>
</ref>
<ref id="B34">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jeffries</surname>
<given-names>K. J.</given-names>
</name>
<name>
<surname>Fritz</surname>
<given-names>J. B.</given-names>
</name>
<name>
<surname>Braun</surname>
<given-names>A. R.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Words in melody: an H2(15)O PET study of brain activation during singing and speaking</article-title>
.
<source>Neuroreport</source>
<volume>14</volume>
,
<fpage>749</fpage>
<lpage>754</lpage>
<pub-id pub-id-type="doi">10.1097/00001756-200304150-00018</pub-id>
<pub-id pub-id-type="pmid">12692476</pub-id>
</mixed-citation>
</ref>
<ref id="B35">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kahnt</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Grueschow</surname>
<given-names>O.</given-names>
</name>
<name>
<surname>Speck</surname>
<given-names>O.</given-names>
</name>
<name>
<surname>Haynes</surname>
<given-names>J. D.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Perceptual learning and decision-making in human medial frontal cortex</article-title>
.
<source>Neuron</source>
<volume>70</volume>
,
<fpage>549</fpage>
<lpage>559</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuron.2011.02.054</pub-id>
<pub-id pub-id-type="pmid">21555079</pub-id>
</mixed-citation>
</ref>
<ref id="B36">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kleber</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Birbaumer</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Veit</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Trevorrow</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Lotze</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Overt and imagined singing of an Italian aria</article-title>
.
<source>Neuroimage</source>
<volume>36</volume>
,
<fpage>889</fpage>
<lpage>900</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuroimage.2007.02.053</pub-id>
<pub-id pub-id-type="pmid">17478107</pub-id>
</mixed-citation>
</ref>
<ref id="B37">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kleber</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Veit</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Birbaumer</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Gruzelier</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Lotze</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>The brain of opera singers: experience-dependent changes in functional activation</article-title>
.
<source>Cereb. Cortex</source>
<volume>20</volume>
,
<fpage>1144</fpage>
<lpage>1152</lpage>
<pub-id pub-id-type="doi">10.1093/cercor/bhp177</pub-id>
<pub-id pub-id-type="pmid">19692631</pub-id>
</mixed-citation>
</ref>
<ref id="B38">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Klein</surname>
<given-names>J. C.</given-names>
</name>
<name>
<surname>Zatorre</surname>
<given-names>R.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>A role for the right superior temporal sulcus in categorical perception of musical chords</article-title>
.
<source>Neuropsychologia</source>
<volume>49</volume>
,
<fpage>878</fpage>
<lpage>887</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2011.01.008</pub-id>
<pub-id pub-id-type="pmid">21236276</pub-id>
</mixed-citation>
</ref>
<ref id="B39">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Klein</surname>
<given-names>J. C.</given-names>
</name>
<name>
<surname>Zatorre</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Milner</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Zhao</surname>
<given-names>V.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>A cross-linguistic PET study of tone perception in Mandarin Chinese and English speakers</article-title>
.
<source>Neuroimage</source>
<volume>13</volume>
,
<fpage>646</fpage>
<lpage>653</lpage>
<pub-id pub-id-type="doi">10.1016/S1053-8119(01)91895-6</pub-id>
<pub-id pub-id-type="pmid">11305893</pub-id>
</mixed-citation>
</ref>
<ref id="B40">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Koelsch</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Siebel</surname>
<given-names>W. A.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Towards a neural basis of music perception</article-title>
.
<source>Trends Cogn. Sci. (Regul. Ed.)</source>
<volume>9</volume>
,
<fpage>578</fpage>
<lpage>584</lpage>
<pub-id pub-id-type="doi">10.1016/j.tics.2005.10.001</pub-id>
<pub-id pub-id-type="pmid">16271503</pub-id>
</mixed-citation>
</ref>
<ref id="B41">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kotz</surname>
<given-names>S. A.</given-names>
</name>
<name>
<surname>Meyer</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Alter</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Besson</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>von Cramon</surname>
<given-names>D. Y.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>On the lateralization of emotional prosody: an event-related functional MR investigation</article-title>
.
<source>Brain Lang.</source>
<volume>86</volume>
,
<fpage>366</fpage>
<lpage>376</lpage>
<pub-id pub-id-type="doi">10.1016/S0093-934X(02)00532-1</pub-id>
<pub-id pub-id-type="pmid">12972367</pub-id>
</mixed-citation>
</ref>
<ref id="B42">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kriegeskorte</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Goebel</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Bandettini</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Information-based functional brain mapping</article-title>
.
<source>Proc. Natl. Acad. Sci. U.S.A.</source>
<volume>103</volume>
,
<fpage>3863</fpage>
<lpage>3868</lpage>
<pub-id pub-id-type="doi">10.1073/pnas.0600244103</pub-id>
<pub-id pub-id-type="pmid">16537458</pub-id>
</mixed-citation>
</ref>
<ref id="B43">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Lerdahl</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Jackendoff</surname>
<given-names>R.</given-names>
</name>
</person-group>
(
<year>1983</year>
).
<source>A Generative Theory of Tonal Music</source>
.
<publisher-loc>Cambridge, MA</publisher-loc>
:
<publisher-name>MIT Press</publisher-name>
</mixed-citation>
</ref>
<ref id="B44">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Meyer</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Alter</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Friederici</surname>
<given-names>A. D.</given-names>
</name>
<name>
<surname>Lohmann</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>von Cramon</surname>
<given-names>D. Y.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>FMRI reveals brain regions mediating slow prosodic modulations in spoken sentences</article-title>
.
<source>Hum. Brain Mapp.</source>
<volume>17</volume>
,
<fpage>73</fpage>
<lpage>88</lpage>
<pub-id pub-id-type="doi">10.1002/hbm.10042</pub-id>
<pub-id pub-id-type="pmid">12353242</pub-id>
</mixed-citation>
</ref>
<ref id="B45">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Meyer</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Steinhauer</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Alter</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Friederici</surname>
<given-names>A. D.</given-names>
</name>
<name>
<surname>von Cramon</surname>
<given-names>D. Y.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Brain activity varies with modulation of dynamic pitch variance in sentence melody</article-title>
.
<source>Brain Lang.</source>
<volume>89</volume>
,
<fpage>277</fpage>
<lpage>289</lpage>
<pub-id pub-id-type="doi">10.1016/S0093-934X(03)00350-X</pub-id>
<pub-id pub-id-type="pmid">15068910</pub-id>
</mixed-citation>
</ref>
<ref id="B46">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nichols</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Brett</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Andersson</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Wager</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Poline</surname>
<given-names>J.-B.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Valid conjunction inference with the minimum statistic</article-title>
.
<source>Neuroimage</source>
<volume>25</volume>
,
<fpage>653</fpage>
<lpage>660</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuroimage.2004.12.005</pub-id>
<pub-id pub-id-type="pmid">15808966</pub-id>
</mixed-citation>
</ref>
<ref id="B47">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Norman</surname>
<given-names>K. A.</given-names>
</name>
<name>
<surname>Polyn</surname>
<given-names>S. M.</given-names>
</name>
<name>
<surname>Detre</surname>
<given-names>G. J.</given-names>
</name>
<name>
<surname>Haxby</surname>
<given-names>J. V.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Beyond mind-reading: multi-voxel pattern analysis of fMRI data</article-title>
.
<source>Trends Cogn. Sci. (Regul. Ed.)</source>
<volume>10</volume>
,
<fpage>424</fpage>
<lpage>430</lpage>
<pub-id pub-id-type="doi">10.1016/j.tics.2006.07.005</pub-id>
<pub-id pub-id-type="pmid">16899397</pub-id>
</mixed-citation>
</ref>
<ref id="B48">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Obleser</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Eisner</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Kotz</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Bilateral speech comprehension reflects differential sensitivity to spectral and temporal features</article-title>
.
<source>J. Neurosci.</source>
<volume>28</volume>
,
<fpage>8116</fpage>
<lpage>8124</lpage>
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.1290-08.2008</pub-id>
<pub-id pub-id-type="pmid">18685036</pub-id>
</mixed-citation>
</ref>
<ref id="B49">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Okada</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Rong</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Venezia</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Matchin</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Hsieh</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Saberi</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Serences</surname>
<given-names>J. T.</given-names>
</name>
<name>
<surname>Hickok</surname>
<given-names>G.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Hierarchical organization of human auditory cortex: evidence from acoustic invariance in the response to intelligible speech</article-title>
.
<source>Cereb. Cortex</source>
<volume>20</volume>
,
<fpage>2486</fpage>
<lpage>2495</lpage>
<pub-id pub-id-type="doi">10.1093/cercor/bhp318</pub-id>
<pub-id pub-id-type="pmid">20100898</pub-id>
</mixed-citation>
</ref>
<ref id="B50">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Özdemir</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Norton</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Schlaug</surname>
<given-names>G.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Shared and distinct neural correlates of singing and speaking</article-title>
.
<source>Neuroimage</source>
<volume>33</volume>
,
<fpage>628</fpage>
<lpage>635</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuroimage.2006.07.013</pub-id>
<pub-id pub-id-type="pmid">16956772</pub-id>
</mixed-citation>
</ref>
<ref id="B51">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Patel</surname>
<given-names>A. D.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<source>Music, Language, and the Brain</source>
.
<publisher-loc>New York</publisher-loc>
:
<publisher-name>Oxford University Press</publisher-name>
</mixed-citation>
</ref>
<ref id="B52">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Patterson</surname>
<given-names>R. D.</given-names>
</name>
<name>
<surname>Uppenkamp</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Johnsrude</surname>
<given-names>I. S.</given-names>
</name>
<name>
<surname>Griffiths</surname>
<given-names>T. D.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>The processing of temporal pitch and melody information in auditory cortex</article-title>
.
<source>Neuron</source>
<volume>36</volume>
,
<fpage>767</fpage>
<lpage>776</lpage>
<pub-id pub-id-type="doi">10.1016/S0896-6273(02)01060-7</pub-id>
<pub-id pub-id-type="pmid">12441063</pub-id>
</mixed-citation>
</ref>
<ref id="B53">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Perry</surname>
<given-names>D. W.</given-names>
</name>
<name>
<surname>Zatorre</surname>
<given-names>R. J.</given-names>
</name>
<name>
<surname>Petrides</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Alivisatos</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Meyer</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Evans</surname>
<given-names>A. C.</given-names>
</name>
</person-group>
(
<year>1999</year>
).
<article-title>Localization of cerebral activity during simple singing</article-title>
.
<source>Neuroreport</source>
<volume>10</volume>
,
<fpage>3979</fpage>
<lpage>3984</lpage>
<pub-id pub-id-type="doi">10.1097/00001756-199908020-00035</pub-id>
<pub-id pub-id-type="pmid">10716244</pub-id>
</mixed-citation>
</ref>
<ref id="B54">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Petacchi</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Laird</surname>
<given-names>A. R.</given-names>
</name>
<name>
<surname>Fox</surname>
<given-names>P. T.</given-names>
</name>
<name>
<surname>Bower</surname>
<given-names>J. M.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Cerebellum and auditory function: an ALE meta-analysis of functional neuroimaging studies</article-title>
.
<source>Hum. Brain Mapp.</source>
<volume>25</volume>
,
<fpage>118</fpage>
<lpage>128</lpage>
<pub-id pub-id-type="doi">10.1002/hbm.20137</pub-id>
<pub-id pub-id-type="pmid">15846816</pub-id>
</mixed-citation>
</ref>
<ref id="B55">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Plante</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Creusere</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Sabin</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>Dissociating sentential prosody from sentence processing: activation interacts with task demands</article-title>
.
<source>Neuroimage</source>
<volume>17</volume>
,
<fpage>401</fpage>
<lpage>410</lpage>
<pub-id pub-id-type="doi">10.1006/nimg.2002.1182</pub-id>
<pub-id pub-id-type="pmid">12482093</pub-id>
</mixed-citation>
</ref>
<ref id="B56">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Poeppel</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Guillemin</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Thompson</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Fritz</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Bavelier</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Braun</surname>
<given-names>A. R.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Auditory lexical decision, categorical perception, and FM direction discrimination differentially engage left and right auditory cortex</article-title>
.
<source>Neuropsychologia</source>
<volume>42</volume>
,
<fpage>183</fpage>
<lpage>200</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2003.07.010</pub-id>
<pub-id pub-id-type="pmid">14644105</pub-id>
</mixed-citation>
</ref>
<ref id="B57">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Riecker</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Wildgruber</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Dogil</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Grodd</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Ackermann</surname>
<given-names>H.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>Hemispheric lateralization effects of rhythm implementation during syllable repetitions: an fMRI study</article-title>
.
<source>Neuroimage</source>
<volume>16</volume>
,
<fpage>169</fpage>
<lpage>176</lpage>
<pub-id pub-id-type="doi">10.1006/nimg.2002.1068</pub-id>
<pub-id pub-id-type="pmid">11969327</pub-id>
</mixed-citation>
</ref>
<ref id="B58">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Saito</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Ishii</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Yagi</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Tatsumi</surname>
<given-names>I. F.</given-names>
</name>
<name>
<surname>Mizusawa</surname>
<given-names>H.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Cerebral networks for spontaneous and synchronized singing and speaking</article-title>
.
<source>Neuroreport</source>
<volume>17</volume>
,
<fpage>1893</fpage>
<lpage>1897</lpage>
<pub-id pub-id-type="doi">10.1097/WNR.0b013e328011519c</pub-id>
<pub-id pub-id-type="pmid">17179865</pub-id>
</mixed-citation>
</ref>
<ref id="B59">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sammler</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Baird</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Valabrègue</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Clement</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Dupont</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Belin</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Samson</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>The relationship of lyrics and tunes in the processing of unfamiliar songs: a functional magnetic resonance adaptation study</article-title>
.
<source>J. Neurosci.</source>
<volume>30</volume>
,
<fpage>3572</fpage>
<lpage>3578</lpage>
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.2751-09.2010</pub-id>
<pub-id pub-id-type="pmid">20219991</pub-id>
</mixed-citation>
</ref>
<ref id="B60">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Samson</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Zeffiro</surname>
<given-names>T. A.</given-names>
</name>
<name>
<surname>Toussaint</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Belin</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Stimulus complexity and categorical effects in human auditory cortex: an activation likelihood estimation meta-analysis</article-title>
.
<source>Front. Psychol.</source>
<volume>1</volume>
:
<fpage>241</fpage>
<pub-id pub-id-type="doi">10.3389/fpsyg.2010.00241</pub-id>
<pub-id pub-id-type="pmid">21833294</pub-id>
</mixed-citation>
</ref>
<ref id="B61">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Schmahmann</surname>
<given-names>J. D.</given-names>
</name>
<name>
<surname>Doyon</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Toga</surname>
<given-names>A. W.</given-names>
</name>
<name>
<surname>Petrides</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Evans</surname>
<given-names>A. C.</given-names>
</name>
</person-group>
(
<year>2000</year>
).
<source>MRI Atlas of the Human Cerebellum</source>
.
<publisher-loc>San Diego, CA</publisher-loc>
:
<publisher-name>Academic Press</publisher-name>
</mixed-citation>
</ref>
<ref id="B62">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Schmithorst</surname>
<given-names>V. J.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Separate cortical networks involved in music perception: preliminary functional MRI evidence for modularity of music processing</article-title>
.
<source>Neuroimage</source>
<volume>25</volume>
,
<fpage>444</fpage>
<lpage>451</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuroimage.2004.12.006</pub-id>
<pub-id pub-id-type="pmid">15784423</pub-id>
</mixed-citation>
</ref>
<ref id="B63">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Schön</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Gordon</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Campagne</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Magne</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Astésano</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Anton</surname>
<given-names>J.-L.</given-names>
</name>
<name>
<surname>Besson</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Similar cerebral networks in language, music and song perception</article-title>
.
<source>Neuroimage</source>
<volume>51</volume>
,
<fpage>450</fpage>
<lpage>461</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuroimage.2010.02.023</pub-id>
<pub-id pub-id-type="pmid">20156575</pub-id>
</mixed-citation>
</ref>
<ref id="B64">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Schönwiesner</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Zatorre</surname>
<given-names>R. J.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Depth electrode recordings show double dissociation between pitch processing in lateral Heschl’s gyrus and sound onset processing in medial Heschl’s gyrus</article-title>
.
<source>Exp. Brain Res.</source>
<volume>187</volume>
,
<fpage>97</fpage>
<lpage>105</lpage>
<pub-id pub-id-type="doi">10.1007/s00221-008-1286-z</pub-id>
<pub-id pub-id-type="pmid">18236034</pub-id>
</mixed-citation>
</ref>
<ref id="B65">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Seidner</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Wendler</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>1978</year>
).
<source>Die Sängerstimme</source>
.
<publisher-loc>Berlin</publisher-loc>
:
<publisher-name>Henschel Verlag</publisher-name>
</mixed-citation>
</ref>
<ref id="B66">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Stoodley</surname>
<given-names>C. J.</given-names>
</name>
<name>
<surname>Schmahmann</surname>
<given-names>J. D.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Functional topography in the human cerebellum: a meta-analysis of neuroimaging studies</article-title>
.
<source>Neuroimage</source>
<volume>44</volume>
,
<fpage>489</fpage>
<lpage>501</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuroimage.2008.08.039</pub-id>
<pub-id pub-id-type="pmid">18835452</pub-id>
</mixed-citation>
</ref>
<ref id="B67">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sundberg</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>1970</year>
).
<article-title>Formant structure and articulation of spoken and sung vowels</article-title>
.
<source>Folia Phoniatr. (Basel)</source>
<volume>22</volume>
,
<fpage>28</fpage>
<lpage>48</lpage>
<pub-id pub-id-type="doi">10.1159/000263365</pub-id>
<pub-id pub-id-type="pmid">5430062</pub-id>
</mixed-citation>
</ref>
<ref id="B68">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tillmann</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Koelsch</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Escoffier</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Bigand</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Lalitte</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Friederici</surname>
<given-names>A. D.</given-names>
</name>
<name>
<surname>von Cramon</surname>
<given-names>D. Y.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Cognitive priming in sung and instrumental music: activation of inferior frontal cortex</article-title>
.
<source>Neuroimage</source>
<volume>31</volume>
,
<fpage>1771</fpage>
<lpage>1782</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuroimage.2006.02.028</pub-id>
<pub-id pub-id-type="pmid">16624581</pub-id>
</mixed-citation>
</ref>
<ref id="B69">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tusche</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Bode</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Haynes</surname>
<given-names>J. D.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Neural responses to unattended products predict later consumer choices</article-title>
.
<source>J. Neurosci.</source>
<volume>30</volume>
,
<fpage>8024</fpage>
<lpage>8031</lpage>
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.0064-10.2010</pub-id>
<pub-id pub-id-type="pmid">20534850</pub-id>
</mixed-citation>
</ref>
<ref id="B70">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Warren</surname>
<given-names>J. D.</given-names>
</name>
<name>
<surname>Jennings</surname>
<given-names>A. R.</given-names>
</name>
<name>
<surname>Griffiths</surname>
<given-names>T. D.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Analysis of the spectral envelope of sounds by the human brain</article-title>
.
<source>Neuroimage</source>
<volume>24</volume>
,
<fpage>1052</fpage>
<lpage>1057</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuroimage.2004.10.031</pub-id>
<pub-id pub-id-type="pmid">15670682</pub-id>
</mixed-citation>
</ref>
<ref id="B71">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Westbury</surname>
<given-names>C. F.</given-names>
</name>
<name>
<surname>Zatorre</surname>
<given-names>R. J.</given-names>
</name>
<name>
<surname>Evans</surname>
<given-names>A. C.</given-names>
</name>
</person-group>
(
<year>1999</year>
).
<article-title>Quantitative variability in the planum temporale: a probability map</article-title>
.
<source>Cereb. Cortex</source>
<volume>9</volume>
,
<fpage>392</fpage>
<lpage>405</lpage>
<pub-id pub-id-type="doi">10.1093/cercor/9.4.392</pub-id>
<pub-id pub-id-type="pmid">10426418</pub-id>
</mixed-citation>
</ref>
<ref id="B72">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wildgruber</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Ackermann</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Klose</surname>
<given-names>U.</given-names>
</name>
<name>
<surname>Kardatzki</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Grodd</surname>
<given-names>W.</given-names>
</name>
</person-group>
(
<year>1996</year>
).
<article-title>Functional lateralization of speech production at primary motor cortex: a fMRI study</article-title>
.
<source>Neuroreport</source>
<volume>7</volume>
,
<fpage>2791</fpage>
<lpage>2795</lpage>
<pub-id pub-id-type="doi">10.1097/00001756-199611040-00077</pub-id>
<pub-id pub-id-type="pmid">8981469</pub-id>
</mixed-citation>
</ref>
<ref id="B73">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wildgruber</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Pihan</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Ackermann</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Erb</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Grodd</surname>
<given-names>W.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>Dynamic brain activation during processing of emotional intonation: influence of acoustic parameters, emotional valence, and sex</article-title>
.
<source>Neuroimage</source>
<volume>15</volume>
,
<fpage>856</fpage>
<lpage>869</lpage>
<pub-id pub-id-type="doi">10.1006/nimg.2001.0998</pub-id>
<pub-id pub-id-type="pmid">11906226</pub-id>
</mixed-citation>
</ref>
<ref id="B74">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wilson</surname>
<given-names>S. M.</given-names>
</name>
<name>
<surname>Saygin</surname>
<given-names>A. P.</given-names>
</name>
<name>
<surname>Sereno</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Iacoboni</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Listening to speech activates motor areas involved in speech production</article-title>
.
<source>Nat. Neurosci.</source>
<volume>7</volume>
,
<fpage>701</fpage>
<lpage>702</lpage>
</mixed-citation>
</ref>
<ref id="B75">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zatorre</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Evans</surname>
<given-names>A. C.</given-names>
</name>
<name>
<surname>Meyer</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Gjedde</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>1992</year>
).
<article-title>Lateralization of phonetic and pitch discrimination in speech processing</article-title>
.
<source>Science</source>
<volume>256</volume>
,
<fpage>846</fpage>
<lpage>849</lpage>
<pub-id pub-id-type="doi">10.1126/science.1589767</pub-id>
<pub-id pub-id-type="pmid">1589767</pub-id>
</mixed-citation>
</ref>
<ref id="B76">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zatorre</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Halpern</surname>
<given-names>A. R.</given-names>
</name>
<name>
<surname>Bouffard</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Mental reversal of imagined melodies: a role for the posterior parietal cortex</article-title>
.
<source>J. Cogn. Neurosci.</source>
<volume>22</volume>
,
<fpage>775</fpage>
<lpage>789</lpage>
<pub-id pub-id-type="doi">10.1162/jocn.2009.21239</pub-id>
<pub-id pub-id-type="pmid">19366283</pub-id>
</mixed-citation>
</ref>
<ref id="B77">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zatorre</surname>
<given-names>R. J.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>Neural specializations for tonal processing</article-title>
.
<source>Ann. N. Y. Acad. Sci.</source>
<volume>930</volume>
,
<fpage>193</fpage>
<lpage>210</lpage>
<pub-id pub-id-type="doi">10.1111/j.1749-6632.2001.tb05734.x</pub-id>
<pub-id pub-id-type="pmid">11458830</pub-id>
</mixed-citation>
</ref>
<ref id="B78">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zatorre</surname>
<given-names>R. J.</given-names>
</name>
<name>
<surname>Belin</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>Spectral and temporal processing in human auditory cortex</article-title>
.
<source>Cereb. Cortex</source>
<volume>11</volume>
,
<fpage>946</fpage>
<lpage>953</lpage>
<pub-id pub-id-type="doi">10.1093/cercor/11.10.946</pub-id>
<pub-id pub-id-type="pmid">11549617</pub-id>
</mixed-citation>
</ref>
<ref id="B79">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zatorre</surname>
<given-names>R. J.</given-names>
</name>
<name>
<surname>Belin</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Penhune</surname>
<given-names>V. B.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>Structure and function of auditory cortex: music and speech</article-title>
.
<source>Trends Cogn. Sci. (Regul. Ed.)</source>
<volume>6</volume>
,
<fpage>37</fpage>
<lpage>46</lpage>
<pub-id pub-id-type="doi">10.1016/S1364-6613(00)01816-7</pub-id>
<pub-id pub-id-type="pmid">11849614</pub-id>
</mixed-citation>
</ref>
<ref id="B80">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zatorre</surname>
<given-names>R. J.</given-names>
</name>
<name>
<surname>Evans</surname>
<given-names>A. C.</given-names>
</name>
<name>
<surname>Meyer</surname>
<given-names>E.</given-names>
</name>
</person-group>
(
<year>1994</year>
).
<article-title>Neural mechanisms underlying melodic perception and memory for pitch</article-title>
.
<source>J. Neurosci.</source>
<volume>14</volume>
,
<fpage>1908</fpage>
<lpage>1919</lpage>
<pub-id pub-id-type="pmid">8158246</pub-id>
</mixed-citation>
</ref>
</ref-list>
</back>
</pmc>
<affiliations>
<list>
<country>
<li>Allemagne</li>
</country>
</list>
<tree>
<country name="Allemagne">
<noRegion>
<name sortKey="Merrill, Julia" sort="Merrill, Julia" uniqKey="Merrill J" first="Julia" last="Merrill">Julia Merrill</name>
</noRegion>
<name sortKey="Bangert, Marc" sort="Bangert, Marc" uniqKey="Bangert M" first="Marc" last="Bangert">Marc Bangert</name>
<name sortKey="Friederici, Angela D" sort="Friederici, Angela D" uniqKey="Friederici A" first="Angela D." last="Friederici">Angela D. Friederici</name>
<name sortKey="Goldhahn, Dirk" sort="Goldhahn, Dirk" uniqKey="Goldhahn D" first="Dirk" last="Goldhahn">Dirk Goldhahn</name>
<name sortKey="Lohmann, Gabriele" sort="Lohmann, Gabriele" uniqKey="Lohmann G" first="Gabriele" last="Lohmann">Gabriele Lohmann</name>
<name sortKey="Sammler, Daniela" sort="Sammler, Daniela" uniqKey="Sammler D" first="Daniela" last="Sammler">Daniela Sammler</name>
<name sortKey="Turner, Robert" sort="Turner, Robert" uniqKey="Turner R" first="Robert" last="Turner">Robert Turner</name>
</country>
</tree>
</affiliations>
</record>

To work with this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Wicri/Musique/explor/OperaV1/Data/Pmc/Checkpoint
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000220 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd -nk 000220 | SxmlIndent | more

To link to this page within the Wicri network

{{Explor lien
   |wiki=    Wicri/Musique
   |area=    OperaV1
   |flux=    Pmc
   |étape=   Checkpoint
   |type=    RBID
   |clé=     PMC:3307374
   |texte=   Perception of Words and Pitch Patterns in Song and Speech
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/RBID.i   -Sk "pubmed:22457659" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd   \
       | NlmPubMed2Wicri -a OperaV1 

Wicri

This area was generated with Dilib version V0.6.21.
Data generation: Thu Apr 14 14:59:05 2016. Site generation: Thu Jan 4 23:09:23 2024