Opera Exploration Server

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information has therefore not been validated.

Song Perception by Professional Singers and Actors: An MEG Study

Internal identifier: 000003 (Pmc/Checkpoint); previous: 000002; next: 000004

Authors: Ken Rosslau [Germany]; Sibylle C. Herholz [Germany]; Arne Knief [Germany]; Magdalene Ortmann [Germany]; Dirk Deuster [Germany]; Claus-Michael Schmidt [Germany]; Antoinette am Zehnhoff-Dinnesen [Germany]; Christo Pantev [Germany]; Christian Dobel [Germany]

Source:

RBID : PMC:4749173

Abstract

The cortical correlates of speech and music perception largely overlap, and the specific effects of different types of training on these networks remain unknown. We compared two groups of vocally trained professionals for music and speech, singers and actors, using recited and sung rhyme sequences from German art songs with semantic and/or prosodic/melodic violations (i.e., pitch violations) of the last word, in order to measure the evoked activation in a magnetoencephalographic (MEG) experiment. The MEG data confirmed the existence of intertwined networks for the sung and spoken modalities in an early time window after the word violation. For this early response, higher activity was measured after melodic/prosodic than after semantic violations in predominantly right temporal areas. For singers as well as actors, modality-specific effects were evident as predominantly left-lateralized temporal activity after semantic expectancy violations in the spoken modality, and right-dominant temporal activity in response to melodic violations in the sung modality. As an indication of a group-specific audiation process, higher neuronal activity for singers appeared in a late time window in right temporal and left parietal areas, after both the recited and the sung sequences.


Url:
DOI: 10.1371/journal.pone.0147986
PubMed: 26863437
PubMed Central: 4749173


Affiliations:


Links to previous steps (curation, corpus...)


Links to Exploration step

PMC:4749173

The document in XML format
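The record below follows the TEI header layout (`record > TEI > teiHeader > fileDesc > titleStmt`). As a minimal sketch of how such a record can be queried, assuming a simplified sample with the `wicri:`/`nlm:` namespace prefixes omitted (the full record declares them elsewhere in the corpus pipeline), the title and author names can be extracted with Python's standard library:

```python
import xml.etree.ElementTree as ET

# Simplified, hypothetical stand-in for the TEI record below
# (namespace-prefixed elements omitted for brevity).
sample = """
<record>
  <TEI>
    <teiHeader>
      <fileDesc>
        <titleStmt>
          <title>Song Perception by Professional Singers and Actors: An MEG Study</title>
          <author><name first="Ken" last="Rosslau">Ken Rosslau</name></author>
          <author><name first="Christo" last="Pantev">Christo Pantev</name></author>
        </titleStmt>
      </fileDesc>
    </teiHeader>
  </TEI>
</record>
"""

root = ET.fromstring(sample)
# findtext/findall take XPath-like paths; ".//" searches at any depth.
title = root.findtext(".//titleStmt/title")
authors = [n.text for n in root.findall(".//titleStmt/author/name")]
print(title)
print(authors)
```

A real record would first need its `wicri:` and `nlm:` prefixes bound to namespace URIs (or stripped) before `ET.fromstring` would accept it.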

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Song Perception by Professional Singers and Actors: An MEG Study</title>
<author>
<name sortKey="Rosslau, Ken" sort="Rosslau, Ken" uniqKey="Rosslau K" first="Ken" last="Rosslau">Ken Rosslau</name>
<affiliation wicri:level="1">
<nlm:aff id="aff001">
<addr-line>Department of Phoniatrics and Pedaudiology, University Hospital Muenster, Muenster, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Department of Phoniatrics and Pedaudiology, University Hospital Muenster, Muenster</wicri:regionArea>
<wicri:noRegion>Muenster</wicri:noRegion>
<wicri:noRegion>Muenster</wicri:noRegion>
<wicri:noRegion>Muenster</wicri:noRegion>
</affiliation>
</author>
<author>
<name sortKey="Herholz, Sibylle C" sort="Herholz, Sibylle C" uniqKey="Herholz S" first="Sibylle C." last="Herholz">Sibylle C. Herholz</name>
<affiliation wicri:level="1">
<nlm:aff id="aff002">
<addr-line>Institute for Biomagnetism and Biosignalanalysis, University of Muenster, Muenster, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Institute for Biomagnetism and Biosignalanalysis, University of Muenster, Muenster</wicri:regionArea>
<wicri:noRegion>Muenster</wicri:noRegion>
<wicri:noRegion>Muenster</wicri:noRegion>
<wicri:noRegion>Muenster</wicri:noRegion>
</affiliation>
<affiliation wicri:level="3">
<nlm:aff id="aff003">
<addr-line>German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>German Center for Neurodegenerative Diseases (DZNE), Bonn</wicri:regionArea>
<placeName>
<region type="land" nuts="1">Rhénanie-du-Nord-Westphalie</region>
<region type="district" nuts="2">District de Cologne</region>
<settlement type="city">Bonn</settlement>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Knief, Arne" sort="Knief, Arne" uniqKey="Knief A" first="Arne" last="Knief">Arne Knief</name>
<affiliation wicri:level="1">
<nlm:aff id="aff001">
<addr-line>Department of Phoniatrics and Pedaudiology, University Hospital Muenster, Muenster, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Department of Phoniatrics and Pedaudiology, University Hospital Muenster, Muenster</wicri:regionArea>
<wicri:noRegion>Muenster</wicri:noRegion>
<wicri:noRegion>Muenster</wicri:noRegion>
<wicri:noRegion>Muenster</wicri:noRegion>
</affiliation>
</author>
<author>
<name sortKey="Ortmann, Magdalene" sort="Ortmann, Magdalene" uniqKey="Ortmann M" first="Magdalene" last="Ortmann">Magdalene Ortmann</name>
<affiliation wicri:level="1">
<nlm:aff id="aff002">
<addr-line>Institute for Biomagnetism and Biosignalanalysis, University of Muenster, Muenster, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Institute for Biomagnetism and Biosignalanalysis, University of Muenster, Muenster</wicri:regionArea>
<wicri:noRegion>Muenster</wicri:noRegion>
<wicri:noRegion>Muenster</wicri:noRegion>
<wicri:noRegion>Muenster</wicri:noRegion>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff004">
<addr-line>Jean-Uhrmacher-Institute for Clinical ENT-Research, University Hospital Cologne, Cologne, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Jean-Uhrmacher-Institute for Clinical ENT-Research, University Hospital Cologne, Cologne</wicri:regionArea>
<wicri:noRegion>Cologne</wicri:noRegion>
<wicri:noRegion>Cologne</wicri:noRegion>
<wicri:noRegion>Cologne</wicri:noRegion>
</affiliation>
</author>
<author>
<name sortKey="Deuster, Dirk" sort="Deuster, Dirk" uniqKey="Deuster D" first="Dirk" last="Deuster">Dirk Deuster</name>
<affiliation wicri:level="1">
<nlm:aff id="aff001">
<addr-line>Department of Phoniatrics and Pedaudiology, University Hospital Muenster, Muenster, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Department of Phoniatrics and Pedaudiology, University Hospital Muenster, Muenster</wicri:regionArea>
<wicri:noRegion>Muenster</wicri:noRegion>
<wicri:noRegion>Muenster</wicri:noRegion>
<wicri:noRegion>Muenster</wicri:noRegion>
</affiliation>
</author>
<author>
<name sortKey="Schmidt, Claus Michael" sort="Schmidt, Claus Michael" uniqKey="Schmidt C" first="Claus-Michael" last="Schmidt">Claus-Michael Schmidt</name>
<affiliation wicri:level="1">
<nlm:aff id="aff001">
<addr-line>Department of Phoniatrics and Pedaudiology, University Hospital Muenster, Muenster, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Department of Phoniatrics and Pedaudiology, University Hospital Muenster, Muenster</wicri:regionArea>
<wicri:noRegion>Muenster</wicri:noRegion>
<wicri:noRegion>Muenster</wicri:noRegion>
<wicri:noRegion>Muenster</wicri:noRegion>
</affiliation>
</author>
<author>
<name sortKey="Zehnhoff Dinnesen, Antoinetteam" sort="Zehnhoff Dinnesen, Antoinetteam" uniqKey="Zehnhoff Dinnesen A" first="Antoinetteam" last="Zehnhoff-Dinnesen">Antoinetteam Zehnhoff-Dinnesen</name>
<affiliation wicri:level="1">
<nlm:aff id="aff001">
<addr-line>Department of Phoniatrics and Pedaudiology, University Hospital Muenster, Muenster, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Department of Phoniatrics and Pedaudiology, University Hospital Muenster, Muenster</wicri:regionArea>
<wicri:noRegion>Muenster</wicri:noRegion>
<wicri:noRegion>Muenster</wicri:noRegion>
<wicri:noRegion>Muenster</wicri:noRegion>
</affiliation>
</author>
<author>
<name sortKey="Pantev, Christo" sort="Pantev, Christo" uniqKey="Pantev C" first="Christo" last="Pantev">Christo Pantev</name>
<affiliation wicri:level="1">
<nlm:aff id="aff002">
<addr-line>Institute for Biomagnetism and Biosignalanalysis, University of Muenster, Muenster, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Institute for Biomagnetism and Biosignalanalysis, University of Muenster, Muenster</wicri:regionArea>
<wicri:noRegion>Muenster</wicri:noRegion>
<wicri:noRegion>Muenster</wicri:noRegion>
<wicri:noRegion>Muenster</wicri:noRegion>
</affiliation>
</author>
<author>
<name sortKey="Dobel, Christian" sort="Dobel, Christian" uniqKey="Dobel C" first="Christian" last="Dobel">Christian Dobel</name>
<affiliation wicri:level="1">
<nlm:aff id="aff002">
<addr-line>Institute for Biomagnetism and Biosignalanalysis, University of Muenster, Muenster, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Institute for Biomagnetism and Biosignalanalysis, University of Muenster, Muenster</wicri:regionArea>
<wicri:noRegion>Muenster</wicri:noRegion>
<wicri:noRegion>Muenster</wicri:noRegion>
<wicri:noRegion>Muenster</wicri:noRegion>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff005">
<addr-line>Department of Otorhinolaryngology, Friedrich-Schiller University Jena, Jena, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Department of Otorhinolaryngology, Friedrich-Schiller University Jena, Jena</wicri:regionArea>
<wicri:noRegion>Jena</wicri:noRegion>
<wicri:noRegion>Jena</wicri:noRegion>
<wicri:noRegion>Jena</wicri:noRegion>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">26863437</idno>
<idno type="pmc">4749173</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4749173</idno>
<idno type="RBID">PMC:4749173</idno>
<idno type="doi">10.1371/journal.pone.0147986</idno>
<date when="2016">2016</date>
<idno type="wicri:Area/Pmc/Corpus">000060</idno>
<idno type="wicri:Area/Pmc/Curation">000060</idno>
<idno type="wicri:Area/Pmc/Checkpoint">000003</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Song Perception by Professional Singers and Actors: An MEG Study</title>
<author>
<name sortKey="Rosslau, Ken" sort="Rosslau, Ken" uniqKey="Rosslau K" first="Ken" last="Rosslau">Ken Rosslau</name>
<affiliation wicri:level="1">
<nlm:aff id="aff001">
<addr-line>Department of Phoniatrics and Pedaudiology, University Hospital Muenster, Muenster, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Department of Phoniatrics and Pedaudiology, University Hospital Muenster, Muenster</wicri:regionArea>
<wicri:noRegion>Muenster</wicri:noRegion>
<wicri:noRegion>Muenster</wicri:noRegion>
<wicri:noRegion>Muenster</wicri:noRegion>
</affiliation>
</author>
<author>
<name sortKey="Herholz, Sibylle C" sort="Herholz, Sibylle C" uniqKey="Herholz S" first="Sibylle C." last="Herholz">Sibylle C. Herholz</name>
<affiliation wicri:level="1">
<nlm:aff id="aff002">
<addr-line>Institute for Biomagnetism and Biosignalanalysis, University of Muenster, Muenster, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Institute for Biomagnetism and Biosignalanalysis, University of Muenster, Muenster</wicri:regionArea>
<wicri:noRegion>Muenster</wicri:noRegion>
<wicri:noRegion>Muenster</wicri:noRegion>
<wicri:noRegion>Muenster</wicri:noRegion>
</affiliation>
<affiliation wicri:level="3">
<nlm:aff id="aff003">
<addr-line>German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>German Center for Neurodegenerative Diseases (DZNE), Bonn</wicri:regionArea>
<placeName>
<region type="land" nuts="1">Rhénanie-du-Nord-Westphalie</region>
<region type="district" nuts="2">District de Cologne</region>
<settlement type="city">Bonn</settlement>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Knief, Arne" sort="Knief, Arne" uniqKey="Knief A" first="Arne" last="Knief">Arne Knief</name>
<affiliation wicri:level="1">
<nlm:aff id="aff001">
<addr-line>Department of Phoniatrics and Pedaudiology, University Hospital Muenster, Muenster, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Department of Phoniatrics and Pedaudiology, University Hospital Muenster, Muenster</wicri:regionArea>
<wicri:noRegion>Muenster</wicri:noRegion>
<wicri:noRegion>Muenster</wicri:noRegion>
<wicri:noRegion>Muenster</wicri:noRegion>
</affiliation>
</author>
<author>
<name sortKey="Ortmann, Magdalene" sort="Ortmann, Magdalene" uniqKey="Ortmann M" first="Magdalene" last="Ortmann">Magdalene Ortmann</name>
<affiliation wicri:level="1">
<nlm:aff id="aff002">
<addr-line>Institute for Biomagnetism and Biosignalanalysis, University of Muenster, Muenster, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Institute for Biomagnetism and Biosignalanalysis, University of Muenster, Muenster</wicri:regionArea>
<wicri:noRegion>Muenster</wicri:noRegion>
<wicri:noRegion>Muenster</wicri:noRegion>
<wicri:noRegion>Muenster</wicri:noRegion>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff004">
<addr-line>Jean-Uhrmacher-Institute for Clinical ENT-Research, University Hospital Cologne, Cologne, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Jean-Uhrmacher-Institute for Clinical ENT-Research, University Hospital Cologne, Cologne</wicri:regionArea>
<wicri:noRegion>Cologne</wicri:noRegion>
<wicri:noRegion>Cologne</wicri:noRegion>
<wicri:noRegion>Cologne</wicri:noRegion>
</affiliation>
</author>
<author>
<name sortKey="Deuster, Dirk" sort="Deuster, Dirk" uniqKey="Deuster D" first="Dirk" last="Deuster">Dirk Deuster</name>
<affiliation wicri:level="1">
<nlm:aff id="aff001">
<addr-line>Department of Phoniatrics and Pedaudiology, University Hospital Muenster, Muenster, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Department of Phoniatrics and Pedaudiology, University Hospital Muenster, Muenster</wicri:regionArea>
<wicri:noRegion>Muenster</wicri:noRegion>
<wicri:noRegion>Muenster</wicri:noRegion>
<wicri:noRegion>Muenster</wicri:noRegion>
</affiliation>
</author>
<author>
<name sortKey="Schmidt, Claus Michael" sort="Schmidt, Claus Michael" uniqKey="Schmidt C" first="Claus-Michael" last="Schmidt">Claus-Michael Schmidt</name>
<affiliation wicri:level="1">
<nlm:aff id="aff001">
<addr-line>Department of Phoniatrics and Pedaudiology, University Hospital Muenster, Muenster, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Department of Phoniatrics and Pedaudiology, University Hospital Muenster, Muenster</wicri:regionArea>
<wicri:noRegion>Muenster</wicri:noRegion>
<wicri:noRegion>Muenster</wicri:noRegion>
<wicri:noRegion>Muenster</wicri:noRegion>
</affiliation>
</author>
<author>
<name sortKey="Zehnhoff Dinnesen, Antoinetteam" sort="Zehnhoff Dinnesen, Antoinetteam" uniqKey="Zehnhoff Dinnesen A" first="Antoinetteam" last="Zehnhoff-Dinnesen">Antoinetteam Zehnhoff-Dinnesen</name>
<affiliation wicri:level="1">
<nlm:aff id="aff001">
<addr-line>Department of Phoniatrics and Pedaudiology, University Hospital Muenster, Muenster, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Department of Phoniatrics and Pedaudiology, University Hospital Muenster, Muenster</wicri:regionArea>
<wicri:noRegion>Muenster</wicri:noRegion>
<wicri:noRegion>Muenster</wicri:noRegion>
<wicri:noRegion>Muenster</wicri:noRegion>
</affiliation>
</author>
<author>
<name sortKey="Pantev, Christo" sort="Pantev, Christo" uniqKey="Pantev C" first="Christo" last="Pantev">Christo Pantev</name>
<affiliation wicri:level="1">
<nlm:aff id="aff002">
<addr-line>Institute for Biomagnetism and Biosignalanalysis, University of Muenster, Muenster, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Institute for Biomagnetism and Biosignalanalysis, University of Muenster, Muenster</wicri:regionArea>
<wicri:noRegion>Muenster</wicri:noRegion>
<wicri:noRegion>Muenster</wicri:noRegion>
<wicri:noRegion>Muenster</wicri:noRegion>
</affiliation>
</author>
<author>
<name sortKey="Dobel, Christian" sort="Dobel, Christian" uniqKey="Dobel C" first="Christian" last="Dobel">Christian Dobel</name>
<affiliation wicri:level="1">
<nlm:aff id="aff002">
<addr-line>Institute for Biomagnetism and Biosignalanalysis, University of Muenster, Muenster, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Institute for Biomagnetism and Biosignalanalysis, University of Muenster, Muenster</wicri:regionArea>
<wicri:noRegion>Muenster</wicri:noRegion>
<wicri:noRegion>Muenster</wicri:noRegion>
<wicri:noRegion>Muenster</wicri:noRegion>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff005">
<addr-line>Department of Otorhinolaryngology, Friedrich-Schiller University Jena, Jena, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Department of Otorhinolaryngology, Friedrich-Schiller University Jena, Jena</wicri:regionArea>
<wicri:noRegion>Jena</wicri:noRegion>
<wicri:noRegion>Jena</wicri:noRegion>
<wicri:noRegion>Jena</wicri:noRegion>
</affiliation>
</author>
</analytic>
<series>
<title level="j">PLoS ONE</title>
<idno type="e-ISSN">1932-6203</idno>
<imprint>
<date when="2016">2016</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>The cortical correlates of speech and music perception largely overlap, and the specific effects of different types of training on these networks remain unknown. We compared two groups of vocally trained professionals for music and speech, singers and actors, using recited and sung rhyme sequences from German art songs with semantic and/or prosodic/melodic violations (i.e., pitch violations) of the last word, in order to measure the evoked activation in a magnetoencephalographic (MEG) experiment. The MEG data confirmed the existence of intertwined networks for the sung and spoken modalities in an early time window after the word violation. For this early response, higher activity was measured after melodic/prosodic than after semantic violations in predominantly right temporal areas. For singers as well as actors, modality-specific effects were evident as predominantly left-lateralized temporal activity after semantic expectancy violations in the spoken modality, and right-dominant temporal activity in response to melodic violations in the sung modality. As an indication of a group-specific audiation process, higher neuronal activity for singers appeared in a late time window in right temporal and left parietal areas, after both the recited and the sung sequences.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Moreno, S" uniqKey="Moreno S">S Moreno</name>
</author>
<author>
<name sortKey="Marques, C" uniqKey="Marques C">C Marques</name>
</author>
<author>
<name sortKey="Santos, A" uniqKey="Santos A">A Santos</name>
</author>
<author>
<name sortKey="Santos, M" uniqKey="Santos M">M Santos</name>
</author>
<author>
<name sortKey="Castro, Sl" uniqKey="Castro S">SL Castro</name>
</author>
<author>
<name sortKey="Besson, M" uniqKey="Besson M">M Besson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Koelsch, S" uniqKey="Koelsch S">S Koelsch</name>
</author>
<author>
<name sortKey="Kasper, E" uniqKey="Kasper E">E Kasper</name>
</author>
<author>
<name sortKey="Sammler, D" uniqKey="Sammler D">D Sammler</name>
</author>
<author>
<name sortKey="Schulze, K" uniqKey="Schulze K">K Schulze</name>
</author>
<author>
<name sortKey="Gunter, T" uniqKey="Gunter T">T Gunter</name>
</author>
<author>
<name sortKey="Friederici, Ad" uniqKey="Friederici A">AD Friederici</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I Peretz</name>
</author>
<author>
<name sortKey="Gosselin, N" uniqKey="Gosselin N">N Gosselin</name>
</author>
<author>
<name sortKey="Belin, P" uniqKey="Belin P">P Belin</name>
</author>
<author>
<name sortKey="Zatorre, Rj" uniqKey="Zatorre R">RJ Zatorre</name>
</author>
<author>
<name sortKey="Plailly, J" uniqKey="Plailly J">J Plailly</name>
</author>
<author>
<name sortKey="Tillmann, B" uniqKey="Tillmann B">B Tillmann</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schneider, S" uniqKey="Schneider S">S Schneider</name>
</author>
<author>
<name sortKey="Schonle, Pw" uniqKey="Schonle P">PW Schonle</name>
</author>
<author>
<name sortKey="Altenmuller, E" uniqKey="Altenmuller E">E Altenmuller</name>
</author>
<author>
<name sortKey="Munte, Tf" uniqKey="Munte T">TF Munte</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schon, D" uniqKey="Schon D">D Schon</name>
</author>
<author>
<name sortKey="Gordon, R" uniqKey="Gordon R">R Gordon</name>
</author>
<author>
<name sortKey="Campagne, A" uniqKey="Campagne A">A Campagne</name>
</author>
<author>
<name sortKey="Magne, C" uniqKey="Magne C">C Magne</name>
</author>
<author>
<name sortKey="Astesano, C" uniqKey="Astesano C">C Astesano</name>
</author>
<author>
<name sortKey="Anton, Jl" uniqKey="Anton J">JL Anton</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Koelsch, S" uniqKey="Koelsch S">S Koelsch</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zatorre, Rj" uniqKey="Zatorre R">RJ Zatorre</name>
</author>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I Peretz</name>
</author>
<author>
<name sortKey="Penhune, V" uniqKey="Penhune V">V Penhune</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kutas, M" uniqKey="Kutas M">M Kutas</name>
</author>
<author>
<name sortKey="Hillyard, Sa" uniqKey="Hillyard S">SA Hillyard</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lau, Ef" uniqKey="Lau E">EF Lau</name>
</author>
<author>
<name sortKey="Phillips, C" uniqKey="Phillips C">C Phillips</name>
</author>
<author>
<name sortKey="Poeppel, D" uniqKey="Poeppel D">D Poeppel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Petten, C" uniqKey="Van Petten C">C Van Petten</name>
</author>
<author>
<name sortKey="Luka, Bj" uniqKey="Luka B">BJ Luka</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Maess, B" uniqKey="Maess B">B Maess</name>
</author>
<author>
<name sortKey="Herrmann, Cs" uniqKey="Herrmann C">CS Herrmann</name>
</author>
<author>
<name sortKey="Hahne, A" uniqKey="Hahne A">A Hahne</name>
</author>
<author>
<name sortKey="Nakamura, A" uniqKey="Nakamura A">A Nakamura</name>
</author>
<author>
<name sortKey="Friederici, Ad" uniqKey="Friederici A">AD Friederici</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dobel, C" uniqKey="Dobel C">C Dobel</name>
</author>
<author>
<name sortKey="Junghofer, M" uniqKey="Junghofer M">M Junghofer</name>
</author>
<author>
<name sortKey="Breitenstein, C" uniqKey="Breitenstein C">C Breitenstein</name>
</author>
<author>
<name sortKey="Klauke, B" uniqKey="Klauke B">B Klauke</name>
</author>
<author>
<name sortKey="Knecht, S" uniqKey="Knecht S">S Knecht</name>
</author>
<author>
<name sortKey="Pantev, C" uniqKey="Pantev C">C Pantev</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hirschfeld, G" uniqKey="Hirschfeld G">G Hirschfeld</name>
</author>
<author>
<name sortKey="Zwitserlood, P" uniqKey="Zwitserlood P">P Zwitserlood</name>
</author>
<author>
<name sortKey="Dobel, C" uniqKey="Dobel C">C Dobel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Geukes, S" uniqKey="Geukes S">S Geukes</name>
</author>
<author>
<name sortKey="Huster, Rj" uniqKey="Huster R">RJ Huster</name>
</author>
<author>
<name sortKey="Wollbrink, A" uniqKey="Wollbrink A">A Wollbrink</name>
</author>
<author>
<name sortKey="Junghofer, M" uniqKey="Junghofer M">M Junghofer</name>
</author>
<author>
<name sortKey="Zwitserlood, P" uniqKey="Zwitserlood P">P Zwitserlood</name>
</author>
<author>
<name sortKey="Dobel, C" uniqKey="Dobel C">C Dobel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Friederici, Ad" uniqKey="Friederici A">AD Friederici</name>
</author>
<author>
<name sortKey="Kotz, Sa" uniqKey="Kotz S">SA Kotz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wolters, Ch" uniqKey="Wolters C">CH Wolters</name>
</author>
<author>
<name sortKey="Anwander, A" uniqKey="Anwander A">A Anwander</name>
</author>
<author>
<name sortKey="Maess, B" uniqKey="Maess B">B Maess</name>
</author>
<author>
<name sortKey="Macleod, Rs" uniqKey="Macleod R">RS Macleod</name>
</author>
<author>
<name sortKey="Friederici, Ad" uniqKey="Friederici A">AD Friederici</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Koelsch, S" uniqKey="Koelsch S">S Koelsch</name>
</author>
<author>
<name sortKey="Gunter, T" uniqKey="Gunter T">T Gunter</name>
</author>
<author>
<name sortKey="Friederici, Ad" uniqKey="Friederici A">AD Friederici</name>
</author>
<author>
<name sortKey="Schroger, E" uniqKey="Schroger E">E Schroger</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Steinbeis, N" uniqKey="Steinbeis N">N Steinbeis</name>
</author>
<author>
<name sortKey="Koelsch, S" uniqKey="Koelsch S">S Koelsch</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Maess, B" uniqKey="Maess B">B Maess</name>
</author>
<author>
<name sortKey="Koelsch, S" uniqKey="Koelsch S">S Koelsch</name>
</author>
<author>
<name sortKey="Gunter, Tc" uniqKey="Gunter T">TC Gunter</name>
</author>
<author>
<name sortKey="Friederici, Ad" uniqKey="Friederici A">AD Friederici</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Koelsch, S" uniqKey="Koelsch S">S Koelsch</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Koelsch, S" uniqKey="Koelsch S">S Koelsch</name>
</author>
<author>
<name sortKey="Maess, B" uniqKey="Maess B">B Maess</name>
</author>
<author>
<name sortKey="Gunter, Tc" uniqKey="Gunter T">TC Gunter</name>
</author>
<author>
<name sortKey="Friederici, Ad" uniqKey="Friederici A">AD Friederici</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hyde, Kl" uniqKey="Hyde K">KL Hyde</name>
</author>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I Peretz</name>
</author>
<author>
<name sortKey="Zatorre, Rj" uniqKey="Zatorre R">RJ Zatorre</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Warrier, Cm" uniqKey="Warrier C">CM Warrier</name>
</author>
<author>
<name sortKey="Zatorre, Rj" uniqKey="Zatorre R">RJ Zatorre</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Patel, Ad" uniqKey="Patel A">AD Patel</name>
</author>
<author>
<name sortKey="Gibson, E" uniqKey="Gibson E">E Gibson</name>
</author>
<author>
<name sortKey="Ratner, J" uniqKey="Ratner J">J Ratner</name>
</author>
<author>
<name sortKey="Besson, M" uniqKey="Besson M">M Besson</name>
</author>
<author>
<name sortKey="Holcomb, Pj" uniqKey="Holcomb P">PJ Holcomb</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Besson, M" uniqKey="Besson M">M Besson</name>
</author>
<author>
<name sortKey="Macar, F" uniqKey="Macar F">F Macar</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Besson, M" uniqKey="Besson M">M Besson</name>
</author>
<author>
<name sortKey="Faita, F" uniqKey="Faita F">F Faita</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schon, D" uniqKey="Schon D">D Schon</name>
</author>
<author>
<name sortKey="Magne, C" uniqKey="Magne C">C Magne</name>
</author>
<author>
<name sortKey="Besson, M" uniqKey="Besson M">M Besson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bonnel, Am" uniqKey="Bonnel A">AM Bonnel</name>
</author>
<author>
<name sortKey="Faita, F" uniqKey="Faita F">F Faita</name>
</author>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I Peretz</name>
</author>
<author>
<name sortKey="Besson, M" uniqKey="Besson M">M Besson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Besson, M" uniqKey="Besson M">M Besson</name>
</author>
<author>
<name sortKey="Schon, D" uniqKey="Schon D">D Schon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kolinsky, R" uniqKey="Kolinsky R">R Kolinsky</name>
</author>
<author>
<name sortKey="Lidji, P" uniqKey="Lidji P">P Lidji</name>
</author>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I Peretz</name>
</author>
<author>
<name sortKey="Besson, M" uniqKey="Besson M">M Besson</name>
</author>
<author>
<name sortKey="Morais, J" uniqKey="Morais J">J Morais</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schon, D" uniqKey="Schon D">D Schon</name>
</author>
<author>
<name sortKey="Boyer, M" uniqKey="Boyer M">M Boyer</name>
</author>
<author>
<name sortKey="Moreno, S" uniqKey="Moreno S">S Moreno</name>
</author>
<author>
<name sortKey="Besson, M" uniqKey="Besson M">M Besson</name>
</author>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I Peretz</name>
</author>
<author>
<name sortKey="Kolinsky, R" uniqKey="Kolinsky R">R Kolinsky</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kleber, B" uniqKey="Kleber B">B Kleber</name>
</author>
<author>
<name sortKey="Veit, R" uniqKey="Veit R">R Veit</name>
</author>
<author>
<name sortKey="Birbaumer, N" uniqKey="Birbaumer N">N Birbaumer</name>
</author>
<author>
<name sortKey="Gruzelier, J" uniqKey="Gruzelier J">J Gruzelier</name>
</author>
<author>
<name sortKey="Lotze, M" uniqKey="Lotze M">M Lotze</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zarate, Jm" uniqKey="Zarate J">JM Zarate</name>
</author>
<author>
<name sortKey="Zatorre, Rj" uniqKey="Zatorre R">RJ Zatorre</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dick, F" uniqKey="Dick F">F Dick</name>
</author>
<author>
<name sortKey="Lee, Hl" uniqKey="Lee H">HL Lee</name>
</author>
<author>
<name sortKey="Nusbaum, H" uniqKey="Nusbaum H">H Nusbaum</name>
</author>
<author>
<name sortKey="Price, Cj" uniqKey="Price C">CJ Price</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Herholz, Sc" uniqKey="Herholz S">SC Herholz</name>
</author>
<author>
<name sortKey="Lappe, C" uniqKey="Lappe C">C Lappe</name>
</author>
<author>
<name sortKey="Knief, A" uniqKey="Knief A">A Knief</name>
</author>
<author>
<name sortKey="Pantev, C" uniqKey="Pantev C">C Pantev</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zatorre, Rj" uniqKey="Zatorre R">RJ Zatorre</name>
</author>
<author>
<name sortKey="Halpern, Ar" uniqKey="Halpern A">AR Halpern</name>
</author>
<author>
<name sortKey="Bouffard, M" uniqKey="Bouffard M">M Bouffard</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hubbard, Tl" uniqKey="Hubbard T">TL Hubbard</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gordon, Ee" uniqKey="Gordon E">EE Gordon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brodsky, W" uniqKey="Brodsky W">W Brodsky</name>
</author>
<author>
<name sortKey="Rubinstein, B" uniqKey="Rubinstein B">B Rubinstein</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="H M L Inen, Ms" uniqKey="H M L Inen M">MS Hämäläinen</name>
</author>
<author>
<name sortKey="Ilmoniemi, Rj" uniqKey="Ilmoniemi R">RJ Ilmoniemi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schubert, F" uniqKey="Schubert F">F Schubert</name>
</author>
<author>
<name sortKey="Mueller, W" uniqKey="Mueller W">W Mueller</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schubert, F" uniqKey="Schubert F">F Schubert</name>
</author>
<author>
<name sortKey="Mueller, W" uniqKey="Mueller W">W Mueller</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Peyk, P" uniqKey="Peyk P">P Peyk</name>
</author>
<author>
<name sortKey="De Cesarei, A" uniqKey="De Cesarei A">A De Cesarei</name>
</author>
<author>
<name sortKey="Junghofer, M" uniqKey="Junghofer M">M Junghofer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brockelmann, Ak" uniqKey="Brockelmann A">AK Brockelmann</name>
</author>
<author>
<name sortKey="Steinberg, C" uniqKey="Steinberg C">C Steinberg</name>
</author>
<author>
<name sortKey="Elling, L" uniqKey="Elling L">L Elling</name>
</author>
<author>
<name sortKey="Zwanzger, P" uniqKey="Zwanzger P">P Zwanzger</name>
</author>
<author>
<name sortKey="Pantev, C" uniqKey="Pantev C">C Pantev</name>
</author>
<author>
<name sortKey="Junghofer, M" uniqKey="Junghofer M">M Junghofer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schon, D" uniqKey="Schon D">D Schon</name>
</author>
<author>
<name sortKey="Gordon, Rl" uniqKey="Gordon R">RL Gordon</name>
</author>
<author>
<name sortKey="Besson, M" uniqKey="Besson M">M Besson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kreitewolf, J" uniqKey="Kreitewolf J">J Kreitewolf</name>
</author>
<author>
<name sortKey="Friederici, Ad" uniqKey="Friederici A">AD Friederici</name>
</author>
<author>
<name sortKey="Von Kriegstein, K" uniqKey="Von Kriegstein K">K von Kriegstein</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Samson, S" uniqKey="Samson S">S Samson</name>
</author>
<author>
<name sortKey="Zatorre, Rj" uniqKey="Zatorre R">RJ Zatorre</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zatorre, Rj" uniqKey="Zatorre R">RJ Zatorre</name>
</author>
<author>
<name sortKey="Belin, P" uniqKey="Belin P">P Belin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zatorre, Rj" uniqKey="Zatorre R">RJ Zatorre</name>
</author>
<author>
<name sortKey="Belin, P" uniqKey="Belin P">P Belin</name>
</author>
<author>
<name sortKey="Penhune, Vb" uniqKey="Penhune V">VB Penhune</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hagoort, P" uniqKey="Hagoort P">P Hagoort</name>
</author>
<author>
<name sortKey="Brown, Cm" uniqKey="Brown C">CM Brown</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Keuper, K" uniqKey="Keuper K">K Keuper</name>
</author>
<author>
<name sortKey="Zwanzger, P" uniqKey="Zwanzger P">P Zwanzger</name>
</author>
<author>
<name sortKey="Nordt, M" uniqKey="Nordt M">M Nordt</name>
</author>
<author>
<name sortKey="Eden, A" uniqKey="Eden A">A Eden</name>
</author>
<author>
<name sortKey="Laeger, I" uniqKey="Laeger I">I Laeger</name>
</author>
<author>
<name sortKey="Zwitserlood, P" uniqKey="Zwitserlood P">P Zwitserlood</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sammler, D" uniqKey="Sammler D">D Sammler</name>
</author>
<author>
<name sortKey="Koelsch, S" uniqKey="Koelsch S">S Koelsch</name>
</author>
<author>
<name sortKey="Ball, T" uniqKey="Ball T">T Ball</name>
</author>
<author>
<name sortKey="Brandt, A" uniqKey="Brandt A">A Brandt</name>
</author>
<author>
<name sortKey="Elger, Ce" uniqKey="Elger C">CE Elger</name>
</author>
<author>
<name sortKey="Friederici, Ad" uniqKey="Friederici A">AD Friederici</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Warren, Jd" uniqKey="Warren J">JD Warren</name>
</author>
<author>
<name sortKey="Scott, Sk" uniqKey="Scott S">SK Scott</name>
</author>
<author>
<name sortKey="Price, Cj" uniqKey="Price C">CJ Price</name>
</author>
<author>
<name sortKey="Griffiths, Td" uniqKey="Griffiths T">TD Griffiths</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sammler, D" uniqKey="Sammler D">D Sammler</name>
</author>
<author>
<name sortKey="Koelsch, S" uniqKey="Koelsch S">S Koelsch</name>
</author>
<author>
<name sortKey="Ball, T" uniqKey="Ball T">T Ball</name>
</author>
<author>
<name sortKey="Brandt, A" uniqKey="Brandt A">A Brandt</name>
</author>
<author>
<name sortKey="Grigutsch, M" uniqKey="Grigutsch M">M Grigutsch</name>
</author>
<author>
<name sortKey="Huppertz, Hj" uniqKey="Huppertz H">HJ Huppertz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Herholz, Sc" uniqKey="Herholz S">SC Herholz</name>
</author>
<author>
<name sortKey="Halpern, Ar" uniqKey="Halpern A">AR Halpern</name>
</author>
<author>
<name sortKey="Zatorre, Rj" uniqKey="Zatorre R">RJ Zatorre</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schulze, K" uniqKey="Schulze K">K Schulze</name>
</author>
<author>
<name sortKey="Zysset, S" uniqKey="Zysset S">S Zysset</name>
</author>
<author>
<name sortKey="Mueller, K" uniqKey="Mueller K">K Mueller</name>
</author>
<author>
<name sortKey="Friederici, Ad" uniqKey="Friederici A">AD Friederici</name>
</author>
<author>
<name sortKey="Koelsch, S" uniqKey="Koelsch S">S Koelsch</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gaab, N" uniqKey="Gaab N">N Gaab</name>
</author>
<author>
<name sortKey="Schlaug, G" uniqKey="Schlaug G">G Schlaug</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jancke, L" uniqKey="Jancke L">L Jancke</name>
</author>
<author>
<name sortKey="Kleinschmidt, A" uniqKey="Kleinschmidt A">A Kleinschmidt</name>
</author>
<author>
<name sortKey="Mirzazade, S" uniqKey="Mirzazade S">S Mirzazade</name>
</author>
<author>
<name sortKey="Shah, Nj" uniqKey="Shah N">NJ Shah</name>
</author>
<author>
<name sortKey="Freund, Hj" uniqKey="Freund H">HJ Freund</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Guenther, Fh" uniqKey="Guenther F">FH Guenther</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kleber, B" uniqKey="Kleber B">B Kleber</name>
</author>
<author>
<name sortKey="Birbaumer, N" uniqKey="Birbaumer N">N Birbaumer</name>
</author>
<author>
<name sortKey="Veit, R" uniqKey="Veit R">R Veit</name>
</author>
<author>
<name sortKey="Trevorrow, T" uniqKey="Trevorrow T">T Trevorrow</name>
</author>
<author>
<name sortKey="Lotze, M" uniqKey="Lotze M">M Lotze</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gordon, Rl" uniqKey="Gordon R">RL Gordon</name>
</author>
<author>
<name sortKey="Schon, D" uniqKey="Schon D">D Schon</name>
</author>
<author>
<name sortKey="Magne, C" uniqKey="Magne C">C Magne</name>
</author>
<author>
<name sortKey="Astesano, C" uniqKey="Astesano C">C Astesano</name>
</author>
<author>
<name sortKey="Besson, M" uniqKey="Besson M">M Besson</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">PLoS One</journal-id>
<journal-id journal-id-type="iso-abbrev">PLoS ONE</journal-id>
<journal-id journal-id-type="publisher-id">plos</journal-id>
<journal-id journal-id-type="pmc">plosone</journal-id>
<journal-title-group>
<journal-title>PLoS ONE</journal-title>
</journal-title-group>
<issn pub-type="epub">1932-6203</issn>
<publisher>
<publisher-name>Public Library of Science</publisher-name>
<publisher-loc>San Francisco, CA USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">26863437</article-id>
<article-id pub-id-type="pmc">4749173</article-id>
<article-id pub-id-type="doi">10.1371/journal.pone.0147986</article-id>
<article-id pub-id-type="publisher-id">PONE-D-15-18694</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Research Article</subject>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Physical Sciences</subject>
<subj-group>
<subject>Physics</subject>
<subj-group>
<subject>Acoustics</subject>
<subj-group>
<subject>Bioacoustics</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Biology and Life Sciences</subject>
<subj-group>
<subject>Bioacoustics</subject>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Social Sciences</subject>
<subj-group>
<subject>Linguistics</subject>
<subj-group>
<subject>Speech</subject>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Biology and Life Sciences</subject>
<subj-group>
<subject>Neuroscience</subject>
<subj-group>
<subject>Cognitive Science</subject>
<subj-group>
<subject>Cognitive Psychology</subject>
<subj-group>
<subject>Music Cognition</subject>
<subj-group>
<subject>Music Perception</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Biology and Life Sciences</subject>
<subj-group>
<subject>Psychology</subject>
<subj-group>
<subject>Cognitive Psychology</subject>
<subj-group>
<subject>Music Cognition</subject>
<subj-group>
<subject>Music Perception</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Social Sciences</subject>
<subj-group>
<subject>Psychology</subject>
<subj-group>
<subject>Cognitive Psychology</subject>
<subj-group>
<subject>Music Cognition</subject>
<subj-group>
<subject>Music Perception</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Biology and Life Sciences</subject>
<subj-group>
<subject>Neuroscience</subject>
<subj-group>
<subject>Sensory Perception</subject>
<subj-group>
<subject>Music Perception</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Biology and Life Sciences</subject>
<subj-group>
<subject>Psychology</subject>
<subj-group>
<subject>Sensory Perception</subject>
<subj-group>
<subject>Music Perception</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Social Sciences</subject>
<subj-group>
<subject>Psychology</subject>
<subj-group>
<subject>Sensory Perception</subject>
<subj-group>
<subject>Music Perception</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Social Sciences</subject>
<subj-group>
<subject>Linguistics</subject>
<subj-group>
<subject>Phonology</subject>
<subj-group>
<subject>Syntax</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Biology and Life Sciences</subject>
<subj-group>
<subject>Anatomy</subject>
<subj-group>
<subject>Brain</subject>
<subj-group>
<subject>Cerebral Hemispheres</subject>
<subj-group>
<subject>Right Hemisphere</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Medicine and Health Sciences</subject>
<subj-group>
<subject>Anatomy</subject>
<subj-group>
<subject>Brain</subject>
<subj-group>
<subject>Cerebral Hemispheres</subject>
<subj-group>
<subject>Right Hemisphere</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Biology and Life Sciences</subject>
<subj-group>
<subject>Neuroscience</subject>
<subj-group>
<subject>Brain Mapping</subject>
<subj-group>
<subject>Magnetoencephalography</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Research and Analysis Methods</subject>
<subj-group>
<subject>Imaging Techniques</subject>
<subj-group>
<subject>Neuroimaging</subject>
<subj-group>
<subject>Magnetoencephalography</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Biology and Life Sciences</subject>
<subj-group>
<subject>Neuroscience</subject>
<subj-group>
<subject>Neuroimaging</subject>
<subj-group>
<subject>Magnetoencephalography</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Social Sciences</subject>
<subj-group>
<subject>Linguistics</subject>
<subj-group>
<subject>Neurolinguistics</subject>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Biology and Life Sciences</subject>
<subj-group>
<subject>Neuroscience</subject>
<subj-group>
<subject>Neurolinguistics</subject>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Biology and Life Sciences</subject>
<subj-group>
<subject>Anatomy</subject>
<subj-group>
<subject>Brain</subject>
<subj-group>
<subject>Cerebral Hemispheres</subject>
<subj-group>
<subject>Left Hemisphere</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Medicine and Health Sciences</subject>
<subj-group>
<subject>Anatomy</subject>
<subj-group>
<subject>Brain</subject>
<subj-group>
<subject>Cerebral Hemispheres</subject>
<subj-group>
<subject>Left Hemisphere</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Song Perception by Professional Singers and Actors: An MEG Study</article-title>
<alt-title alt-title-type="running-head">Song Perception by Professional Singers and Actors: An MEG Study</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Rosslau</surname>
<given-names>Ken</given-names>
</name>
<xref ref-type="aff" rid="aff001">
<sup>1</sup>
</xref>
<xref ref-type="corresp" rid="cor001">*</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Herholz</surname>
<given-names>Sibylle C.</given-names>
</name>
<xref ref-type="aff" rid="aff002">
<sup>2</sup>
</xref>
<xref ref-type="aff" rid="aff003">
<sup>3</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Knief</surname>
<given-names>Arne</given-names>
</name>
<xref ref-type="aff" rid="aff001">
<sup>1</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Ortmann</surname>
<given-names>Magdalene</given-names>
</name>
<xref ref-type="aff" rid="aff002">
<sup>2</sup>
</xref>
<xref ref-type="aff" rid="aff004">
<sup>4</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Deuster</surname>
<given-names>Dirk</given-names>
</name>
<xref ref-type="aff" rid="aff001">
<sup>1</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Schmidt</surname>
<given-names>Claus-Michael</given-names>
</name>
<xref ref-type="aff" rid="aff001">
<sup>1</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Zehnhoff-Dinnesen</surname>
<given-names>Antoinette am</given-names>
</name>
<xref ref-type="aff" rid="aff001">
<sup>1</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Pantev</surname>
<given-names>Christo</given-names>
</name>
<xref ref-type="aff" rid="aff002">
<sup>2</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Dobel</surname>
<given-names>Christian</given-names>
</name>
<xref ref-type="aff" rid="aff002">
<sup>2</sup>
</xref>
<xref ref-type="aff" rid="aff005">
<sup>5</sup>
</xref>
</contrib>
</contrib-group>
<aff id="aff001">
<label>1</label>
<addr-line>Department of Phoniatrics and Pedaudiology, University Hospital Muenster, Muenster, Germany</addr-line>
</aff>
<aff id="aff002">
<label>2</label>
<addr-line>Institute for Biomagnetism and Biosignalanalysis, University of Muenster, Muenster, Germany</addr-line>
</aff>
<aff id="aff003">
<label>3</label>
<addr-line>German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany</addr-line>
</aff>
<aff id="aff004">
<label>4</label>
<addr-line>Jean-Uhrmacher-Institute for Clinical ENT-Research, University Hospital Cologne, Cologne, Germany</addr-line>
</aff>
<aff id="aff005">
<label>5</label>
<addr-line>Department of Otorhinolaryngology, Friedrich-Schiller University Jena, Jena, Germany</addr-line>
</aff>
<contrib-group>
<contrib contrib-type="editor">
<name>
<surname>Snyder</surname>
<given-names>Joel</given-names>
</name>
<role>Editor</role>
<xref ref-type="aff" rid="edit1"></xref>
</contrib>
</contrib-group>
<aff id="edit1">
<addr-line>UNLV, UNITED STATES</addr-line>
</aff>
<author-notes>
<fn fn-type="conflict" id="coi001">
<p>
<bold>Competing Interests: </bold>
The authors have declared that no competing interests exist.</p>
</fn>
<fn fn-type="con" id="contrib001">
<p>Conceived and designed the experiments: KR SCH CD CMS CP AAZ DD. Performed the experiments: KR SCH. Analyzed the data: KR AK MO CD. Contributed reagents/materials/analysis tools: CP CD. Wrote the paper: KR AAZ CD CMS DD SCH.</p>
</fn>
<corresp id="cor001">* E-mail:
<email>ken.rosslau@uni-muenster.de</email>
</corresp>
</author-notes>
<pub-date pub-type="epub">
<day>10</day>
<month>2</month>
<year>2016</year>
</pub-date>
<pub-date pub-type="collection">
<year>2016</year>
</pub-date>
<volume>11</volume>
<issue>2</issue>
<elocation-id>e0147986</elocation-id>
<history>
<date date-type="received">
<day>29</day>
<month>4</month>
<year>2015</year>
</date>
<date date-type="accepted">
<day>11</day>
<month>1</month>
<year>2016</year>
</date>
</history>
<permissions>
<copyright-statement>© 2016 Rosslau et al</copyright-statement>
<copyright-year>2016</copyright-year>
<copyright-holder>Rosslau et al</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<license-p>This is an open access article distributed under the terms of the
<ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution License</ext-link>
, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.</license-p>
</license>
</permissions>
<self-uri content-type="pdf" xlink:type="simple" xlink:href="pone.0147986.pdf"></self-uri>
<abstract>
<p>The cortical correlates of speech and music perception are essentially overlapping, and the specific effects of different types of training on these networks remain unknown. We compared two groups of vocally trained professionals for music and speech, singers and actors, using recited and sung rhyme sequences from German art songs with semantic and/or prosodic/melodic violations (i.e. violations of pitch) of the last word, in order to measure the evoked activation in a magnetoencephalographic (MEG) experiment. MEG data confirmed the existence of intertwined networks for the sung and spoken modality in an early time window after word violation. For this early response, higher activity was measured after melodic/prosodic than after semantic violations in predominantly right temporal areas. For singers as well as for actors, modality-specific effects were evident in predominantly left-lateralized temporal activity after semantic expectancy violations in the spoken modality, and right-dominant temporal activity in response to melodic violations in the sung modality. As an indication of a special group-dependent audiation process, higher neuronal activity for singers appeared in a late time window in right temporal and left parietal areas, after both the recited and the sung sequences.</p>
</abstract>
<funding-group>
<funding-statement>Ken Rosslau is supported by the Deanery of the Medical Faculty of the Westfälische-Wilhelms-University of Muenster; Deutsche Forschungsgemeinschaft and the Open Access Publication Fund of the University of Muenster. Christian Dobel received support from Deutsche Forschungsgemeinschaft DO 711/7-1. Sibylle C. Herholz received support from Deutsche Forschungsgemeinschaft HE6067/3-1. Christo Pantev received support from Deutsche Forschungsgemeinschaft PA 392/12-2. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.</funding-statement>
</funding-group>
<counts>
<fig-count count="5"></fig-count>
<table-count count="0"></table-count>
<page-count count="18"></page-count>
</counts>
<custom-meta-group>
<custom-meta id="data-availability">
<meta-name>Data Availability</meta-name>
<meta-value>All relevant data are within the paper. We cannot distribute the lyrics and notes of the stimulus material, because it is still subject to German copyright law. The stimulus lines came from the German Lied cycles "Die schöne Müllerin", "Winterreise" and "Schwanengesang" by Franz Schubert. If requested, we will provide additional information on the stimuli, such as line numbers, by contacting the corresponding author.</meta-value>
</custom-meta>
</custom-meta-group>
</article-meta>
<notes>
<title>Data Availability</title>
<p>All relevant data are within the paper. We cannot distribute the lyrics and notes of the stimulus material, because it is still subject to German copyright law. The stimulus lines came from the German Lied cycles "Die schöne Müllerin", "Winterreise" and "Schwanengesang" by Franz Schubert. If requested, we will provide additional information on the stimuli, such as line numbers, by contacting the corresponding author.</p>
</notes>
</front>
<body>
<sec sec-type="intro" id="sec001">
<title>Introduction</title>
<p>Recent research has increased our knowledge about the organization of neuronal networks for speech and music perception and suggests the presence of training-induced and interdependent modulation of musical and speech abilities [
<xref rid="pone.0147986.ref001" ref-type="bibr">1</xref>
]. This research is based on many studies of brain morphology, training effects, and receptive/expressive functions of music and speech processing that compared instrumental musicians to novices [
<xref rid="pone.0147986.ref002" ref-type="bibr">2</xref>
<xref rid="pone.0147986.ref005" ref-type="bibr">5</xref>
]. In contrast, there is little knowledge about training- or profession-specific cortical processing of speech and music, for example in professional voice experts such as actors in comparison to opera singers. Both of these groups have comparable levels of voice training and practice on stage, but with different emphasis on the specific type of voice training. Therefore, in order to investigate training-related effects on brain function, it is much more informative to compare behavioral and neurophysiological data between these fields of expertise than to relate either group to untrained novices. We regard singers and actors as a unique group of artists for comparative purposes, because both need to work on artistic expression through their voices as well as their bodies, and with high demands on self-perception. Furthermore, the comparison of these groups, using both a spoken and a sung stimulus modality, addresses with very high specificity the questions of modality dependence, group dependence, and any interaction in processing linguistic and musical content. Based on the similar semantic and syntactic rule systems in language and in music, a complex and intertwined cerebral network for language and music processing is assumed [
<xref rid="pone.0147986.ref003" ref-type="bibr">3</xref>
,
<xref rid="pone.0147986.ref005" ref-type="bibr">5</xref>
<xref rid="pone.0147986.ref007" ref-type="bibr">7</xref>
]. Nevertheless, there is to date no study comparing two groups of experts who developed their expertise with very similar amounts and types of training with the same stimulus material, once in a sung and once in a recited modality.</p>
<sec id="sec002">
<title>An intertwined network for processing music and language</title>
<p>Previous research transferred experimental approaches, originally established to investigate different levels of language processing, into the field of music processing. Most notable were designs employing semantic and syntactic expectancy violations. Semantic expectancy violations in language result in an N400 component generated mainly in the left superior temporal lobe, as evidenced by electro- and magnetoencephalographic (EEG and MEG) studies [
<xref rid="pone.0147986.ref008" ref-type="bibr">8</xref>
<xref rid="pone.0147986.ref014" ref-type="bibr">14</xref>
]. Syntactic violations in spoken sentences are reflected in an early negative electrophysiological component (ELAN; early left anterior negativity) and/or a late positive centro-parietal component (P600) over left anterior temporal and left inferior frontal regions [
<xref rid="pone.0147986.ref015" ref-type="bibr">15</xref>
]. Early left anterior magnetic fields after syntactic violations were also detected by MEG [
<xref rid="pone.0147986.ref016" ref-type="bibr">16</xref>
], while there is to date no clear MEG correlate of the electrophysiological P600 component. Similar to these findings, semantic and syntactic expectancy violations in musical material elicit negative electrophysiological components in right anterior frontal and superior temporal regions that are homologous to the above-mentioned left-lateralized speech-related correlates [
<xref rid="pone.0147986.ref017" ref-type="bibr">17</xref>
,
<xref rid="pone.0147986.ref018" ref-type="bibr">18</xref>
], i.e. the regions are highly similar, but with different hemispheric dominance. In a magnetoencephalographic study, high neuronal activity after musical syntactic violations was found in temporal regions of both hemispheres [
<xref rid="pone.0147986.ref019" ref-type="bibr">19</xref>
]. Typical “language” regions seemed to be less language-specific than previously thought [
<xref rid="pone.0147986.ref020" ref-type="bibr">20</xref>
]. Still, assuming a relative dominance of hemispheres in musical versus linguistic contexts, right temporal areas are reported to be mainly involved in processing and analyzing musical sequences [
<xref rid="pone.0147986.ref021" ref-type="bibr">21</xref>
<xref rid="pone.0147986.ref023" ref-type="bibr">23</xref>
]. In this vein, several studies investigated pitch violations in music and speech. An increment or decrement of fundamental frequency (final pitch) at the end of a spoken or sung line may represent a prosodic or melodic violation, respectively. Both can be interpreted as a violation of the syntactic rule system, and thus several studies focused on such prosodic/melodic differentiations. They found evidence for positive centro-parietal and temporal components peaking between 300 and 600 ms after stimulus onset, as described for syntactic violations [
<xref rid="pone.0147986.ref024" ref-type="bibr">24</xref>
,
<xref rid="pone.0147986.ref025" ref-type="bibr">25</xref>
]. The amplitude of these components depended on the strength of violation (weak or strong) and on the degree of musical education of the participant [
<xref rid="pone.0147986.ref026" ref-type="bibr">26</xref>
,
<xref rid="pone.0147986.ref027" ref-type="bibr">27</xref>
].</p>
<p>In order to test whether simultaneously presented linguistic parameters (represented by semantic violations) and musical parameters (represented by melodic pitch violations at the end of a sung melody line) are processed dependently or independently, a medium is required that combines these two aspects. Comparing musicians and laymen, Bonnel and coauthors [
<xref rid="pone.0147986.ref028" ref-type="bibr">28</xref>
] prepared excerpts from French operatic songs by manipulating the final word so that it was either semantically congruous (S+) or incongruous (S-) and/or by manipulating the final pitch of the melody line to be either in (P+) or out of key (P-). The simultaneous appearance of both an N400 and a P600 component in response to the combined prosodically and semantically violated condition (S-P-) suggested that semantic and syntactic aspects of language and music were processed by independent systems and did not compete for the same pool of mental resources in musicians and nonmusicians [
<xref rid="pone.0147986.ref028" ref-type="bibr">28</xref>
,
<xref rid="pone.0147986.ref029" ref-type="bibr">29</xref>
]. However, subsequent studies failed to find this division of labour and instead presented evidence for more intertwined neuronal networks in bilateral middle and superior temporal gyri, as well as inferior and middle frontal gyri, during combined musical and linguistic tasks [
<xref rid="pone.0147986.ref018" ref-type="bibr">18</xref>
,
<xref rid="pone.0147986.ref030" ref-type="bibr">30</xref>
,
<xref rid="pone.0147986.ref031" ref-type="bibr">31</xref>
]. Most of the above-mentioned studies compared the neurophysiological influence of linguistic content in separate sets of stimuli for language and music conditions, respectively. The advantage of using song lines performed by the human voice is that linguistic and musical information are merged into one ecologically valid acoustic signal. The separation into a recited and a sung version allows a comparison of a more linguistically based and a more musically based context with the same experimental material, which is a prerequisite for a study of highly professional artists. If there are interactions of semantic and syntactic processing in either the recited or the sung modality, professional opera singers, as highly trained musical voice users, and professional actors, as highly trained linguistic voice users, represent ideal subjects in which to search for neurophysiological correlates.</p>
</sec>
<sec id="sec003">
<title>Cognitive and neuronal characteristics for singers and actors</title>
<p>During singing, professional singers display increased activation of bilateral primary somatosensory cortex (where the cortical representations of the larynx are situated), the inferior parietal lobe and dorsolateral prefrontal cortex, and, at a subcortical level, increased activation in the basal ganglia, thalamus and cerebellum compared to nonmusicians. This is generally interpreted as evidence for training-induced cortical plasticity [
<xref rid="pone.0147986.ref032" ref-type="bibr">32</xref>
,
<xref rid="pone.0147986.ref033" ref-type="bibr">33</xref>
]. To the best of our knowledge, there is only one study investigating training-induced plastic effects as a result of acting training, identifying high activation during speech perception in bilateral premotor regions that are commonly activated by mouth movements [
<xref rid="pone.0147986.ref034" ref-type="bibr">34</xref>
].</p>
<p>Regarding a specialization of higher-order cognitive skills, several findings over the last few years point towards an enhanced quality of auditory imagery in musicians [
<xref rid="pone.0147986.ref035" ref-type="bibr">35</xref>
,
<xref rid="pone.0147986.ref036" ref-type="bibr">36</xref>
]. Musical imagery preserves many structural and temporal properties of auditory stimuli and can facilitate auditory discrimination by, for instance, the integration of semantically interpreted information and expectancies [
<xref rid="pone.0147986.ref037" ref-type="bibr">37</xref>
]. A special form of imagery, so-called “audiation”, is described as an internal analog of aural music perception [
<xref rid="pone.0147986.ref038" ref-type="bibr">38</xref>
] and interpreted as a mental representation of music by internally “hearing” a music sequence that has just been auditorily or visually presented. It represents an integration of auditory, visual and/ or motor imagery in the brain and results in a cross-modal encoding of a unisensory input [
<xref rid="pone.0147986.ref039" ref-type="bibr">39</xref>
]. In line with this description, audiation should be especially well developed in musicians. However, the neural correlates of audiation have not been investigated so far.</p>
</sec>
<sec id="sec004">
<title>Aim and approach of the current study</title>
<p>The aim of our study was to investigate music and speech perception in voice experts, professional singers and actors, in order to disentangle the training-induced cortical networks for processing music and speech. To measure brain activity, we used magnetoencephalography (MEG), owing to its high temporal resolution and its moderate to high accuracy in localizing the underlying sources of brain activity [
<xref rid="pone.0147986.ref040" ref-type="bibr">40</xref>
]. This is the first study comparing these groups using complex but ecologically valid stimulus material in recited and sung modalities. Although all native-speaking participants are by definition highly practiced in speaking their mother tongue, we considered it important to compare singers with actors in order to control for long-term professional voice training; this would not have been possible with participants lacking such experience. To provide stimulation at a high artistic level, we used rhyme sequences from German art songs by Franz Schubert. Importantly, the lyrical basis of these songs is similar in structure to material that actors recite in a dramatic performance. One characteristic of art songs is a close integration of music and lyrics, typically without singing several notes on one syllable, a frequent feature of operatic arias. Since the songs are based on poetry, it is feasible to present the material both in a spoken and in a sung condition, thus comparing modality-specific processing of semantic and syntactic aspects.</p>
<p>Given the nature of the semantic and melodic/ prosodic violations, and because both a sung and a spoken modality were used, we expected increased activity upon violations in temporal areas of both hemispheres. Additionally, we predicted higher sensitivity to melodic/ prosodic violations in singers and to semantic violations in actors. If singers indeed maintain longer-lasting representations of auditory stimuli after their offset (i.e. what was termed audiation above), we expected long-lasting activity in temporal regions, possibly with a right-hemispheric dominance owing to musical training.</p>
</sec>
</sec>
<sec sec-type="materials|methods" id="sec005">
<title>Material and Methods</title>
<sec id="sec006">
<title>Participants</title>
<p>Fifteen professional singers (mean age = 29.2 years; 8 female) and 15 professional actors (mean age = 32.4 years; 9 female) took part in the experiment. The singers and actors had passed a university final qualifying examination after at least 4 years of training. At the time of the study, they practiced singing or acting on stage or in rehearsal for a minimum of 4 hours a day. The actors had not received any additional musical education besides compulsory music classes in high school, and the singers had received articulation training for one year at the beginning of their university studies.</p>
<p>As an inclusion criterion all participants were familiar with the German art song cycles “Beautiful Miller Girl” and “Winter Journey” by the composer Franz Schubert, but had not practiced or performed them in auditions or on stage. All participants were right handed, free of neurological or psychiatric disorders, native speakers of German and had normal hearing thresholds as assessed by clinical audiometry. All gave written consent to participate in the study. The study protocol was approved by the local ethics committee of the Medical Faculty. The study was conducted according to the Declaration of Helsinki.</p>
</sec>
<sec id="sec007">
<title>Stimulus material</title>
<p>As indicated above, we used 30 short excerpts of songs from the romantic epoch (music by the German composer Franz Schubert, lyrics by Wilhelm Mueller) from the cycles “Beautiful Miller Girl” and “Winter Journey” for stimulation in the experiment [
<xref rid="pone.0147986.ref041" ref-type="bibr">41</xref>
,
<xref rid="pone.0147986.ref042" ref-type="bibr">42</xref>
]. The excerpts consisted of a rhyming couplet with a monosyllabic ending and the original melody line composed by Franz Schubert. For all excerpts, one version sung
<italic>a capella</italic>
(without accompaniment) and one recited, spoken version were recorded using a high-quality recording system and microphone (lingwaves software/ Wevosys 2010; microphone: 322 Datalogger, MK:216/ Voltcraft). The same professionally trained singer sang and recited all excerpts. The duration of sung phrases ranged from 4.5 to 6.8 seconds (mean 5.7 s), and the duration of recited phrases ranged from 4.2 to 5.4 seconds (mean 4.8 s). Likewise, the mean length of the recited last words (452 ± 26 ms) differed from the mean length of the sung last words (710 ± 44 ms). For each modality (sung and spoken), the 30 excerpts were presented in four different conditions, resulting in 120 stimuli per modality. In the first condition, the original line was presented in the correct sung/ recited version (S+P+, for correct
<bold>s</bold>
emantic and
<bold>p</bold>
itch information). In the second condition, the pitch of the last word was decreased or increased in the sung modality by a semitone, out of key but preserving the original melodic contour (melodic violation), and in the spoken modality by an increase of the fundamental frequency of 35% (prosodic violation), which violates the expected prosodic fall at the end of a declarative sentence (S+P-). Different magnitudes of fundamental-frequency deviation for music and speech were first proposed by Besson et al. [
<xref rid="pone.0147986.ref027" ref-type="bibr">27</xref>
]. The authors described a deviation of 1/5 tone in music and a 35% increase of the fundamental frequency (roughly an interval of a fourth) in speech as appropriate for a “weak” incongruity, because such a difference is much harder to recognize in speech than in the harmonic context of music. This is probably due to the strong harmonic rule system governing melody, compared to the merely perceptual regularities of speech prosody. After piloting our stimulus material with a group of healthy music students, we judged an interval of ½ tone in music versus a 35% increase of the fundamental frequency in speech to be more appropriate for our study. In the third condition, the original last word of the excerpt was replaced with a semantically incongruent word (S-P+). These semantically incongruent monosyllabic words fulfilled the original rhyme scheme. In the fourth condition, we presented a double incongruency at the end of the excerpt, with an incorrect pitch ending (syntactic/ prosodic violation) and a semantically incongruent last word (S-P-). All pitch manipulations in the sung and spoken modality were performed on the original, digitally stored sound files using the software PRAAT (Version 5.3.34) to ensure the correct pitch violation (
<xref ref-type="fig" rid="pone.0147986.g001">Fig 1</xref>
).</p>
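The two manipulation magnitudes can be sketched numerically. This is only an illustration of the frequency ratios involved; the actual resynthesis was performed in PRAAT on the recorded audio, and the function names below are ours:

```python
# Illustration of the two pitch-manipulation magnitudes described above.
# The actual manipulation was performed with PRAAT; these helpers only
# show the frequency ratios involved (function names are hypothetical).

SEMITONE = 2 ** (1 / 12)  # equal-tempered semitone ratio, ~1.0595

def shift_semitone(f0_hz, direction=1):
    """Melodic violation (sung): shift F0 by one semitone up (+1) or down (-1)."""
    return f0_hz * SEMITONE ** direction

def raise_f0(f0_hz, increase=0.35):
    """Prosodic violation (spoken): raise F0 by 35%, roughly a fourth."""
    return f0_hz * (1.0 + increase)
```

For example, a sung A4 at 440 Hz shifts to about 466 Hz, whereas a spoken fundamental of 220 Hz rises to 297 Hz; the spoken step is proportionally far larger because an out-of-key deviation is much easier to detect in a harmonic context.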
<fig id="pone.0147986.g001" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0147986.g001</object-id>
<label>Fig 1</label>
<caption>
<title>Example of a varied rhyme-couplet.</title>
<p>Example of a varied rhyme-couplet from the song cycle “Beautiful Miller Girl” by Franz Schubert, poems by Wilhelm Müller:
<italic>Meine Laute hab ich gehängt an die</italic>
<bold>
<italic>Wand</italic>
</bold>
,
<italic>hab sie umschlungen mit einem grünen</italic>
<bold>
<italic>Band</italic>
</bold>
(semantic correct) /
<bold>
<italic>Land</italic>
</bold>
<italic>(incorrect)</italic>
, (English translation by Emily Ezust: My lute I’ve hung upon the wall, I’ve tied it there with a green
<bold>band/ land</bold>
). Semantic variation of the last word and/ or prosodic/ melodic variation of the final pitch resulted in 4 different conditions (S+: correct semantic sense, S-: incorrect semantic sense, P+: correct fundamental frequency/ final pitch, P-: incorrect fundamental frequency/ final pitch) for both spoken and sung modalities.</p>
</caption>
<graphic xlink:href="pone.0147986.g001"></graphic>
</fig>
</sec>
<sec id="sec008">
<title>Procedure</title>
<p>Subjects were comfortably seated in a magnetically shielded room and their head position was stabilized in the MEG scanner using soft pads. All stimuli were presented binaurally at 60 dB above the individual hearing threshold of each ear, which was determined at the beginning of the experiment, with an accuracy of at least 5 dB, by attenuating one stimulus sentence to the individual detection level, separately for the sung and the spoken modality. Instructions, visual prompts and feedback were presented via back-projection on a screen in front of the subject that was adjusted in height to be comfortably visible.</p>
<p>Subjects worked through the experimental instructions and eight practice trials at their own pace. Stimuli used for the practice trials were not used again in the subsequent experiment. The 240 stimuli were presented in 4 experimental runs of 60 stimuli each, using the software Presentation (Neurobehavioral Systems Inc., Albany, CA, USA). Within each run, stimuli were presented in a pseudo-randomized order. The different versions of each excerpt were distributed equally across the four runs, with the constraints that two versions of the same excerpt did not occur in succession and that no more than 3 stimuli from the same condition were played consecutively.</p>
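The randomization constraints described above can be sketched as a simple rejection-sampling shuffle. This is a hypothetical reconstruction: the study used the Presentation software, and the exact randomization algorithm is not reported.

```python
import random

def valid(order):
    """Check the two constraints described above: no two versions of the
    same excerpt in succession, and no more than 3 stimuli of the same
    condition in a row. Stimuli are (excerpt_id, condition) pairs."""
    for prev, cur in zip(order, order[1:]):
        if prev[0] == cur[0]:                          # same excerpt twice
            return False
    for i in range(len(order) - 3):
        if len({s[1] for s in order[i:i + 4]}) == 1:   # 4 of one condition
            return False
    return True

def pseudo_randomize(stimuli, seed=0, max_tries=100_000):
    """Shuffle until a constraint-satisfying order is found."""
    rng = random.Random(seed)
    order = list(stimuli)
    for _ in range(max_tries):
        rng.shuffle(order)
        if valid(order):
            return order
    raise RuntimeError("no valid order found")
```

With 60 stimuli per run (15 excerpts in 4 conditions), a plain reshuffle-and-check loop of this kind finds a valid order after only a handful of attempts.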
<p>After the presentation of each stimulus, subjects had to judge the accuracy of the semantic congruence of the last word and the accuracy of the pitch of the last word, both for sung and for spoken stimuli. Subjects responded by means of successive button presses and were visually prompted to give their responses, with the prompt for the first judgment appearing 1500 ms after stimulus offset. They were instructed to respond within 2000 ms. The next prompt or trial was presented automatically after the subject’s response or after a time lapse of 5 seconds. The order of the prompts (semantic and pitch judgments) and the assignment of buttons to responses (correct and incorrect) were balanced across participants and remained the same for each subject throughout the experiment. Each run took around 15 minutes and the entire measurement process including instructions, practice trials and pauses between runs, lasted about 90 minutes.</p>
<p>After the measurements, subjects took part in a semi-structured interview to summarize how attention-demanding they had found the tasks to be. For evaluation purposes, the answers were classified into three categories: “low”, “moderate” and “high” level of attention.</p>
</sec>
<sec id="sec009">
<title>MEG recordings and data analysis</title>
<p>MEG signals were recorded continuously, using a whole-head device with 275 first-order axial SQUID gradiometers (Omega 275, CTF, VSM MedTech, Coquitlam, Canada), filtered online (150 Hz low-pass to prevent aliasing, 50 Hz notch to suppress power-line interference) and sampled at 600 Hz. The continuous data were then band-pass filtered offline in a 0.1–48 Hz range, using a zero-phase second-order Butterworth filter. The triggers for data analysis were set at the beginning of the last word of each stimulus. For each trial, epochs ranging from 200 ms before the acoustic trigger at word onset to 2000 ms after onset were extracted from the continuous data. Artifact rejection and pre-processing, with baseline correction using the first 100 ms of each epoch and rejection of sensor activity exceeding 3000 fT, were performed with EMEGS 2.3 [
<xref rid="pone.0147986.ref043" ref-type="bibr">43</xref>
] running under MATLAB 7 SP3 (The MathWorks, Natick, MA, USA). Epochs for each condition were averaged. Individual averages were standardized on the mean MEG sensor configuration across all participants and runs, and thus corrected for differing head positions of the participants within the MEG scanner. The amplitude and distribution of event-related magnetic fields depended on the individual head position within the sensor coordinate system, as well as on individual head geometry, especially head size. An estimation of the underlying neuronal generators, such as the L2 Minimum-Norm Estimate (L2-MNE; [
<xref rid="pone.0147986.ref040" ref-type="bibr">40</xref>
]), however, is independent of such individual factors and enables statistical tests across participant groups and conditions. The L2-MNE served as an inverse distributed-source modeling method for examining the cortical generators of the magnetic field activity without a priori assumptions about the location and/or number of current sources. The present analyses were based on an isotropic spherical head model with 197 dipolar sources evenly distributed on an inner spherical shell. The sphere position and radius were estimated in order to optimally fit the digitized head shape of each participant. Across all participants and conditions, a Tikhonov regularization parameter of k = 0.2 was applied.</p>
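The epoching and artifact-rejection steps described above can be sketched as follows, assuming a channels × samples array in tesla. The published analysis used EMEGS under MATLAB; the array shapes and helper name here are our own assumptions.

```python
import numpy as np

FS = 600                 # sampling rate (Hz)
PRE, POST = 0.2, 2.0     # epoch window: -200 ms to +2000 ms around word onset
BASE_S = 0.1             # baseline: first 100 ms of the epoch
REJECT_T = 3000e-15      # 3000 fT amplitude rejection threshold, in tesla

def extract_epochs(data, triggers):
    """Cut, baseline-correct and amplitude-reject epochs from continuous
    MEG data (data: channels x samples; triggers: sample indices of
    last-word onsets)."""
    n_pre, n_post, n_base = int(PRE * FS), int(POST * FS), int(BASE_S * FS)
    kept = []
    for t in triggers:
        ep = data[:, t - n_pre:t + n_post].astype(float)
        ep = ep - ep[:, :n_base].mean(axis=1, keepdims=True)  # baseline
        if np.abs(ep).max() <= REJECT_T:                      # rejection
            kept.append(ep)
    return np.stack(kept) if kept else np.zeros((0, data.shape[0], n_pre + n_post))
```

Averaging the kept epochs of one condition then yields the evoked field that enters the source analysis.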
<p>Dipole strength at a given dipole site was obtained as the square root of the sum of the squared L2 values of the two tangential orientations, for each time point of each data set. The L2-MNE amplitudes were analyzed with a point-wise repeated measures ANOVA with the within-subject factor CONDITION and the between-subject factor GROUP, separately for the spoken and sung modalities. To avoid false positives, a significance criterion of p &lt; 0.01 was used, and effects were considered significant only when they were observed for at least 10 consecutive sampling points (i.e. around 17 ms at 600 Hz) and at least 10 neighboring dipoles. The statistical parametric F values were mapped on a standard cortical surface in time slots of 50 ms in order to display the origin of effects in more detail. Such foci of high activity were further analyzed by averaging the mean activity within distinct clusters in both hemispheres. This type of analysis of multichannel recordings (EEG and MEG) has become an established procedure in both sensor and source space (recent studies include [
<xref rid="pone.0147986.ref012" ref-type="bibr">12</xref>
,
<xref rid="pone.0147986.ref044" ref-type="bibr">44</xref>
]).</p>
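The two computational steps described above can be sketched as follows (a reconstruction with assumed array shapes, not the EMEGS implementation):

```python
import numpy as np

def dipole_strength(tangential):
    """Root of the summed squares of the two tangential L2-MNE
    components; input shape (..., 2, n_times), output (..., n_times)."""
    return np.sqrt((tangential ** 2).sum(axis=-2))

def persists(sig, min_samples=10):
    """Criterion that a point-wise effect must remain significant for at
    least `min_samples` consecutive sampling points (~17 ms at 600 Hz)."""
    run = 0
    for s in sig:
        run = run + 1 if s else 0
        if run >= min_samples:
            return True
    return False
```

Applying `persists` to the boolean time course of p &lt; 0.01 decisions at each dipole, together with the 10-neighboring-dipoles requirement, implements the false-positive guard described above.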
<p>For a comparison of local clusters with high activity, the relevant time windows for defining clusters were based on significant activation differences in the point-wise ANOVA. In line with the literature, we detected clusters of activity in an interval between 200 and 500 ms after onset of the last word in both temporal lobes, and we refer to foci of activity in this time window as “early” components. Because we also detected activation peaks in a second interval between 600 and 1700 ms that depended significantly on the between-subject factor GROUP, we refer to these activations as “late” components. Even though we defined the clusters in a data-driven manner, the dipoles in these clusters overlapped substantially between the hemispheres, separately for each modality (spoken modality: left hemisphere 22 dipoles and right hemisphere 22 dipoles, with 18 corresponding dipoles; sung modality: left hemisphere 16 dipoles and right hemisphere 19 dipoles, with 11 corresponding dipoles). For the “early” cluster comparison, we calculated a repeated measures ANOVA including the within-subject factors SEMANTIC VIOLATION, MELODIC/ PROSODIC VIOLATION and HEMISPHERE and the between-subject factor GROUP. Because no corresponding dipoles were found in the “late” clusters, we calculated, separately for each hemisphere, a repeated measures ANOVA including the within-subject factors SEMANTIC VIOLATION and MELODIC/ PROSODIC VIOLATION and the between-subject factor GROUP.</p>
<p>All analyses were conducted separately for the sung and spoken modality to minimize bias caused by the difference in length of the last word between the sung and spoken versions and the correspondingly different time windows of the resulting magnetic fields. Pairwise post-hoc comparisons between significant and relevant condition pairs were computed and thresholded by Bonferroni correction.</p>
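Bonferroni thresholding of the pairwise post-hoc comparisons amounts to testing each p value against the family-wise alpha divided by the number of comparisons (a generic sketch, not tied to any particular statistics package):

```python
def bonferroni_significant(p_values, alpha=0.05):
    """Return, for each comparison, whether it survives Bonferroni
    correction at family-wise level `alpha`."""
    m = len(p_values)
    return [p <= alpha / m for p in p_values]
```

For three post-hoc comparisons at alpha = .05, each individual p value must therefore fall below .05/3 ≈ .0167 to count as significant.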
</sec>
<sec id="sec010">
<title>Analysis of behavioral data</title>
<p>To evaluate the behavioural data, we computed the mean numbers of hits and correct rejections, separately for judgments of the melodic/ prosodic correctness of the last word (accuracy of pitches) and for judgments of its semantic congruence (accuracy of words), in order to obtain more detailed information about the kinds of errors associated with the different conditions. As for the MEG data, performance scores were analyzed using a repeated measures ANOVA with the factors CONDITION and GROUP, separately for the sung and spoken modalities and, as mentioned before, separately for accuracy of pitches and accuracy of words. ANOVA results are reported when significant at p ≤ 0.05. All p values were adjusted, when necessary, with the Greenhouse-Geisser epsilon correction for nonsphericity. Pairwise post-hoc comparisons between significant and relevant condition pairs were computed and thresholded, as before, by Bonferroni correction.</p>
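The Greenhouse-Geisser epsilon mentioned above can be computed from the covariance matrix of the repeated measures. The sketch below uses the standard textbook formula and is not necessarily the exact estimator of the statistics package used:

```python
import numpy as np

def gg_epsilon_from_cov(cov):
    """Greenhouse-Geisser epsilon from the k x k covariance matrix of
    k repeated measures: 1 means perfect sphericity, 1/(k-1) is the
    lower bound."""
    k = cov.shape[0]
    c = np.eye(k) - np.ones((k, k)) / k      # centering matrix
    dc = c @ cov @ c                         # double-centered covariance
    return np.trace(dc) ** 2 / ((k - 1) * np.sum(dc ** 2))

def gg_epsilon(data):
    """data: subjects x conditions array of scores."""
    return gg_epsilon_from_cov(np.cov(data, rowvar=False))
```

The corrected test then uses epsilon-scaled degrees of freedom, e.g. F with (k−1)ε and (k−1)(n−1)ε degrees of freedom, which is what the fractional df values reported in the Results reflect.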
</sec>
</sec>
<sec sec-type="results" id="sec011">
<title>Results</title>
<sec id="sec012">
<title>Behavioural data</title>
<sec id="sec013">
<title>Spoken modality</title>
<p>The results concerning the accuracy of the ending pitch of the line (melodic/ prosodic violation) revealed a significant main effect of CONDITION (F
<sub>(1.75, 49.11)</sub>
= 6.33, p = .005), and no main effect of or interaction with GROUP. Post-hoc comparison of the mean values (
<xref ref-type="fig" rid="pone.0147986.g002">Fig 2</xref>
) showed that performance was lowest for the condition with the double incongruency S-P- (24.7 ± 5.2 correct responses). This condition differed from the other three (S+P-: 26.5 ± 3.6 c.r., S-P+: 27.1 ± 2.9 c.r. and S+P+: 28.0 ± 2.8 c.r.; post-hocs: S+P+ vs. S-P-, p = .003; S+P- vs. S-P-, p = .011; S-P+ vs. S-P-, p = .020), which did not differ significantly from each other.</p>
<fig id="pone.0147986.g002" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0147986.g002</object-id>
<label>Fig 2</label>
<caption>
<title>Mean values of accuracy of pitches and accuracy of words.</title>
<p>Mean values of accuracy of pitches and accuracy of words (max. 30) for all conditions (S+: correct semantic sense, S-: incorrect semantic sense, P+: correct fundamental frequency/ final pitch, P-: incorrect fundamental frequency/ final pitch) for both spoken and sung modalities. Error bars indicate one standard deviation.</p>
</caption>
<graphic xlink:href="pone.0147986.g002"></graphic>
</fig>
<p>Identifying the semantic accuracy of the last words (accuracy of words) in the spoken modality yielded a significant main effect of CONDITION (F
<sub>(1.69, 47.36)</sub>
= 4.24, p = .026), and again no main effect of or interaction with GROUP. Post-hoc comparison of the mean values showed that performance was nearly the same for all conditions that contained an expectancy violation (S+P-: 25.6 ± 2.4 c.r., S-P+: 26.1 ± 3.4 c.r., S-P-: 26.3 ± 3.0 c.r.), but all yielded lower accuracy than the correctly recited line (S+P+: 27.7 ± 1.4 c.r.; post-hocs: S+P+ vs. S+P-, p &lt; .001; S+P+ vs. S-P+, p = .024; S+P+ vs. S-P-, p = .029).</p>
</sec>
<sec id="sec014">
<title>Sung modality</title>
<p>In the sung modality, the singers reached significantly higher accuracy in the judgment of pitches (25.0 ± 1.3 c.r.) compared to actors (22.4 ± 1.6 c.r.; main effect GROUP: F
<sub>(1, 28)</sub>
= 11.05, p = .002) without an interaction with CONDITION (
<xref ref-type="fig" rid="pone.0147986.g003">Fig 3</xref>
). Also, we found a significant main effect of CONDITION (F
<sub>(1.57, 44.14)</sub>
= 14.33, p < .001). The post-hoc analysis (
<xref ref-type="fig" rid="pone.0147986.g002">Fig 2</xref>
) revealed a significant difference in recognizing the correct pitch combined with a semantic violation (S-P+: 19.4 ± 5.1 c.r.) compared to the other conditions (S+P-: 24.6 ± 4.8 c.r., S-P-: 24.7 ± 4.6 c.r., S+P+: 26.2 ± 2.6 c.r.; all post-hoc comparisons with S-P+: p ≤ .001).</p>
<fig id="pone.0147986.g003" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0147986.g003</object-id>
<label>Fig 3</label>
<caption>
<title>Mean values of accuracy of pitches and words in the sung modality.</title>
<p>Mean values of accuracy of pitches and accuracy of words (max. 30) in the sung modality in comparison of singers and actors. Error bars indicate the standard error.</p>
</caption>
<graphic xlink:href="pone.0147986.g003"></graphic>
</fig>
<p>The accuracy for the semantic sense of the last word in the sung modality again showed a significant main effect of CONDITION (F
<sub>(1.96, 54.88)</sub>
= 3.19; p = .048) with no interaction with or main effect of GROUP. The post-hoc analysis of the main effect CONDITION (
<xref ref-type="fig" rid="pone.0147986.g002">Fig 2</xref>
) confirmed the S-P+ condition as the most difficult to recognize in the sung modality (26.6 ± 2.5 c.r.), differing significantly from the correct line S+P+ (28.1 ± 1.3 c.r.; S-P+ vs. S+P+, p &lt; .007) and showing non-significant trends relative to S-P- (27.3 ± 2.7 c.r.) and S+P- (27.6 ± 1.9 c.r.; S-P+ vs. S+P-, p = .051; S-P+ vs. S-P-, p = .054).</p>
</sec>
</sec>
<sec id="sec015">
<title>Group differences with regard to attention required and fatigue</title>
<p>We observed differences between singers and actors in the level of attention they reported as necessary to perform successfully. Based on the semi-structured interviews after the measurements, subjects were categorized according to three levels of effort. In the group of actors, nine subjects reported the need to pay substantial attention during the experiment and felt exhausted at the end. Four subjects described the required attention as “moderate” and two as “low”. In contrast, ten singers reported a kind of “easy flow” and an “inner rehearsal in the mind”, described also in terms of internal repetition of the spoken and sung lines with a focus on the melodies but also on the texts, and no need for additional attention. These ten singers reported low attention demands, while four other singers reported a moderate level and one a high level.</p>
</sec>
<sec id="sec016">
<title>Magnetoencephalographic data</title>
<sec id="sec017">
<title>Global power</title>
<p>Inspection of the Global Power of L2-MNE solutions (
<xref ref-type="fig" rid="pone.0147986.g004">Fig 4</xref>
) demonstrated a long interval of high activation, starting around 400 ms after last-word onset and peaking around 1000 ms for spoken stimuli, and starting around 600 ms and peaking around 1250 ms for sung stimuli, with much higher cortical activity for singers than for actors.</p>
<fig id="pone.0147986.g004" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0147986.g004</object-id>
<label>Fig 4</label>
<caption>
<title>Global power of minimum norm estimates of all dipoles.</title>
<p>Global power of minimum norm estimates of all dipoles, separated for the different conditions, for singers and actors, and for both spoken and sung modalities. Shaded in blue are the time windows used for the analysis of the early and late activity.</p>
</caption>
<graphic xlink:href="pone.0147986.g004"></graphic>
</fig>
<p>The results of the point-wise repeated measures ANOVA revealed two time windows for both modalities in which the experimental manipulations resulted in different brain responses. Anticipating briefly the statistical results below, during the early time window (200–500 ms), brain responses differed in predicted ways between the violations, but independently of group. In the late time window (600–1700 ms), no difference between conditions was found, but singers displayed an unexpectedly long-lasting and substantial level of activity. Because the topography of evoked activity was rather stable and varied only minimally within these time windows, we will present only averaged responses for these intervals.</p>
</sec>
</sec>
<sec id="sec018">
<title>Statistical analysis of the early activity</title>
<p>The pointwise repeated measures ANOVA revealed that the factors SEMANTIC VIOLATION and PROSODIC (spoken modality)/ MELODIC (sung modality) VIOLATION had significant effects in the left and right temporal regions, averaged over the time interval of 200–500 ms (
<xref ref-type="fig" rid="pone.0147986.g005">Fig 5</xref>
). Because of corresponding dipole groups in the temporal areas of both hemispheres, we included the factor HEMISPHERE in the analysis of early activity. For both modalities, no significant statistical effects were detectable for the between-subject factor GROUP or for any interaction with GROUP according to the thresholds described in the methods section.</p>
<fig id="pone.0147986.g005" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0147986.g005</object-id>
<label>Fig 5</label>
<caption>
<title>Maps and mean neural activity for respective clusters of dipoles.</title>
<p>A. Top: Mapping of the F-values for the interaction CONDITION and HEMISPHERE on a cortical surface for average time intervals, for both spoken and sung modalities. Bottom: Mean neural activity for the respective clusters of dipoles for all conditions and for both spoken and sung modalities. Error bars denote one standard deviation. B. Top: Mapping of the F-values for the main effect group on a cortical surface for average time intervals, for both spoken and sung modalities. Bottom: Mean neural activity for the respective clusters of dipoles for both groups and for both spoken and sung modalities. Error bars denote one standard deviation.</p>
</caption>
<graphic xlink:href="pone.0147986.g005"></graphic>
</fig>
<sec id="sec019">
<title>Spoken modality</title>
<p>Repeated measures ANOVAs for both temporal clusters (left temporal: 200–500 ms, 22 dipoles; right temporal: 200–500 ms, 22 dipoles; with 18 corresponding dipoles) revealed a significant main effect of the factor HEMISPHERE (F
<sub>(1,28)</sub>
= 14.73, p = .001), representing higher activation on the right hemisphere (21.97 ± 5.45 nAm
<sup>2</sup>
) than on the left hemisphere (17.39 ± 4.97 nAm
<sup>2</sup>
). Furthermore, there was a main effect of PROSODIC VIOLATION (F
<sub>(1,28)</sub>
= 32.97, p < .001) comparing prosodic violations (S+P- and S-P-: 21.99 ± 6.44 nAm
<sup>2</sup>
) to conditions without violation (S+P+ and S-P+: 17.38 ± 5.77 nAm
<sup>2</sup>
), but no effect of SEMANTIC VIOLATION. The only significant interaction SEMANTIC VIOLATION X HEMISPHERE (F
<sub>(1,28)</sub>
= 65.76, p = .007) reflected higher activity for conditions with semantic violations (mean of: S-P+ and S-P-: 18.01 ± 5.46 nAm
<sup>2</sup>
), compared to conditions without semantic violations (mean of: S+P+ and S+P-: 16.77 ± 5.48 nAm
<sup>2</sup>
) on the left hemisphere (t
<sub>(29)</sub>
= 3.35, p = .002) and the opposite pattern (mean of: S-P+ and S-P-: 21.54 ± 7.93 nAm
<sup>2</sup>
; mean of: S+P+ and S+P-: 25.86 ± 13.08 nAm
<sup>2</sup>
) on the right hemisphere (t
<sub>(29)</sub>
= -3.35, p = .003).</p>
<p>Thus, for the spoken modality, there was generally higher activity on the right hemisphere. Both hemispheres were equally involved in the processing of prosodic violations, while there was a hemispheric specialization for processing semantic violations. For the latter, the left hemisphere displayed stronger activity as a response to violated sentences than correct sentences, i.e. a typical N400 effect.</p>
</sec>
<sec id="sec020">
<title>Sung modality</title>
<p>Repeated measures ANOVAs for both temporal clusters (left temporal: 200–500 ms, 16 dipoles; right temporal: 200–500 ms, 19 dipoles; both hemispheres: 11 corresponding dipoles) revealed a significant main effect of the factor HEMISPHERE (F
<sub>(1,28)</sub>
= 34.23, p < .001) representing higher activation on the right hemisphere (19.86 ± 4.43 nAm
<sup>2</sup>
) than on the left hemisphere (13.52 ± 3.77 nAm
<sup>2</sup>
). Moreover, a significantly higher level of activity was detected for conditions with a MELODIC VIOLATION than for those without (F
<sub>(1,28)</sub>
= 10.17, p = .004; S+P- and S-P-: 17.46 ± 3.24 nAm
<sup>2</sup>
vs. S+P+ and S-P+: 15.91 ± 4.13 nAm
<sup>2</sup>
) and SEMANTIC VIOLATION (F
<sub>(1,28)</sub>
= 29.17, p < .001; S-P+ and S-P-: 18.25 ± 3.42 nAm
<sup>2</sup>
vs. S+P+ and S+P-: 15.13 ± 2.81 nAm
<sup>2</sup>
). These main effects were modulated by the significant interaction SEMANTIC VIOLATION X MELODIC VIOLATION X HEMISPHERE (F
<sub>(1,28)</sub>
= 39.44, p = .014), which itself resulted from a significant interaction of SEMANTIC VIOLATION X MELODIC VIOLATION present only on the right hemisphere (F
<sub>(1,29)</sub>
= 49.71, p = .010), not present on the left hemisphere (F
<sub>(1,29)</sub>
= 1.29; n.s.). Post-hoc analysis for this interaction revealed a similar level of activity for the conditions with a semantic violation (S-P+: 21.34 ± 3.19 nAm
<sup>2</sup>
, S-P-: 21.65 ± 4.32 nAm
<sup>2</sup>
), while a significant difference emerged when comparing the original line (S+P+: 16.78 ± 2.07 nAm
<sup>2</sup>
) to the condition that only contained a melodic violation (S+P-: 19.66 ± 3.11 nAm
<sup>2</sup>
; t
<sub>(29)</sub>
= -3.98, p < .001).</p>
<p>Thus, for the sung modality, there was generally more activity on the right hemisphere. Both hemispheres responded to semantic and melodic violations, while an additional interaction of melodic and semantic violations was present only in the right hemisphere. Here, stronger activity in response to a melodic violation was found only when there was no semantic violation.</p>
</sec>
</sec>
<sec id="sec021">
<title>Statistical analysis of the late activity</title>
<p>The pointwise repeated measures ANOVA revealed, for both modalities, a significant main effect of the between-subject factor GROUP, representing higher neuronal activity for the singers than the actors localized in right temporal and left parietal regions in a late time window (
<xref ref-type="fig" rid="pone.0147986.g005">Fig 5</xref>
). There was neither a significant effect of the factors SEMANTIC VIOLATION and PROSODIC/ MELODIC VIOLATION nor any interaction with GROUP, according to the thresholds described in the methods section. Because no corresponding dipole groups were found on the homologous hemisphere, the additional factor HEMISPHERE was not included in these analyses.</p>
<sec id="sec022">
<title>Spoken modality</title>
<p>Repeated measures ANOVAs on time-averaged activity levels in the left parietal cluster (600–900 ms, 10 dipoles) revealed a significant GROUP effect (F
<sub>(1,28)</sub>
= 4.68, p = .039), with higher activity for the singers (41.11 ± 13.2 nAm
<sup>2</sup>
) than the actors (29.64 ± 5.82 nAm
<sup>2</sup>
), independent of the stimulus condition. Additionally, in the right temporal cluster (800–1700 ms, 19 dipoles), singers displayed significantly higher activity (22.56 ± 5.21 nAm
<sup>2</sup>
) than actors (16.04 ± 2.34 nAm
<sup>2</sup>
; F
<sub>(1,28)</sub>
= 11.20, p = .002).</p>
</sec>
<sec id="sec023">
<title>Sung modality</title>
<p>Repeated measures ANOVAs on time-averaged activity levels revealed a pattern of results similar to that of the spoken modality. In the left parietal cluster (800–1200 ms, 21 dipoles), we found a significant GROUP effect (F
<sub>(1,28)</sub>
= 8.27, p = .008) with higher activity for the singers (40.94 ± 4.42 nAm
<sup>2</sup>
) than the actors (22.87 ± 0.91 nAm
<sup>2</sup>
). Additionally, for the right temporal cluster (1100–1700 ms, 18 dipoles), we found significantly higher activity for the singers (21.58 ± 8.41 nAm
<sup>2</sup>
) compared to the actors (14.21 ± 2.14 nAm
<sup>2</sup>
; F
<sub>(1,28)</sub>
= 8.44, p = .007).</p>
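The GROUP effects above compare two groups, and with two groups an F(1, 28) from a one-way ANOVA is exactly the square of an independent-samples t statistic. A sketch with hypothetical data (assuming equal groups of 15 singers and 15 actors, which matches the reported degrees of freedom; the group means loosely follow the left parietal values above) illustrates the equivalence:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical time-averaged left parietal activity (nAm^2);
# 15 participants per group yields the reported df of F(1,28).
singers = rng.normal(40.94, 8.0, 15)
actors = rng.normal(22.87, 8.0, 15)

# GROUP effect as a one-way ANOVA...
f_val, p_anova = stats.f_oneway(singers, actors)
# ...which, for two groups, equals the squared two-sample t statistic
# (equal variances assumed, as in the standard ANOVA model).
t_val, p_ttest = stats.ttest_ind(singers, actors)

print(f"F(1,28) = {f_val:.2f}  (t^2 = {t_val**2:.2f}), p = {p_anova:.4f}")
```

The identity holds because the F distribution with (1, k) degrees of freedom is the distribution of a squared t variable with k degrees of freedom, so both tests return the same p-value.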
</sec>
</sec>
</sec>
<sec sec-type="conclusions" id="sec024">
<title>Discussion</title>
<p>The aim of this study was to compare linguistic and musical processing in two groups of highly trained voice users, i.e. professional singers and actors. We employed rhyme sequences from German art songs and presented analogous semantic and/or melodic/prosodic violations in sung and recited versions of the material. MEG measurements were used to identify functional brain activity with regard to the type of expertise. Behavioral data revealed greater accuracy of pitch detection in the sung modality for singers than for actors, while there were no detectable group-specific advantages for actors in either the sung or the recited material. Although previous studies have pointed to interdependence in the neuronal processing of linguistic and musical dimensions, both in a spoken and a sung modality [
<xref rid="pone.0147986.ref005" ref-type="bibr">5</xref>
,
<xref rid="pone.0147986.ref030" ref-type="bibr">30</xref>
,
<xref rid="pone.0147986.ref045" ref-type="bibr">45</xref>
], this is the first study presenting combinations of semantic and melodic/prosodic expectancy violations for speaking and singing in a complex but ecologically valid context. Confirming an intertwined neuronal network for music and speech, MEG data analysis disclosed condition- and modality-specific differences of “early” temporal activity (200–500 ms) on both hemispheres in homologous clusters, independent of the kind of expertise. Significant group differences appeared as “late” neuronal activity (600–1700 ms) for both stimulus modalities in right temporal and left parietal areas. We will discuss the results of the behavioral data and the two time windows in turn.</p>
<sec id="sec025">
<title>Behavioural data</title>
<p>In the behavioral data, we did not find an effect of modality-specific expertise, apart from higher accuracy of pitch discrimination in singers for the sung modality, which can be explained by a heightened sensitivity to musical patterns in their familiar domain. Interestingly, while performance in terms of word accuracy in the sung and spoken modality was at nearly the same level across conditions, both groups performed worse at discriminating correct pitches in the case of a semantic violation (S-P+) in the sung modality. This might result from the high attention needed to recognize the semantic sense of the words, at the expense of pitch discrimination. Because pronunciation differs between singing and speaking, vowels gain stronger emphasis than consonants in singing. This might have created particularly challenging conditions for the discrimination of closely related phonemes, such as in the rhyme words that were used to replace the original words.</p>
</sec>
<sec id="sec026">
<title>Early neuronal activity related to linguistic and musical context</title>
<p>Summarizing the similarities of the early effects for the sung and spoken modalities, high neuronal activity was measured especially after melodic/prosodic violations in predominantly right temporal areas. Consequently, the neuronal networks involved in processing both modalities exhibited higher activity in response to violations of the final pitch of a line than to semantic violations. In the present design, the rule system for syntax, i.e. the melodic/prosodic aspect, therefore represents a global characteristic of both sung and recited phrases and indicates a global syntactic system represented bilaterally, with dominance of the right hemisphere when factors involving pitch (melody, prosody) are violated (see e.g. [
<xref rid="pone.0147986.ref046" ref-type="bibr">46</xref>
]). In line with our findings, previous studies investigating linguistic aspects of speech revealed a dominance of left temporal areas [
<xref rid="pone.0147986.ref013" ref-type="bibr">13</xref>
,
<xref rid="pone.0147986.ref015" ref-type="bibr">15</xref>
,
<xref rid="pone.0147986.ref016" ref-type="bibr">16</xref>
], especially if linguistic stimuli were presented in a complex syntactical structure. In comparison, a dominance of right temporal areas was found after violation of a musical order system such as a chord sequence or a melody line [
<xref rid="pone.0147986.ref017" ref-type="bibr">17</xref>
,
<xref rid="pone.0147986.ref019" ref-type="bibr">19</xref>
,
<xref rid="pone.0147986.ref023" ref-type="bibr">23</xref>
,
<xref rid="pone.0147986.ref047" ref-type="bibr">47</xref>
]. This was the case when linguistic and musical tasks were performed with human voices, or with mixed animal or vocally similar sounds demanding high attention for frequency analysis [
<xref rid="pone.0147986.ref048" ref-type="bibr">48</xref>
,
<xref rid="pone.0147986.ref049" ref-type="bibr">49</xref>
].</p>
<p>The above-mentioned similarities during the processing of recited and sung phrases are contrasted by different interactions of effects for the sung and spoken modalities, indicating a more complex dependence of information processing for semantic and prosodic/melodic content on both hemispheres. While there was predominantly left-temporal lateralized activity after semantic expectancy violations for the recited sequences, which is in line with findings for the classical N400 effect [
<xref rid="pone.0147986.ref050" ref-type="bibr">50</xref>
,
<xref rid="pone.0147986.ref051" ref-type="bibr">51</xref>
], this was opposed by right-dominant temporal activity in response to melodic violations of semantically correct endings in the sung version. The semantic content and the musical syntactic form of the melody line appear to be strongly connected in the modality of singing, representing intertwined networks that react to different degrees after expectancy violation. These findings confirm recent research revealing a more bilateral temporal network dependent on modality-specific aspects of sung and spoken units [
<xref rid="pone.0147986.ref005" ref-type="bibr">5</xref>
,
<xref rid="pone.0147986.ref030" ref-type="bibr">30</xref>
,
<xref rid="pone.0147986.ref045" ref-type="bibr">45</xref>
]. In contrast to these studies, the present study combined syntactic and semantic violations in a complete rhyme sequence in both a spoken and a sung modality. To the best of our knowledge, the only other design that also used original excerpts (i.e. from different romantic opera composers) to compare musicians and laymen, though presented only in a sung modality, is that of Besson, Schon and Bonnel [
<xref rid="pone.0147986.ref028" ref-type="bibr">28</xref>
,
<xref rid="pone.0147986.ref029" ref-type="bibr">29</xref>
]. In their study, the simultaneous violation of the semantic and the syntactic sense at the end of a sung line resulted in an N400 and a P600 component, suggesting that semantic and syntactic aspects of language and music are processed by independent systems; this was, however, not confirmed in subsequent investigations. In contrast, vocally generated stimuli that simultaneously place high demands on linguistic and musical processing seem to involve middle and superior temporal areas, acting as an intertwined network. This network is adapted in a modality-dependent way to different conditions [
<xref rid="pone.0147986.ref005" ref-type="bibr">5</xref>
,
<xref rid="pone.0147986.ref052" ref-type="bibr">52</xref>
<xref rid="pone.0147986.ref054" ref-type="bibr">54</xref>
].</p>
</sec>
<sec id="sec027">
<title>Group differences in late neuronal activity</title>
<p>During the analysis, we detected differences in brain activation in an unexpectedly late and long-lasting time window (up to 1700 ms after stimulus onset), with higher activation for singers than for actors. The activity was localized in right temporal areas, similar to the early activation clusters generated by the semantic and syntactic incongruities, as well as in parietal areas of the left hemisphere, which are known to be involved in higher-order music cognition [
<xref rid="pone.0147986.ref036" ref-type="bibr">36</xref>
].</p>
<p>The specific role of temporal areas on both hemispheres in speech and music processing was discussed above in the interpretation of the “early” activation clusters. We interpret the renewed appearance of neuronal activity in the right temporal area as a special form of working memory function. In contrast to actors, singers reported in the semi-structured interviews that they repeated a heard sequence in their minds (inner rehearsal). Thus, the late right temporal activity might stem from cognitive processes representing an illusory perception of previously perceived auditory stimuli. Studies in the field of music psychology have described a mental representation of music in musicians, who internally hear sounds after the onset of the physical stimulus [
<xref rid="pone.0147986.ref037" ref-type="bibr">37</xref>
,
<xref rid="pone.0147986.ref039" ref-type="bibr">39</xref>
], without any neurophysiological evidence of this phenomenon having been reported so far. Right temporal lobe activity as a correlate of a very vivid mental representation of music is also supported by a recent fMRI study of musical imagery of familiar tunes. This study reported a relationship between activity in right secondary auditory areas and the subjective vividness of mental imagery [
<xref rid="pone.0147986.ref055" ref-type="bibr">55</xref>
]. Because we found the higher activity in singers for both the sung and the spoken modality, we assume that this is the result of a transfer effect after extensive training. There is some evidence for the existence of two separate working memory systems in musicians for musical (nonspeech) and speech material: a phonological loop and a tonal loop. Thus, training effects of the tonal loop seem to carry over to the phonological loop, involving highly similar neural correlates [
<xref rid="pone.0147986.ref056" ref-type="bibr">56</xref>
].</p>
<p>The functional role of the inferior parietal cortex involves auditory-verbal working memory and short-term memory for musical pitch, especially on the right hemisphere of musically trained subjects [
<xref rid="pone.0147986.ref057" ref-type="bibr">57</xref>
]. More generally, the inferior parietal cortex has been associated with the integration of sensory and motor signals for the somatosensory guidance of movements [
<xref rid="pone.0147986.ref058" ref-type="bibr">58</xref>
]. Previous neuroimaging studies have documented its response to both speech and music perception [
<xref rid="pone.0147986.ref049" ref-type="bibr">49</xref>
]. A model of speech motor control [
<xref rid="pone.0147986.ref059" ref-type="bibr">59</xref>
] posits a role of the parietal cortex (PC) in a feed-forward control mechanism for articulatory motor commands. In this model, the PC acts as a control system for somatosensory feedback from the vocal tract by comparing the actual kinaesthetic feedback with the expectation of the pronounced sound. Accordingly, we assume that the increment in PC activity could reflect enhanced processing of a mismatch between intention, action and consequences, and thus allow for more rapid sensorimotor adaptations/corrections in singers than in actors. Another previous study presented evidence that increased activity of receptive systems subserves the precise transformation of highly automatic speech motor sequences into appropriately adjusted motor patterns for singing [
<xref rid="pone.0147986.ref032" ref-type="bibr">32</xref>
]. Kleber and coauthors demonstrated that imagined and overt singing involve partly different brain systems in singers, with imagined singing activating a large frontal and parietal network, indicating increased involvement of higher-order cognitive processes during mental imagery [
<xref rid="pone.0147986.ref060" ref-type="bibr">60</xref>
]. These recent findings on receptive and expressive functions complement our results and suggest an important role of the parietal cortex in music processing in singers. Additionally, it has been shown that complex mental transformations of musical material, such as the mental reversal of imagined melodies, are related to activity in the posterior parietal cortex [
<xref rid="pone.0147986.ref036" ref-type="bibr">36</xref>
]. Performing music in the mind is a technique used by professional musicians to rehearse various aspects of a musical piece, for example to mentally revise difficult parts of a previously executed musical passage. As such, our unexpected findings integrate well into the existing literature, but it remains an open question whether the phenomenon of audiation and its neurophysiological counterpart comes into existence through training or whether it is a prerequisite for becoming a professional musician.</p>
</sec>
</sec>
<sec sec-type="conclusions" id="sec028">
<title>Conclusions</title>
<p>In conclusion, our results of early and late neuronal activation are in line with studies emphasizing a bilateral neuronal network during linguistic and musical auditory processing, which can be tuned according to the level of mental demand [
<xref rid="pone.0147986.ref005" ref-type="bibr">5</xref>
,
<xref rid="pone.0147986.ref011" ref-type="bibr">11</xref>
,
<xref rid="pone.0147986.ref052" ref-type="bibr">52</xref>
,
<xref rid="pone.0147986.ref054" ref-type="bibr">54</xref>
,
<xref rid="pone.0147986.ref061" ref-type="bibr">61</xref>
]. Regarding the effect of experience, we did not find any early differences in neuronal activity evoked by semantic and/or melodic/prosodic violations. In contrast, rather late and long-lasting time windows were characterized by strong activity in left parietal and right temporal areas. We propose that these effects are related to stronger mental imagery and higher-order music cognition in singers. This might be an effect of musical training or a prerequisite for it.</p>
</sec>
</body>
<back>
<ack>
<p>We would like to thank Axel Heil for helpful comments and suggestions, our participants for their time and cooperation, and Karin Berning for help with the data acquisition.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="pone.0147986.ref001">
<label>1</label>
<mixed-citation publication-type="journal">
<name>
<surname>Moreno</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Marques</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Santos</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Santos</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Castro</surname>
<given-names>SL</given-names>
</name>
,
<name>
<surname>Besson</surname>
<given-names>M</given-names>
</name>
.
<article-title>Musical training influences linguistic abilities in 8-year-old children: more evidence for brain plasticity</article-title>
.
<source>Cereb Cortex</source>
<year>2009</year>
<month>3</month>
;
<volume>19</volume>
(
<issue>3</issue>
):
<fpage>712</fpage>
<lpage>723</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1093/cercor/bhn120">10.1093/cercor/bhn120</ext-link>
</comment>
<pub-id pub-id-type="pmid">18832336</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147986.ref002">
<label>2</label>
<mixed-citation publication-type="journal">
<name>
<surname>Koelsch</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Kasper</surname>
<given-names>E</given-names>
</name>
,
<name>
<surname>Sammler</surname>
<given-names>D</given-names>
</name>
,
<name>
<surname>Schulze</surname>
<given-names>K</given-names>
</name>
,
<name>
<surname>Gunter</surname>
<given-names>T</given-names>
</name>
,
<name>
<surname>Friederici</surname>
<given-names>AD</given-names>
</name>
.
<article-title>Music, language and meaning: brain signatures of semantic processing</article-title>
.
<source>Nat Neurosci</source>
<year>2004</year>
<month>3</month>
;
<volume>7</volume>
(
<issue>3</issue>
):
<fpage>302</fpage>
<lpage>307</lpage>
.
<pub-id pub-id-type="pmid">14983184</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147986.ref003">
<label>3</label>
<mixed-citation publication-type="journal">
<name>
<surname>Peretz</surname>
<given-names>I</given-names>
</name>
,
<name>
<surname>Gosselin</surname>
<given-names>N</given-names>
</name>
,
<name>
<surname>Belin</surname>
<given-names>P</given-names>
</name>
,
<name>
<surname>Zatorre</surname>
<given-names>RJ</given-names>
</name>
,
<name>
<surname>Plailly</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Tillmann</surname>
<given-names>B</given-names>
</name>
.
<article-title>Music lexical networks: the cortical organization of music recognition</article-title>
.
<source>Ann N Y Acad Sci</source>
<year>2009</year>
<month>7</month>
;
<volume>1169</volume>
:
<fpage>256</fpage>
<lpage>265</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1111/j.1749-6632.2009.04557.x">10.1111/j.1749-6632.2009.04557.x</ext-link>
</comment>
<pub-id pub-id-type="pmid">19673789</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147986.ref004">
<label>4</label>
<mixed-citation publication-type="journal">
<name>
<surname>Schneider</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Schonle</surname>
<given-names>PW</given-names>
</name>
,
<name>
<surname>Altenmuller</surname>
<given-names>E</given-names>
</name>
,
<name>
<surname>Munte</surname>
<given-names>TF</given-names>
</name>
.
<article-title>Using musical instruments to improve motor skill recovery following a stroke</article-title>
.
<source>J Neurol</source>
<year>2007</year>
<month>10</month>
;
<volume>254</volume>
(
<issue>10</issue>
):
<fpage>1339</fpage>
<lpage>1346</lpage>
.
<pub-id pub-id-type="pmid">17260171</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147986.ref005">
<label>5</label>
<mixed-citation publication-type="journal">
<name>
<surname>Schon</surname>
<given-names>D</given-names>
</name>
,
<name>
<surname>Gordon</surname>
<given-names>R</given-names>
</name>
,
<name>
<surname>Campagne</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Magne</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Astesano</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Anton</surname>
<given-names>JL</given-names>
</name>
,
<etal>et al</etal>
<article-title>Similar cerebral networks in language, music and song perception</article-title>
.
<source>Neuroimage</source>
<year>2010</year>
<month>5</month>
<day>15</day>
;
<volume>51</volume>
(
<issue>1</issue>
):
<fpage>450</fpage>
<lpage>461</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.neuroimage.2010.02.023">10.1016/j.neuroimage.2010.02.023</ext-link>
</comment>
<pub-id pub-id-type="pmid">20156575</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147986.ref006">
<label>6</label>
<mixed-citation publication-type="journal">
<name>
<surname>Koelsch</surname>
<given-names>S</given-names>
</name>
.
<article-title>Neural substrates of processing syntax and semantics in music</article-title>
.
<source>Curr Opin Neurobiol</source>
<year>2005</year>
<month>4</month>
;
<volume>15</volume>
(
<issue>2</issue>
):
<fpage>207</fpage>
<lpage>212</lpage>
.
<pub-id pub-id-type="pmid">15831404</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147986.ref007">
<label>7</label>
<mixed-citation publication-type="journal">
<name>
<surname>Zatorre</surname>
<given-names>RJ</given-names>
</name>
,
<name>
<surname>Peretz</surname>
<given-names>I</given-names>
</name>
,
<name>
<surname>Penhune</surname>
<given-names>V</given-names>
</name>
.
<article-title>Neuroscience and Music ("Neuromusic") III: disorders and plasticity. Preface</article-title>
.
<source>Ann N Y Acad Sci</source>
<year>2009</year>
<month>7</month>
;
<volume>1169</volume>
:
<fpage>1</fpage>
<lpage>2</lpage>
.
<pub-id pub-id-type="pmid">19673749</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147986.ref008">
<label>8</label>
<mixed-citation publication-type="journal">
<name>
<surname>Kutas</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Hillyard</surname>
<given-names>SA</given-names>
</name>
.
<article-title>Reading senseless sentences: brain potentials reflect semantic incongruity</article-title>
.
<source>Science</source>
<year>1980</year>
<month>1</month>
<day>11</day>
;
<volume>207</volume>
(
<issue>4427</issue>
):
<fpage>203</fpage>
<lpage>205</lpage>
.
<pub-id pub-id-type="pmid">7350657</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147986.ref009">
<label>9</label>
<mixed-citation publication-type="journal">
<name>
<surname>Lau</surname>
<given-names>EF</given-names>
</name>
,
<name>
<surname>Phillips</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Poeppel</surname>
<given-names>D</given-names>
</name>
.
<article-title>A cortical network for semantics: (de)constructing the N400</article-title>
.
<source>Nat Rev Neurosci</source>
<year>2008</year>
<month>12</month>
;
<volume>9</volume>
(
<issue>12</issue>
):
<fpage>920</fpage>
<lpage>933</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1038/nrn2532">10.1038/nrn2532</ext-link>
</comment>
<pub-id pub-id-type="pmid">19020511</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147986.ref010">
<label>10</label>
<mixed-citation publication-type="journal">
<name>
<surname>Van Petten</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Luka</surname>
<given-names>BJ</given-names>
</name>
.
<article-title>Neural localization of semantic context effects in electromagnetic and hemodynamic studies</article-title>
.
<source>Brain Lang</source>
<year>2006</year>
<month>6</month>
;
<volume>97</volume>
(
<issue>3</issue>
):
<fpage>279</fpage>
<lpage>293</lpage>
.
<pub-id pub-id-type="pmid">16343606</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147986.ref011">
<label>11</label>
<mixed-citation publication-type="journal">
<name>
<surname>Maess</surname>
<given-names>B</given-names>
</name>
,
<name>
<surname>Herrmann</surname>
<given-names>CS</given-names>
</name>
,
<name>
<surname>Hahne</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Nakamura</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Friederici</surname>
<given-names>AD</given-names>
</name>
.
<article-title>Localizing the distributed language network responsible for the N400 measured by MEG during auditory sentence processing</article-title>
.
<source>Brain Res</source>
<year>2006</year>
<month>6</month>
<day>22</day>
;
<volume>1096</volume>
(
<issue>1</issue>
):
<fpage>163</fpage>
<lpage>172</lpage>
.
<pub-id pub-id-type="pmid">16769041</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147986.ref012">
<label>12</label>
<mixed-citation publication-type="journal">
<name>
<surname>Dobel</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Junghofer</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Breitenstein</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Klauke</surname>
<given-names>B</given-names>
</name>
,
<name>
<surname>Knecht</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Pantev</surname>
<given-names>C</given-names>
</name>
,
<etal>et al</etal>
<article-title>New names for known things: on the association of novel word forms with existing semantic information</article-title>
.
<source>J Cogn Neurosci</source>
<year>2010</year>
<month>6</month>
;
<volume>22</volume>
(
<issue>6</issue>
):
<fpage>1251</fpage>
<lpage>1261</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1162/jocn.2009.21297">10.1162/jocn.2009.21297</ext-link>
</comment>
<pub-id pub-id-type="pmid">19583468</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147986.ref013">
<label>13</label>
<mixed-citation publication-type="journal">
<name>
<surname>Hirschfeld</surname>
<given-names>G</given-names>
</name>
,
<name>
<surname>Zwitserlood</surname>
<given-names>P</given-names>
</name>
,
<name>
<surname>Dobel</surname>
<given-names>C</given-names>
</name>
.
<article-title>Effects of language comprehension on visual processing—MEG dissociates early perceptual and late N400 effects</article-title>
.
<source>Brain Lang</source>
<year>2011</year>
<month>2</month>
;
<volume>116</volume>
(
<issue>2</issue>
):
<fpage>91</fpage>
<lpage>96</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.bandl.2010.07.002">10.1016/j.bandl.2010.07.002</ext-link>
</comment>
<pub-id pub-id-type="pmid">20708788</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147986.ref014">
<label>14</label>
<mixed-citation publication-type="journal">
<name>
<surname>Geukes</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Huster</surname>
<given-names>RJ</given-names>
</name>
,
<name>
<surname>Wollbrink</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Junghofer</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Zwitserlood</surname>
<given-names>P</given-names>
</name>
,
<name>
<surname>Dobel</surname>
<given-names>C</given-names>
</name>
.
<article-title>A large N400 but no BOLD effect—comparing source activations of semantic priming in simultaneous EEG-fMRI</article-title>
.
<source>PLOS One</source>
<year>2013</year>
<month>12</month>
<day>31</day>
;
<volume>8</volume>
(
<issue>12</issue>
):
<fpage>e84029</fpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1371/journal.pone.0084029">10.1371/journal.pone.0084029</ext-link>
</comment>
<pub-id pub-id-type="pmid">24391871</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147986.ref015">
<label>15</label>
<mixed-citation publication-type="journal">
<name>
<surname>Friederici</surname>
<given-names>AD</given-names>
</name>
,
<name>
<surname>Kotz</surname>
<given-names>SA</given-names>
</name>
.
<article-title>The brain basis of syntactic processes: functional imaging and lesion studies</article-title>
.
<source>Neuroimage</source>
<year>2003</year>
<month>11</month>
;
<volume>20</volume>
<issue>Suppl 1</issue>
:
<fpage>S8</fpage>
<lpage>17</lpage>
.
<pub-id pub-id-type="pmid">14597292</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147986.ref016">
<label>16</label>
<mixed-citation publication-type="journal">
<name>
<surname>Wolters</surname>
<given-names>CH</given-names>
</name>
,
<name>
<surname>Anwander</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Maess</surname>
<given-names>B</given-names>
</name>
,
<name>
<surname>Macleod</surname>
<given-names>RS</given-names>
</name>
,
<name>
<surname>Friederici</surname>
<given-names>AD</given-names>
</name>
.
<article-title>The influence of volume conduction effects on the EEG/MEG reconstruction of the sources of the Early Left Anterior Negativity</article-title>
.
<source>Conf Proc IEEE Eng Med Biol Soc</source>
<year>2004</year>
;
<volume>5</volume>
:
<fpage>3569</fpage>
<lpage>3572</lpage>
.
<pub-id pub-id-type="pmid">17271062</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147986.ref017">
<label>17</label>
<mixed-citation publication-type="journal">
<name>
<surname>Koelsch</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Gunter</surname>
<given-names>T</given-names>
</name>
,
<name>
<surname>Friederici</surname>
<given-names>AD</given-names>
</name>
,
<name>
<surname>Schroger</surname>
<given-names>E</given-names>
</name>
.
<article-title>Brain indices of music processing: "nonmusicians" are musical</article-title>
.
<source>J Cogn Neurosci</source>
<year>2000</year>
<month>5</month>
;
<volume>12</volume>
(
<issue>3</issue>
):
<fpage>520</fpage>
<lpage>541</lpage>
.
<pub-id pub-id-type="pmid">10931776</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147986.ref018">
<label>18</label>
<mixed-citation publication-type="journal">
<name>
<surname>Steinbeis</surname>
<given-names>N</given-names>
</name>
,
<name>
<surname>Koelsch</surname>
<given-names>S</given-names>
</name>
.
<article-title>Shared neural resources between music and language indicate semantic processing of musical tension-resolution patterns</article-title>
.
<source>Cereb Cortex</source>
<year>2008</year>
<month>5</month>
;
<volume>18</volume>
(
<issue>5</issue>
):
<fpage>1169</fpage>
<lpage>1178</lpage>
.
<pub-id pub-id-type="pmid">17720685</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147986.ref019">
<label>19</label>
<mixed-citation publication-type="journal">
<name>
<surname>Maess</surname>
<given-names>B</given-names>
</name>
,
<name>
<surname>Koelsch</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Gunter</surname>
<given-names>TC</given-names>
</name>
,
<name>
<surname>Friederici</surname>
<given-names>AD</given-names>
</name>
.
<article-title>Musical syntax is processed in Broca's area: an MEG study</article-title>
.
<source>Nat Neurosci</source>
<year>2001</year>
<month>5</month>
;
<volume>4</volume>
(
<issue>5</issue>
):
<fpage>540</fpage>
<lpage>545</lpage>
.
<pub-id pub-id-type="pmid">11319564</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147986.ref020">
<label>20</label>
<mixed-citation publication-type="journal">
<name>
<surname>Koelsch</surname>
<given-names>S</given-names>
</name>
.
<article-title>Toward a neural basis of music perception—a review and updated model</article-title>
.
<source>Front Psychol</source>
<year>2011</year>
;
<volume>2</volume>
:
<fpage>110</fpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.3389/fpsyg.2011.00110">10.3389/fpsyg.2011.00110</ext-link>
</comment>
<pub-id pub-id-type="pmid">21713060</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147986.ref021">
<label>21</label>
<mixed-citation publication-type="journal">
<name>
<surname>Koelsch</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Maess</surname>
<given-names>B</given-names>
</name>
,
<name>
<surname>Gunter</surname>
<given-names>TC</given-names>
</name>
,
<name>
<surname>Friederici</surname>
<given-names>AD</given-names>
</name>
.
<article-title>Neapolitan chords activate the area of Broca. A magnetoencephalographic study</article-title>
.
<source>Ann N Y Acad Sci</source>
<year>2001</year>
<month>6</month>
;
<volume>930</volume>
:
<fpage>420</fpage>
<lpage>421</lpage>
.
<pub-id pub-id-type="pmid">11458855</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147986.ref022">
<label>22</label>
<mixed-citation publication-type="journal">
<name>
<surname>Hyde</surname>
<given-names>KL</given-names>
</name>
,
<name>
<surname>Peretz</surname>
<given-names>I</given-names>
</name>
,
<name>
<surname>Zatorre</surname>
<given-names>RJ</given-names>
</name>
.
<article-title>Evidence for the role of the right auditory cortex in fine pitch resolution</article-title>
.
<source>Neuropsychologia</source>
<year>2008</year>
<month>1</month>
<day>31</day>
;
<volume>46</volume>
(
<issue>2</issue>
):
<fpage>632</fpage>
<lpage>639</lpage>
.
<pub-id pub-id-type="pmid">17959204</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147986.ref023">
<label>23</label>
<mixed-citation publication-type="journal">
<name>
<surname>Warrier</surname>
<given-names>CM</given-names>
</name>
,
<name>
<surname>Zatorre</surname>
<given-names>RJ</given-names>
</name>
.
<article-title>Right temporal cortex is critical for utilization of melodic contextual cues in a pitch constancy task</article-title>
.
<source>Brain</source>
<year>2004</year>
<month>7</month>
;
<volume>127</volume>
(
<issue>Pt 7</issue>
):
<fpage>1616</fpage>
<lpage>1625</lpage>
.
<pub-id pub-id-type="pmid">15128620</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147986.ref024">
<label>24</label>
<mixed-citation publication-type="journal">
<name>
<surname>Patel</surname>
<given-names>AD</given-names>
</name>
,
<name>
<surname>Gibson</surname>
<given-names>E</given-names>
</name>
,
<name>
<surname>Ratner</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Besson</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Holcomb</surname>
<given-names>PJ</given-names>
</name>
.
<article-title>Processing syntactic relations in language and music: an event-related potential study</article-title>
.
<source>J Cogn Neurosci</source>
<year>1998</year>
<month>11</month>
;
<volume>10</volume>
(
<issue>6</issue>
):
<fpage>717</fpage>
<lpage>733</lpage>
.
<pub-id pub-id-type="pmid">9831740</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147986.ref025">
<label>25</label>
<mixed-citation publication-type="journal">
<name>
<surname>Besson</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Macar</surname>
<given-names>F</given-names>
</name>
.
<article-title>An event-related potential analysis of incongruity in music and other non-linguistic contexts</article-title>
.
<source>Psychophysiology</source>
<year>1987</year>
<month>1</month>
;
<volume>24</volume>
(
<issue>1</issue>
):
<fpage>14</fpage>
<lpage>25</lpage>
.
<pub-id pub-id-type="pmid">3575590</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147986.ref026">
<label>26</label>
<mixed-citation publication-type="journal">
<name>
<surname>Besson</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Faita</surname>
<given-names>F</given-names>
</name>
.
<article-title>An Event-Related Potential (ERP) Study of Musical Expectancy: Comparison of Musicians with Nonmusicians</article-title>
.
<source>Journal of Experimental Psychology: Human Perception and Performance</source>
<year>1995</year>
;
<volume>21</volume>
(
<issue>6</issue>
):
<fpage>1278</fpage>
<lpage>1296</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0147986.ref027">
<label>27</label>
<mixed-citation publication-type="journal">
<name>
<surname>Schon</surname>
<given-names>D</given-names>
</name>
,
<name>
<surname>Magne</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Besson</surname>
<given-names>M</given-names>
</name>
.
<article-title>The music of speech: music training facilitates pitch processing in both music and language</article-title>
.
<source>Psychophysiology</source>
<year>2004</year>
<month>5</month>
;
<volume>41</volume>
(
<issue>3</issue>
):
<fpage>341</fpage>
<lpage>349</lpage>
.
<pub-id pub-id-type="pmid">15102118</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147986.ref028">
<label>28</label>
<mixed-citation publication-type="journal">
<name>
<surname>Bonnel</surname>
<given-names>AM</given-names>
</name>
,
<name>
<surname>Faita</surname>
<given-names>F</given-names>
</name>
,
<name>
<surname>Peretz</surname>
<given-names>I</given-names>
</name>
,
<name>
<surname>Besson</surname>
<given-names>M</given-names>
</name>
.
<article-title>Divided attention between lyrics and tunes of operatic songs: evidence for independent processing</article-title>
.
<source>Percept Psychophys</source>
<year>2001</year>
<month>10</month>
;
<volume>63</volume>
(
<issue>7</issue>
):
<fpage>1201</fpage>
<lpage>1213</lpage>
.
<pub-id pub-id-type="pmid">11766944</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147986.ref029">
<label>29</label>
<mixed-citation publication-type="journal">
<name>
<surname>Besson</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Schon</surname>
<given-names>D</given-names>
</name>
.
<article-title>Comparison between language and music</article-title>
.
<source>Ann N Y Acad Sci</source>
<year>2001</year>
<month>6</month>
;
<volume>930</volume>
:
<fpage>232</fpage>
<lpage>258</lpage>
.
<pub-id pub-id-type="pmid">11458832</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147986.ref030">
<label>30</label>
<mixed-citation publication-type="journal">
<name>
<surname>Kolinsky</surname>
<given-names>R</given-names>
</name>
,
<name>
<surname>Lidji</surname>
<given-names>P</given-names>
</name>
,
<name>
<surname>Peretz</surname>
<given-names>I</given-names>
</name>
,
<name>
<surname>Besson</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Morais</surname>
<given-names>J</given-names>
</name>
.
<article-title>Processing interactions between phonology and melody: vowels sing but consonants speak</article-title>
.
<source>Cognition</source>
<year>2009</year>
<month>7</month>
;
<volume>112</volume>
(
<issue>1</issue>
):
<fpage>1</fpage>
<lpage>20</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.cognition.2009.02.014">10.1016/j.cognition.2009.02.014</ext-link>
</comment>
<pub-id pub-id-type="pmid">19409537</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147986.ref031">
<label>31</label>
<mixed-citation publication-type="journal">
<name>
<surname>Schon</surname>
<given-names>D</given-names>
</name>
,
<name>
<surname>Boyer</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Moreno</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Besson</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Peretz</surname>
<given-names>I</given-names>
</name>
,
<name>
<surname>Kolinsky</surname>
<given-names>R</given-names>
</name>
.
<article-title>Songs as an aid for language acquisition</article-title>
.
<source>Cognition</source>
<year>2008</year>
<month>2</month>
;
<volume>106</volume>
(
<issue>2</issue>
):
<fpage>975</fpage>
<lpage>983</lpage>
.
<pub-id pub-id-type="pmid">17475231</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147986.ref032">
<label>32</label>
<mixed-citation publication-type="journal">
<name>
<surname>Kleber</surname>
<given-names>B</given-names>
</name>
,
<name>
<surname>Veit</surname>
<given-names>R</given-names>
</name>
,
<name>
<surname>Birbaumer</surname>
<given-names>N</given-names>
</name>
,
<name>
<surname>Gruzelier</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Lotze</surname>
<given-names>M</given-names>
</name>
.
<article-title>The brain of opera singers: experience-dependent changes in functional activation</article-title>
.
<source>Cereb Cortex</source>
<year>2010</year>
<month>5</month>
;
<volume>20</volume>
(
<issue>5</issue>
):
<fpage>1144</fpage>
<lpage>1152</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1093/cercor/bhp177">10.1093/cercor/bhp177</ext-link>
</comment>
<pub-id pub-id-type="pmid">19692631</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147986.ref033">
<label>33</label>
<mixed-citation publication-type="journal">
<name>
<surname>Zarate</surname>
<given-names>JM</given-names>
</name>
,
<name>
<surname>Zatorre</surname>
<given-names>RJ</given-names>
</name>
.
<article-title>Experience-dependent neural substrates involved in vocal pitch regulation during singing</article-title>
.
<source>Neuroimage</source>
<year>2008</year>
<month>5</month>
<day>1</day>
;
<volume>40</volume>
(
<issue>4</issue>
):
<fpage>1871</fpage>
<lpage>1887</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.neuroimage.2008.01.026">10.1016/j.neuroimage.2008.01.026</ext-link>
</comment>
<pub-id pub-id-type="pmid">18343163</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147986.ref034">
<label>34</label>
<mixed-citation publication-type="journal">
<name>
<surname>Dick</surname>
<given-names>F</given-names>
</name>
,
<name>
<surname>Lee</surname>
<given-names>HL</given-names>
</name>
,
<name>
<surname>Nusbaum</surname>
<given-names>H</given-names>
</name>
,
<name>
<surname>Price</surname>
<given-names>CJ</given-names>
</name>
.
<article-title>Auditory-Motor Expertise Alters "Speech Selectivity" in Professional Musicians and Actors</article-title>
.
<source>Cereb Cortex</source>
<year>2010</year>
<month>9</month>
<day>9</day>
.</mixed-citation>
</ref>
<ref id="pone.0147986.ref035">
<label>35</label>
<mixed-citation publication-type="journal">
<name>
<surname>Herholz</surname>
<given-names>SC</given-names>
</name>
,
<name>
<surname>Lappe</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Knief</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Pantev</surname>
<given-names>C</given-names>
</name>
.
<article-title>Neural basis of music imagery and the effect of musical expertise</article-title>
.
<source>Eur J Neurosci</source>
<year>2008</year>
<month>12</month>
;
<volume>28</volume>
(
<issue>11</issue>
):
<fpage>2352</fpage>
<lpage>2360</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1111/j.1460-9568.2008.06515.x">10.1111/j.1460-9568.2008.06515.x</ext-link>
</comment>
<pub-id pub-id-type="pmid">19046375</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147986.ref036">
<label>36</label>
<mixed-citation publication-type="journal">
<name>
<surname>Zatorre</surname>
<given-names>RJ</given-names>
</name>
,
<name>
<surname>Halpern</surname>
<given-names>AR</given-names>
</name>
,
<name>
<surname>Bouffard</surname>
<given-names>M</given-names>
</name>
.
<article-title>Mental reversal of imagined melodies: a role for the posterior parietal cortex</article-title>
.
<source>J Cogn Neurosci</source>
<year>2010</year>
<month>4</month>
;
<volume>22</volume>
(
<issue>4</issue>
):
<fpage>775</fpage>
<lpage>789</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1162/jocn.2009.21239">10.1162/jocn.2009.21239</ext-link>
</comment>
<pub-id pub-id-type="pmid">19366283</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147986.ref037">
<label>37</label>
<mixed-citation publication-type="journal">
<name>
<surname>Hubbard</surname>
<given-names>TL</given-names>
</name>
.
<article-title>Auditory imagery: empirical findings</article-title>
.
<source>Psychol Bull</source>
<year>2010</year>
<month>3</month>
;
<volume>136</volume>
(
<issue>2</issue>
):
<fpage>302</fpage>
<lpage>329</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1037/a0018436">10.1037/a0018436</ext-link>
</comment>
<pub-id pub-id-type="pmid">20192565</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147986.ref038">
<label>38</label>
<mixed-citation publication-type="book">
<name>
<surname>Gordon</surname>
<given-names>EE</given-names>
</name>
.
<chapter-title>Learning sequences in music: Skill, content and patterns</chapter-title>
<publisher-loc>Chicago</publisher-loc>
:
<publisher-name>GIA Publications</publisher-name>
<year>1993</year>
.</mixed-citation>
</ref>
<ref id="pone.0147986.ref039">
<label>39</label>
<mixed-citation publication-type="journal">
<name>
<surname>Brodsky</surname>
<given-names>W</given-names>
</name>
,
<name>
<surname>Rubinstein</surname>
<given-names>B</given-names>
</name>
.
<article-title>The Mental Representation of Music Notation: Notational Audiation</article-title>
.
<source>Journal of Experimental Psychology: Human Perception and Performance</source>
<year>2008</year>
;
<volume>34</volume>
(
<issue>2</issue>
):
<fpage>427</fpage>
<lpage>445</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1037/0096-1523.34.2.427">10.1037/0096-1523.34.2.427</ext-link>
</comment>
<pub-id pub-id-type="pmid">18377180</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147986.ref040">
<label>40</label>
<mixed-citation publication-type="journal">
<name>
<surname>Hämäläinen</surname>
<given-names>MS</given-names>
</name>
,
<name>
<surname>Ilmoniemi</surname>
<given-names>RJ</given-names>
</name>
.
<article-title>Interpreting magnetic fields of the brain: Minimum-norm estimates</article-title>
.
<source>Medical &amp; Biological Engineering &amp; Computing</source>
<year>1994</year>
;
<volume>32</volume>
:
<fpage>35</fpage>
<lpage>42</lpage>
.
<pub-id pub-id-type="pmid">8182960</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147986.ref041">
<label>41</label>
<mixed-citation publication-type="journal">
<name>
<surname>Schubert</surname>
<given-names>F</given-names>
</name>
,
<name>
<surname>Mueller</surname>
<given-names>W</given-names>
</name>
.
<article-title>Die schöne Müllerin (the beautiful miller-girl), D 795, op. 25</article-title>
.
<source>C.F.Peters</source>
<year>1823</year>
;
<day>1</day>
;
<volume>6824</volume>
:
<fpage>4</fpage>
<lpage>52</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0147986.ref042">
<label>42</label>
<mixed-citation publication-type="journal">
<name>
<surname>Schubert</surname>
<given-names>F</given-names>
</name>
,
<name>
<surname>Mueller</surname>
<given-names>W</given-names>
</name>
.
<article-title>Winterreise (winter journey), D 911, op. 89</article-title>
.
<source>C.F.Peters</source>
<year>1827</year>
;
<day>1</day>
;
<volume>6824</volume>
:
<fpage>54</fpage>
<lpage>120</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0147986.ref043">
<label>43</label>
<mixed-citation publication-type="journal">
<name>
<surname>Peyk</surname>
<given-names>P</given-names>
</name>
,
<name>
<surname>De Cesarei</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Junghofer</surname>
<given-names>M</given-names>
</name>
.
<article-title>ElectroMagnetoEncephalography Software: Overview and Integration with Other EEG/MEG Toolboxes</article-title>
.
<source>Comput Intell Neurosci</source>
<year>2011</year>
;
<volume>2011</volume>
:
<fpage>861705</fpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1155/2011/861705">10.1155/2011/861705</ext-link>
</comment>
<pub-id pub-id-type="pmid">21577273</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147986.ref044">
<label>44</label>
<mixed-citation publication-type="journal">
<name>
<surname>Brockelmann</surname>
<given-names>AK</given-names>
</name>
,
<name>
<surname>Steinberg</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Elling</surname>
<given-names>L</given-names>
</name>
,
<name>
<surname>Zwanzger</surname>
<given-names>P</given-names>
</name>
,
<name>
<surname>Pantev</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Junghofer</surname>
<given-names>M</given-names>
</name>
.
<article-title>Emotion-associated tones attract enhanced attention at early auditory processing: magnetoencephalographic correlates</article-title>
.
<source>J Neurosci</source>
<year>2011</year>
<month>5</month>
<day>25</day>
;
<volume>31</volume>
(
<issue>21</issue>
):
<fpage>7801</fpage>
<lpage>7810</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1523/JNEUROSCI.6236-10.2011">10.1523/JNEUROSCI.6236-10.2011</ext-link>
</comment>
<pub-id pub-id-type="pmid">21613493</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147986.ref045">
<label>45</label>
<mixed-citation publication-type="journal">
<name>
<surname>Schon</surname>
<given-names>D</given-names>
</name>
,
<name>
<surname>Gordon</surname>
<given-names>RL</given-names>
</name>
,
<name>
<surname>Besson</surname>
<given-names>M</given-names>
</name>
.
<article-title>Musical and linguistic processing in song perception</article-title>
.
<source>Ann N Y Acad Sci</source>
<year>2005</year>
<month>12</month>
;
<volume>1060</volume>
:
<fpage>71</fpage>
<lpage>81</lpage>
.
<pub-id pub-id-type="pmid">16597752</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147986.ref046">
<label>46</label>
<mixed-citation publication-type="journal">
<name>
<surname>Kreitewolf</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Friederici</surname>
<given-names>AD</given-names>
</name>
,
<name>
<surname>von Kriegstein</surname>
<given-names>K</given-names>
</name>
.
<article-title>Hemispheric lateralization of linguistic prosody recognition in comparison to speech and speaker recognition</article-title>
.
<source>Neuroimage</source>
<year>2014</year>
<month>11</month>
<day>15</day>
;
<volume>102</volume>
<issue>Pt 2</issue>
:
<fpage>332</fpage>
<lpage>344</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.neuroimage.2014.07.038">10.1016/j.neuroimage.2014.07.038</ext-link>
</comment>
<pub-id pub-id-type="pmid">25087482</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147986.ref047">
<label>47</label>
<mixed-citation publication-type="journal">
<name>
<surname>Samson</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Zatorre</surname>
<given-names>RJ</given-names>
</name>
.
<article-title>Contribution of the right temporal lobe to musical timbre discrimination</article-title>
.
<source>Neuropsychologia</source>
<year>1994</year>
<month>2</month>
;
<volume>32</volume>
(
<issue>2</issue>
):
<fpage>231</fpage>
<lpage>240</lpage>
.
<pub-id pub-id-type="pmid">8190246</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147986.ref048">
<label>48</label>
<mixed-citation publication-type="journal">
<name>
<surname>Zatorre</surname>
<given-names>RJ</given-names>
</name>
,
<name>
<surname>Belin</surname>
<given-names>P</given-names>
</name>
.
<article-title>Spectral and temporal processing in human auditory cortex</article-title>
.
<source>Cereb Cortex</source>
<year>2001</year>
<month>10</month>
;
<volume>11</volume>
(
<issue>10</issue>
):
<fpage>946</fpage>
<lpage>953</lpage>
.
<pub-id pub-id-type="pmid">11549617</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147986.ref049">
<label>49</label>
<mixed-citation publication-type="journal">
<name>
<surname>Zatorre</surname>
<given-names>RJ</given-names>
</name>
,
<name>
<surname>Belin</surname>
<given-names>P</given-names>
</name>
,
<name>
<surname>Penhune</surname>
<given-names>VB</given-names>
</name>
.
<article-title>Structure and function of auditory cortex: music and speech</article-title>
.
<source>Trends Cogn Sci</source>
<year>2002</year>
<month>1</month>
<day>1</day>
;
<volume>6</volume>
(
<issue>1</issue>
):
<fpage>37</fpage>
<lpage>46</lpage>
.
<pub-id pub-id-type="pmid">11849614</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147986.ref050">
<label>50</label>
<mixed-citation publication-type="journal">
<name>
<surname>Hagoort</surname>
<given-names>P</given-names>
</name>
,
<name>
<surname>Brown</surname>
<given-names>CM</given-names>
</name>
.
<article-title>ERP effects of listening to speech: semantic ERP effects</article-title>
.
<source>Neuropsychologia</source>
<year>2000</year>
;
<volume>38</volume>
(
<issue>11</issue>
):
<fpage>1518</fpage>
<lpage>1530</lpage>
.
<pub-id pub-id-type="pmid">10906377</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147986.ref051">
<label>51</label>
<mixed-citation publication-type="journal">
<name>
<surname>Keuper</surname>
<given-names>K</given-names>
</name>
,
<name>
<surname>Zwanzger</surname>
<given-names>P</given-names>
</name>
,
<name>
<surname>Nordt</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Eden</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Laeger</surname>
<given-names>I</given-names>
</name>
,
<name>
<surname>Zwitserlood</surname>
<given-names>P</given-names>
</name>
,
<etal>et al</etal>
<article-title>How 'love' and 'hate' differ from 'sleep': Using combined electro/magnetoencephalographic data to reveal the sources of early cortical responses to emotional words</article-title>
.
<source>Hum Brain Mapp</source>
<year>2012</year>
<month>12</month>
<day>26</day>
.</mixed-citation>
</ref>
<ref id="pone.0147986.ref052">
<label>52</label>
<mixed-citation publication-type="journal">
<name>
<surname>Sammler</surname>
<given-names>D</given-names>
</name>
,
<name>
<surname>Koelsch</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Ball</surname>
<given-names>T</given-names>
</name>
,
<name>
<surname>Brandt</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Elger</surname>
<given-names>CE</given-names>
</name>
,
<name>
<surname>Friederici</surname>
<given-names>AD</given-names>
</name>
,
<etal>et al</etal>
<article-title>Overlap of musical and linguistic syntax processing: intracranial ERP evidence</article-title>
.
<source>Ann N Y Acad Sci</source>
<year>2009</year>
<month>7</month>
;
<volume>1169</volume>
:
<fpage>494</fpage>
<lpage>498</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1111/j.1749-6632.2009.04792.x">10.1111/j.1749-6632.2009.04792.x</ext-link>
</comment>
<pub-id pub-id-type="pmid">19673829</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147986.ref053">
<label>53</label>
<mixed-citation publication-type="journal">
<name>
<surname>Warren</surname>
<given-names>JD</given-names>
</name>
,
<name>
<surname>Scott</surname>
<given-names>SK</given-names>
</name>
,
<name>
<surname>Price</surname>
<given-names>CJ</given-names>
</name>
,
<name>
<surname>Griffiths</surname>
<given-names>TD</given-names>
</name>
.
<article-title>Human brain mechanisms for the early analysis of voices</article-title>
.
<source>Neuroimage</source>
<year>2006</year>
<month>7</month>
<day>1</day>
;
<volume>31</volume>
(
<issue>3</issue>
):
<fpage>1389</fpage>
<lpage>1397</lpage>
.
<pub-id pub-id-type="pmid">16540351</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147986.ref054">
<label>54</label>
<mixed-citation publication-type="journal">
<name>
<surname>Sammler</surname>
<given-names>D</given-names>
</name>
,
<name>
<surname>Koelsch</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Ball</surname>
<given-names>T</given-names>
</name>
,
<name>
<surname>Brandt</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Grigutsch</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Huppertz</surname>
<given-names>HJ</given-names>
</name>
,
<etal>et al</etal>
<article-title>Co-localizing linguistic and musical syntax with intracranial EEG</article-title>
.
<source>Neuroimage</source>
<year>2013</year>
<month>1</month>
<day>1</day>
;
<volume>64</volume>
:
<fpage>134</fpage>
<lpage>146</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.neuroimage.2012.09.035">10.1016/j.neuroimage.2012.09.035</ext-link>
</comment>
<pub-id pub-id-type="pmid">23000255</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147986.ref055">
<label>55</label>
<mixed-citation publication-type="journal">
<name>
<surname>Herholz</surname>
<given-names>SC</given-names>
</name>
,
<name>
<surname>Halpern</surname>
<given-names>AR</given-names>
</name>
,
<name>
<surname>Zatorre</surname>
<given-names>RJ</given-names>
</name>
.
<article-title>Neuronal correlates of perception, imagery, and memory for familiar tunes</article-title>
.
<source>J Cogn Neurosci</source>
<year>2012</year>
<month>6</month>
;
<volume>24</volume>
(
<issue>6</issue>
):
<fpage>1382</fpage>
<lpage>1397</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1162/jocn_a_00216">10.1162/jocn_a_00216</ext-link>
</comment>
<pub-id pub-id-type="pmid">22360595</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147986.ref056">
<label>56</label>
<mixed-citation publication-type="journal">
<name>
<surname>Schulze</surname>
<given-names>K</given-names>
</name>
,
<name>
<surname>Zysset</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Mueller</surname>
<given-names>K</given-names>
</name>
,
<name>
<surname>Friederici</surname>
<given-names>AD</given-names>
</name>
,
<name>
<surname>Koelsch</surname>
<given-names>S</given-names>
</name>
.
<article-title>Neuroarchitecture of verbal and tonal working memory in nonmusicians and musicians</article-title>
.
<source>Hum Brain Mapp</source>
<year>2010</year>
<month>6</month>
<day>9</day>
.</mixed-citation>
</ref>
<ref id="pone.0147986.ref057">
<label>57</label>
<mixed-citation publication-type="journal">
<name>
<surname>Gaab</surname>
<given-names>N</given-names>
</name>
,
<name>
<surname>Schlaug</surname>
<given-names>G</given-names>
</name>
.
<article-title>Musicians differ from nonmusicians in brain activation despite performance matching</article-title>
.
<source>Ann N Y Acad Sci</source>
<year>2003</year>
<month>11</month>
;
<volume>999</volume>
:
<fpage>385</fpage>
<lpage>388</lpage>
.
<pub-id pub-id-type="pmid">14681161</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147986.ref058">
<label>58</label>
<mixed-citation publication-type="journal">
<name>
<surname>Jancke</surname>
<given-names>L</given-names>
</name>
,
<name>
<surname>Kleinschmidt</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Mirzazade</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Shah</surname>
<given-names>NJ</given-names>
</name>
,
<name>
<surname>Freund</surname>
<given-names>HJ</given-names>
</name>
.
<article-title>The role of the inferior parietal cortex in linking the tactile perception and manual construction of object shapes</article-title>
.
<source>Cereb Cortex</source>
<year>2001</year>
<month>2</month>
;
<volume>11</volume>
(
<issue>2</issue>
):
<fpage>114</fpage>
<lpage>121</lpage>
.
<pub-id pub-id-type="pmid">11208666</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147986.ref059">
<label>59</label>
<mixed-citation publication-type="journal">
<name>
<surname>Guenther</surname>
<given-names>FH</given-names>
</name>
.
<article-title>Cortical interactions underlying the production of speech sounds</article-title>
.
<source>J Commun Disord</source>
<year>2006</year>
<season>Sep-Oct</season>
;
<volume>39</volume>
(
<issue>5</issue>
):
<fpage>350</fpage>
<lpage>365</lpage>
.
<pub-id pub-id-type="pmid">16887139</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147986.ref060">
<label>60</label>
<mixed-citation publication-type="journal">
<name>
<surname>Kleber</surname>
<given-names>B</given-names>
</name>
,
<name>
<surname>Birbaumer</surname>
<given-names>N</given-names>
</name>
,
<name>
<surname>Veit</surname>
<given-names>R</given-names>
</name>
,
<name>
<surname>Trevorrow</surname>
<given-names>T</given-names>
</name>
,
<name>
<surname>Lotze</surname>
<given-names>M</given-names>
</name>
.
<article-title>Overt and imagined singing of an Italian aria</article-title>
.
<source>Neuroimage</source>
<year>2007</year>
<month>7</month>
<day>1</day>
;
<volume>36</volume>
(
<issue>3</issue>
):
<fpage>889</fpage>
<lpage>900</lpage>
.
<pub-id pub-id-type="pmid">17478107</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147986.ref061">
<label>61</label>
<mixed-citation publication-type="journal">
<name>
<surname>Gordon</surname>
<given-names>RL</given-names>
</name>
,
<name>
<surname>Schon</surname>
<given-names>D</given-names>
</name>
,
<name>
<surname>Magne</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Astesano</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Besson</surname>
<given-names>M</given-names>
</name>
.
<article-title>Words and melody are intertwined in perception of sung words: EEG and behavioral evidence</article-title>
.
<source>PLOS One</source>
<year>2010</year>
<month>3</month>
<day>31</day>
;
<volume>5</volume>
(
<issue>3</issue>
):
<fpage>e9889</fpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1371/journal.pone.0009889">10.1371/journal.pone.0009889</ext-link>
</comment>
<pub-id pub-id-type="pmid">20360991</pub-id>
</mixed-citation>
</ref>
</ref-list>
</back>
</pmc>
<affiliations>
<list>
<country>
<li>Allemagne</li>
</country>
<region>
<li>District de Cologne</li>
<li>Rhénanie-du-Nord-Westphalie</li>
</region>
<settlement>
<li>Bonn</li>
</settlement>
</list>
<tree>
<country name="Allemagne">
<noRegion>
<name sortKey="Rosslau, Ken" sort="Rosslau, Ken" uniqKey="Rosslau K" first="Ken" last="Rosslau">Ken Rosslau</name>
</noRegion>
<name sortKey="Deuster, Dirk" sort="Deuster, Dirk" uniqKey="Deuster D" first="Dirk" last="Deuster">Dirk Deuster</name>
<name sortKey="Dobel, Christian" sort="Dobel, Christian" uniqKey="Dobel C" first="Christian" last="Dobel">Christian Dobel</name>
<name sortKey="Dobel, Christian" sort="Dobel, Christian" uniqKey="Dobel C" first="Christian" last="Dobel">Christian Dobel</name>
<name sortKey="Herholz, Sibylle C" sort="Herholz, Sibylle C" uniqKey="Herholz S" first="Sibylle C." last="Herholz">Sibylle C. Herholz</name>
<name sortKey="Herholz, Sibylle C" sort="Herholz, Sibylle C" uniqKey="Herholz S" first="Sibylle C." last="Herholz">Sibylle C. Herholz</name>
<name sortKey="Knief, Arne" sort="Knief, Arne" uniqKey="Knief A" first="Arne" last="Knief">Arne Knief</name>
<name sortKey="Ortmann, Magdalene" sort="Ortmann, Magdalene" uniqKey="Ortmann M" first="Magdalene" last="Ortmann">Magdalene Ortmann</name>
<name sortKey="Ortmann, Magdalene" sort="Ortmann, Magdalene" uniqKey="Ortmann M" first="Magdalene" last="Ortmann">Magdalene Ortmann</name>
<name sortKey="Pantev, Christo" sort="Pantev, Christo" uniqKey="Pantev C" first="Christo" last="Pantev">Christo Pantev</name>
<name sortKey="Schmidt, Claus Michael" sort="Schmidt, Claus Michael" uniqKey="Schmidt C" first="Claus-Michael" last="Schmidt">Claus-Michael Schmidt</name>
<name sortKey="Zehnhoff Dinnesen, Antoinetteam" sort="Zehnhoff Dinnesen, Antoinetteam" uniqKey="Zehnhoff Dinnesen A" first="Antoinetteam" last="Zehnhoff-Dinnesen">Antoinetteam Zehnhoff-Dinnesen</name>
</country>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Wicri/Musique/explor/OperaV1/Data/Pmc/Checkpoint
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000003 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd -nk 000003 | SxmlIndent | more

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Wicri/Musique
   |area=    OperaV1
   |flux=    Pmc
   |étape=   Checkpoint
   |type=    RBID
   |clé=     PMC:4749173
   |texte=   Song Perception by Professional Singers and Actors: An MEG Study
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/RBID.i   -Sk "pubmed:26863437" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd   \
       | NlmPubMed2Wicri -a OperaV1 

Wicri

This area was generated with Dilib version V0.6.21.
Data generation: Thu Apr 14 14:59:05 2016. Site generation: Thu Jan 4 23:09:23 2024