Opera exploration server

Please note: this site is under development.
Please note: this site is generated automatically from raw corpora.
The information has therefore not been validated.

Words and Melody Are Intertwined in Perception of Sung Words: EEG and Behavioral Evidence

Internal identifier: 000688 (Ncbi/Merge); previous: 000687; next: 000689


Authors: Reyna L. Gordon [United States, France]; Daniele Schön [France]; Cyrille Magne [United States]; Corine Astésano [France]; Mireille Besson [France]

Source:

RBID: PMC:2847603

Abstract

Language and music, two of the most unique human cognitive abilities, are combined in song, rendering it an ecological model for comparing speech and music cognition. The present study was designed to determine whether words and melodies in song are processed interactively or independently, and to examine the influence of attention on the processing of words and melodies in song. Event-Related brain Potentials (ERPs) and behavioral data were recorded while non-musicians listened to pairs of sung words (prime and target) presented in four experimental conditions: same word, same melody; same word, different melody; different word, same melody; different word, different melody. Participants were asked to attend to either the words or the melody, and to perform a same/different task. In both attentional tasks, different word targets elicited an N400 component, as predicted based on previous results. Most interestingly, different melodies (sung with the same word) elicited an N400 component followed by a late positive component. Finally, ERP and behavioral data converged in showing interactions between the linguistic and melodic dimensions of sung words. The finding that the N400 effect, a well-established marker of semantic processing, was modulated by musical melody in song suggests that variations in musical features affect word processing in sung language. Implications of the interactions between words and melody are discussed in light of evidence for shared neural processing resources between the phonological/semantic aspects of language and the melodic/harmonic aspects of music.


URL:
DOI: 10.1371/journal.pone.0009889
PubMed: 20360991
PubMed Central: 2847603
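
These identifiers resolve to public pages through the usual resolver patterns. As a small illustration (plain Python, with the identifiers copied from the record above; the doi.org and PubMed resolver patterns are assumed from common usage rather than taken from this page):

# Build canonical links from the record's identifiers (DOI, PubMed, PubMed Central).
ids = {"doi": "10.1371/journal.pone.0009889", "pmid": "20360991", "pmcid": "2847603"}
urls = {
    "doi": f"https://doi.org/{ids['doi']}",
    "pubmed": f"https://pubmed.ncbi.nlm.nih.gov/{ids['pmid']}/",
    "pmc": f"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC{ids['pmcid']}/",
}
for label, url in urls.items():
    print(f"{label}: {url}")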


The document in XML format
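
To work with this record programmatically, a minimal parsing sketch in Python is given first (the file name record.xml is an assumption, and the sketch presumes that the saved record carries the namespace declarations for the nlm: and wicri: prefixes, which are omitted in the excerpt below); it pulls out the article title, the DOI, and the author names:

# Extract the title, DOI, and author names from a saved copy of the record below.
import xml.etree.ElementTree as ET

def local_name(tag):
    # Compare on local names so the sketch works with or without namespace prefixes.
    return tag.rsplit("}", 1)[-1] if isinstance(tag, str) else ""

root = ET.parse("record.xml").getroot()
elements = list(root.iter())

title = next(e.text for e in elements if local_name(e.tag) == "title")
doi = next(e.text for e in elements
           if local_name(e.tag) == "idno" and e.get("type") == "doi")
authors = sorted({f"{e.get('first')} {e.get('last')}"
                  for e in elements if local_name(e.tag) == "name" and e.get("last")})

print(title)
print(doi)
print(", ".join(authors))

Matching on local names rather than fully qualified tags keeps the sketch independent of whichever namespace URIs a given export declares.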

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Words and Melody Are Intertwined in Perception of Sung Words: EEG and Behavioral Evidence</title>
<author>
<name sortKey="Gordon, Reyna L" sort="Gordon, Reyna L" uniqKey="Gordon R" first="Reyna L." last="Gordon">Reyna L. Gordon</name>
<affiliation wicri:level="2">
<nlm:aff id="aff1">
<addr-line>Center for Complex Systems and Brain Sciences, Florida Atlantic University, Boca Raton, Florida, United States of America</addr-line>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Center for Complex Systems and Brain Sciences, Florida Atlantic University, Boca Raton, Florida</wicri:regionArea>
<placeName>
<region type="state">Floride</region>
</placeName>
</affiliation>
<affiliation wicri:level="4">
<nlm:aff id="aff2">
<addr-line>Institut de Neurosciences Cognitives de la Méditerranée, CNRS and Université de la Méditerranée, Marseille, France</addr-line>
</nlm:aff>
<country xml:lang="fr">France</country>
<wicri:regionArea>Institut de Neurosciences Cognitives de la Méditerranée, CNRS and Université de la Méditerranée, Marseille</wicri:regionArea>
<placeName>
<settlement type="city">Marseille</settlement>
</placeName>
<orgName type="university">Université de la Méditerranée</orgName>
<placeName>
<settlement type="city">Marseille</settlement>
<region type="region" nuts="2">Provence-Alpes-Côte d'Azur</region>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Schon, Daniele" sort="Schon, Daniele" uniqKey="Schon D" first="Daniele" last="Schön">Daniele Schön</name>
<affiliation wicri:level="4">
<nlm:aff id="aff2">
<addr-line>Institut de Neurosciences Cognitives de la Méditerranée, CNRS and Université de la Méditerranée, Marseille, France</addr-line>
</nlm:aff>
<country xml:lang="fr">France</country>
<wicri:regionArea>Institut de Neurosciences Cognitives de la Méditerranée, CNRS and Université de la Méditerranée, Marseille</wicri:regionArea>
<placeName>
<settlement type="city">Marseille</settlement>
</placeName>
<orgName type="university">Université de la Méditerranée</orgName>
<placeName>
<settlement type="city">Marseille</settlement>
<region type="region" nuts="2">Provence-Alpes-Côte d'Azur</region>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Magne, Cyrille" sort="Magne, Cyrille" uniqKey="Magne C" first="Cyrille" last="Magne">Cyrille Magne</name>
<affiliation wicri:level="2">
<nlm:aff id="aff3">
<addr-line>Department of Psychology, Middle Tennessee State University, Murfreesboro, Tennessee, United States of America</addr-line>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Department of Psychology, Middle Tennessee State University, Murfreesboro, Tennessee</wicri:regionArea>
<placeName>
<region type="state">Tennessee</region>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Astesano, Corine" sort="Astesano, Corine" uniqKey="Astesano C" first="Corine" last="Astésano">Corine Astésano</name>
<affiliation wicri:level="1">
<nlm:aff id="aff4">
<addr-line>U.R.I. Octogone-Lordat 4156, Université de Toulouse II, Toulouse, France</addr-line>
</nlm:aff>
<country xml:lang="fr">France</country>
<wicri:regionArea>U.R.I. Octogone-Lordat 4156, Université de Toulouse II, Toulouse</wicri:regionArea>
<placeName>
<settlement type="city">Toulouse</settlement>
</placeName>
</affiliation>
<affiliation wicri:level="4">
<nlm:aff id="aff2">
<addr-line>Institut de Neurosciences Cognitives de la Méditerranée, CNRS and Université de la Méditerranée, Marseille, France</addr-line>
</nlm:aff>
<country xml:lang="fr">France</country>
<wicri:regionArea>Institut de Neurosciences Cognitives de la Méditerranée, CNRS and Université de la Méditerranée, Marseille</wicri:regionArea>
<placeName>
<settlement type="city">Marseille</settlement>
</placeName>
<orgName type="university">Université de la Méditerranée</orgName>
<placeName>
<settlement type="city">Marseille</settlement>
<region type="region" nuts="2">Provence-Alpes-Côte d'Azur</region>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Besson, Mireille" sort="Besson, Mireille" uniqKey="Besson M" first="Mireille" last="Besson">Mireille Besson</name>
<affiliation wicri:level="4">
<nlm:aff id="aff2">
<addr-line>Institut de Neurosciences Cognitives de la Méditerranée, CNRS and Université de la Méditerranée, Marseille, France</addr-line>
</nlm:aff>
<country xml:lang="fr">France</country>
<wicri:regionArea>Institut de Neurosciences Cognitives de la Méditerranée, CNRS and Université de la Méditerranée, Marseille</wicri:regionArea>
<placeName>
<settlement type="city">Marseille</settlement>
</placeName>
<orgName type="university">Université de la Méditerranée</orgName>
<placeName>
<settlement type="city">Marseille</settlement>
<region type="region" nuts="2">Provence-Alpes-Côte d'Azur</region>
</placeName>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">20360991</idno>
<idno type="pmc">2847603</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2847603</idno>
<idno type="RBID">PMC:2847603</idno>
<idno type="doi">10.1371/journal.pone.0009889</idno>
<date when="2010">2010</date>
<idno type="wicri:Area/Pmc/Corpus">000E70</idno>
<idno type="wicri:Area/Pmc/Curation">000E70</idno>
<idno type="wicri:Area/Pmc/Checkpoint">000300</idno>
<idno type="wicri:Area/Ncbi/Merge">000688</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Words and Melody Are Intertwined in Perception of Sung Words: EEG and Behavioral Evidence</title>
<author>
<name sortKey="Gordon, Reyna L" sort="Gordon, Reyna L" uniqKey="Gordon R" first="Reyna L." last="Gordon">Reyna L. Gordon</name>
<affiliation wicri:level="2">
<nlm:aff id="aff1">
<addr-line>Center for Complex Systems and Brain Sciences, Florida Atlantic University, Boca Raton, Florida, United States of America</addr-line>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Center for Complex Systems and Brain Sciences, Florida Atlantic University, Boca Raton, Florida</wicri:regionArea>
<placeName>
<region type="state">Floride</region>
</placeName>
</affiliation>
<affiliation wicri:level="4">
<nlm:aff id="aff2">
<addr-line>Institut de Neurosciences Cognitives de la Méditerranée, CNRS and Université de la Méditerranée, Marseille, France</addr-line>
</nlm:aff>
<country xml:lang="fr">France</country>
<wicri:regionArea>Institut de Neurosciences Cognitives de la Méditerranée, CNRS and Université de la Méditerranée, Marseille</wicri:regionArea>
<placeName>
<settlement type="city">Marseille</settlement>
</placeName>
<orgName type="university">Université de la Méditerranée</orgName>
<placeName>
<settlement type="city">Marseille</settlement>
<region type="region" nuts="2">Provence-Alpes-Côte d'Azur</region>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Schon, Daniele" sort="Schon, Daniele" uniqKey="Schon D" first="Daniele" last="Schön">Daniele Schön</name>
<affiliation wicri:level="4">
<nlm:aff id="aff2">
<addr-line>Institut de Neurosciences Cognitives de la Méditerranée, CNRS and Université de la Méditerranée, Marseille, France</addr-line>
</nlm:aff>
<country xml:lang="fr">France</country>
<wicri:regionArea>Institut de Neurosciences Cognitives de la Méditerranée, CNRS and Université de la Méditerranée, Marseille</wicri:regionArea>
<placeName>
<settlement type="city">Marseille</settlement>
</placeName>
<orgName type="university">Université de la Méditerranée</orgName>
<placeName>
<settlement type="city">Marseille</settlement>
<region type="region" nuts="2">Provence-Alpes-Côte d'Azur</region>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Magne, Cyrille" sort="Magne, Cyrille" uniqKey="Magne C" first="Cyrille" last="Magne">Cyrille Magne</name>
<affiliation wicri:level="2">
<nlm:aff id="aff3">
<addr-line>Department of Psychology, Middle Tennessee State University, Murfreesboro, Tennessee, United States of America</addr-line>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Department of Psychology, Middle Tennessee State University, Murfreesboro, Tennessee</wicri:regionArea>
<placeName>
<region type="state">Tennessee</region>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Astesano, Corine" sort="Astesano, Corine" uniqKey="Astesano C" first="Corine" last="Astésano">Corine Astésano</name>
<affiliation wicri:level="1">
<nlm:aff id="aff4">
<addr-line>U.R.I. Octogone-Lordat 4156, Université de Toulouse II, Toulouse, France</addr-line>
</nlm:aff>
<country xml:lang="fr">France</country>
<wicri:regionArea>U.R.I. Octogone-Lordat 4156, Université de Toulouse II, Toulouse</wicri:regionArea>
<placeName>
<settlement type="city">Toulouse</settlement>
</placeName>
</affiliation>
<affiliation wicri:level="4">
<nlm:aff id="aff2">
<addr-line>Institut de Neurosciences Cognitives de la Méditerranée, CNRS and Université de la Méditerranée, Marseille, France</addr-line>
</nlm:aff>
<country xml:lang="fr">France</country>
<wicri:regionArea>Institut de Neurosciences Cognitives de la Méditerranée, CNRS and Université de la Méditerranée, Marseille</wicri:regionArea>
<placeName>
<settlement type="city">Marseille</settlement>
</placeName>
<orgName type="university">Université de la Méditerranée</orgName>
<placeName>
<settlement type="city">Marseille</settlement>
<region type="region" nuts="2">Provence-Alpes-Côte d'Azur</region>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Besson, Mireille" sort="Besson, Mireille" uniqKey="Besson M" first="Mireille" last="Besson">Mireille Besson</name>
<affiliation wicri:level="4">
<nlm:aff id="aff2">
<addr-line>Institut de Neurosciences Cognitives de la Méditerranée, CNRS and Université de la Méditerranée, Marseille, France</addr-line>
</nlm:aff>
<country xml:lang="fr">France</country>
<wicri:regionArea>Institut de Neurosciences Cognitives de la Méditerranée, CNRS and Université de la Méditerranée, Marseille</wicri:regionArea>
<placeName>
<settlement type="city">Marseille</settlement>
</placeName>
<orgName type="university">Université de la Méditerranée</orgName>
<placeName>
<settlement type="city">Marseille</settlement>
<region type="region" nuts="2">Provence-Alpes-Côte d'Azur</region>
</placeName>
</affiliation>
</author>
</analytic>
<series>
<title level="j">PLoS ONE</title>
<idno type="e-ISSN">1932-6203</idno>
<imprint>
<date when="2010">2010</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>Language and music, two of the most unique human cognitive abilities, are combined in song, rendering it an ecological model for comparing speech and music cognition. The present study was designed to determine whether words and melodies in song are processed interactively or independently, and to examine the influence of attention on the processing of words and melodies in song. Event-Related brain Potentials (ERPs) and behavioral data were recorded while non-musicians listened to pairs of sung words (prime and target) presented in four experimental conditions: same word, same melody; same word, different melody; different word, same melody; different word, different melody. Participants were asked to attend to either the words or the melody, and to perform a same/different task. In both attentional tasks, different word targets elicited an N400 component, as predicted based on previous results. Most interestingly, different melodies (sung with the same word) elicited an N400 component followed by a late positive component. Finally, ERP and behavioral data converged in showing interactions between the linguistic and melodic dimensions of sung words. The finding that the N400 effect, a well-established marker of semantic processing, was modulated by musical melody in song suggests that variations in musical features affect word processing in sung language. Implications of the interactions between words and melody are discussed in light of evidence for shared neural processing resources between the phonological/semantic aspects of language and the melodic/harmonic aspects of music.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Besson, M" uniqKey="Besson M">M Besson</name>
</author>
<author>
<name sortKey="Schon, D" uniqKey="Schon D">D Schön</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Patel, Ad" uniqKey="Patel A">AD Patel</name>
</author>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I Peretz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I Peretz</name>
</author>
<author>
<name sortKey="Coltheart, M" uniqKey="Coltheart M">M Coltheart</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Koelsch, S" uniqKey="Koelsch S">S Koelsch</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Patel, Ad" uniqKey="Patel A">AD Patel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I Peretz</name>
</author>
<author>
<name sortKey="Kolinsky, R" uniqKey="Kolinsky R">R Kolinsky</name>
</author>
<author>
<name sortKey="Tramo, M" uniqKey="Tramo M">M Tramo</name>
</author>
<author>
<name sortKey="Labrecque, R" uniqKey="Labrecque R">R Labrecque</name>
</author>
<author>
<name sortKey="Hublet, C" uniqKey="Hublet C">C Hublet</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hebert, S" uniqKey="Hebert S">S Hébert</name>
</author>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I Peretz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Racette, A" uniqKey="Racette A">A Racette</name>
</author>
<author>
<name sortKey="Bard, C" uniqKey="Bard C">C Bard</name>
</author>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I Peretz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schmithorst, Vj" uniqKey="Schmithorst V">VJ Schmithorst</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Patel, Ad" uniqKey="Patel A">AD Patel</name>
</author>
<author>
<name sortKey="Gibson, E" uniqKey="Gibson E">E Gibson</name>
</author>
<author>
<name sortKey="Ratner, J" uniqKey="Ratner J">J Ratner</name>
</author>
<author>
<name sortKey="Besson, M" uniqKey="Besson M">M Besson</name>
</author>
<author>
<name sortKey="Holcomb, Pj" uniqKey="Holcomb P">PJ Holcomb</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Maess, B" uniqKey="Maess B">B Maess</name>
</author>
<author>
<name sortKey="Koelsch, S" uniqKey="Koelsch S">S Koelsch</name>
</author>
<author>
<name sortKey="Gunter, Tc" uniqKey="Gunter T">TC Gunter</name>
</author>
<author>
<name sortKey="Friederici, Ad" uniqKey="Friederici A">AD Friederici</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Levitin, Dj" uniqKey="Levitin D">DJ Levitin</name>
</author>
<author>
<name sortKey="Menon, V" uniqKey="Menon V">V Menon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Koelsch, S" uniqKey="Koelsch S">S Koelsch</name>
</author>
<author>
<name sortKey="Friederici, Ad" uniqKey="Friederici A">AD Friederici</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gelfand, Jr" uniqKey="Gelfand J">JR Gelfand</name>
</author>
<author>
<name sortKey="Bookheimer, Sy" uniqKey="Bookheimer S">SY Bookheimer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hickok, G" uniqKey="Hickok G">G Hickok</name>
</author>
<author>
<name sortKey="Buchsbaum, B" uniqKey="Buchsbaum B">B Buchsbaum</name>
</author>
<author>
<name sortKey="Humphries, C" uniqKey="Humphries C">C Humphries</name>
</author>
<author>
<name sortKey="Muftuler, T" uniqKey="Muftuler T">T Muftuler</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Koelsch, S" uniqKey="Koelsch S">S Koelsch</name>
</author>
<author>
<name sortKey="Kasper, E" uniqKey="Kasper E">E Kasper</name>
</author>
<author>
<name sortKey="Sammler, D" uniqKey="Sammler D">D Sammler</name>
</author>
<author>
<name sortKey="Schulze, K" uniqKey="Schulze K">K Schulze</name>
</author>
<author>
<name sortKey="Gunter, T" uniqKey="Gunter T">T Gunter</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Steinbeis, N" uniqKey="Steinbeis N">N Steinbeis</name>
</author>
<author>
<name sortKey="Koelsch, S" uniqKey="Koelsch S">S Koelsch</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Steinbeis, N" uniqKey="Steinbeis N">N Steinbeis</name>
</author>
<author>
<name sortKey="Koelsch, S" uniqKey="Koelsch S">S Koelsch</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Frey, A" uniqKey="Frey A">A Frey</name>
</author>
<author>
<name sortKey="Marie, C" uniqKey="Marie C">C Marie</name>
</author>
<author>
<name sortKey="Prod Homme, L" uniqKey="Prod Homme L">L Prod'Homme</name>
</author>
<author>
<name sortKey="Timsit Berthier, M" uniqKey="Timsit Berthier M">M Timsit-Berthier</name>
</author>
<author>
<name sortKey="Schon, D" uniqKey="Schon D">D Schön</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schon, D" uniqKey="Schon D">D Schön</name>
</author>
<author>
<name sortKey="Ystad, S" uniqKey="Ystad S">S Ystad</name>
</author>
<author>
<name sortKey="Kronland Martinet, R" uniqKey="Kronland Martinet R">R Kronland-Martinet</name>
</author>
<author>
<name sortKey="Besson, M" uniqKey="Besson M">M Besson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Daltrozzo, J" uniqKey="Daltrozzo J">J Daltrozzo</name>
</author>
<author>
<name sortKey="Schon, D" uniqKey="Schon D">D Schön</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gordon, Rl" uniqKey="Gordon R">RL Gordon</name>
</author>
<author>
<name sortKey="Racette, A" uniqKey="Racette A">A Racette</name>
</author>
<author>
<name sortKey="Schon, D" uniqKey="Schon D">D Schön</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brown, S" uniqKey="Brown S">S Brown</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mithen, Sj" uniqKey="Mithen S">SJ Mithen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Patel, A" uniqKey="Patel A">A Patel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nakata, T" uniqKey="Nakata T">T Nakata</name>
</author>
<author>
<name sortKey="Trehub, Se" uniqKey="Trehub S">SE Trehub</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="De L Etoile, Sk" uniqKey="De L Etoile S">SK de l'Etoile</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bartholomeus, B" uniqKey="Bartholomeus B">B Bartholomeus</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Goodglass, H" uniqKey="Goodglass H">H Goodglass</name>
</author>
<author>
<name sortKey="Calderon, M" uniqKey="Calderon M">M Calderon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Serafine, Ml" uniqKey="Serafine M">ML Serafine</name>
</author>
<author>
<name sortKey="Crowder, Rg" uniqKey="Crowder R">RG Crowder</name>
</author>
<author>
<name sortKey="Repp, Bh" uniqKey="Repp B">BH Repp</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I Peretz</name>
</author>
<author>
<name sortKey="Radeau, M" uniqKey="Radeau M">M Radeau</name>
</author>
<author>
<name sortKey="Arguin, M" uniqKey="Arguin M">M Arguin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wallace, Wt" uniqKey="Wallace W">WT Wallace</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rainey, Dw" uniqKey="Rainey D">DW Rainey</name>
</author>
<author>
<name sortKey="Larsen, Jd" uniqKey="Larsen J">JD Larsen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kilgour, Ar" uniqKey="Kilgour A">AR Kilgour</name>
</author>
<author>
<name sortKey="Jakobson, Ls" uniqKey="Jakobson L">LS Jakobson</name>
</author>
<author>
<name sortKey="Cuddy, Ll" uniqKey="Cuddy L">LL Cuddy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schon, D" uniqKey="Schon D">D Schön</name>
</author>
<author>
<name sortKey="Boyer, M" uniqKey="Boyer M">M Boyer</name>
</author>
<author>
<name sortKey="Moreno, S" uniqKey="Moreno S">S Moreno</name>
</author>
<author>
<name sortKey="Besson, M" uniqKey="Besson M">M Besson</name>
</author>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I Peretz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Thiessen, Ed" uniqKey="Thiessen E">ED Thiessen</name>
</author>
<author>
<name sortKey="Saffran, Jr" uniqKey="Saffran J">JR Saffran</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kone Ni, Vj" uniqKey="Kone Ni V">VJ Konečni</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stratton, Vn" uniqKey="Stratton V">VN Stratton</name>
</author>
<author>
<name sortKey="Zalanowski, Ah" uniqKey="Zalanowski A">AH Zalanowski</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ali, So" uniqKey="Ali S">SO Ali</name>
</author>
<author>
<name sortKey="Peynircio Lu, Zf" uniqKey="Peynircio Lu Z">ZF Peynircioğlu</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Besson, M" uniqKey="Besson M">M Besson</name>
</author>
<author>
<name sortKey="Faita, F" uniqKey="Faita F">F Faïta</name>
</author>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I Peretz</name>
</author>
<author>
<name sortKey="Bonnel, A M" uniqKey="Bonnel A">A-M Bonnel</name>
</author>
<author>
<name sortKey="Requin, J" uniqKey="Requin J">J Requin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bonnel, Am" uniqKey="Bonnel A">AM Bonnel</name>
</author>
<author>
<name sortKey="Faita, F" uniqKey="Faita F">F Faïta</name>
</author>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I Peretz</name>
</author>
<author>
<name sortKey="Besson, M" uniqKey="Besson M">M Besson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Besouw, Rm" uniqKey="Van Besouw R">RM van Besouw</name>
</author>
<author>
<name sortKey="Howard, Dm" uniqKey="Howard D">DM Howard</name>
</author>
<author>
<name sortKey="Ternstrom, S" uniqKey="Ternstrom S">S Ternstrom</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kolinsky, R" uniqKey="Kolinsky R">R Kolinsky</name>
</author>
<author>
<name sortKey="Lidji, P" uniqKey="Lidji P">P Lidji</name>
</author>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I Peretz</name>
</author>
<author>
<name sortKey="Besson, M" uniqKey="Besson M">M Besson</name>
</author>
<author>
<name sortKey="Morais, J" uniqKey="Morais J">J Morais</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bigand, E" uniqKey="Bigand E">E Bigand</name>
</author>
<author>
<name sortKey="Tillmann, B" uniqKey="Tillmann B">B Tillmann</name>
</author>
<author>
<name sortKey="Poulin, B" uniqKey="Poulin B">B Poulin</name>
</author>
<author>
<name sortKey="D Adamo, Da" uniqKey="D Adamo D">DA D'Adamo</name>
</author>
<author>
<name sortKey="Madurell, F" uniqKey="Madurell F">F Madurell</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Thompson, Wf" uniqKey="Thompson W">WF Thompson</name>
</author>
<author>
<name sortKey="Russo, Fa" uniqKey="Russo F">FA Russo</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Poulin Charronnat, B" uniqKey="Poulin Charronnat B">B Poulin-Charronnat</name>
</author>
<author>
<name sortKey="Bigand, E" uniqKey="Bigand E">E Bigand</name>
</author>
<author>
<name sortKey="Madurell, F" uniqKey="Madurell F">F Madurell</name>
</author>
<author>
<name sortKey="Peereman, R" uniqKey="Peereman R">R Peereman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fedorenko, E" uniqKey="Fedorenko E">E Fedorenko</name>
</author>
<author>
<name sortKey="Patel, A" uniqKey="Patel A">A Patel</name>
</author>
<author>
<name sortKey="Casasanto, D" uniqKey="Casasanto D">D Casasanto</name>
</author>
<author>
<name sortKey="Winawer, J" uniqKey="Winawer J">J Winawer</name>
</author>
<author>
<name sortKey="Gibson, E" uniqKey="Gibson E">E Gibson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Garner, Wr" uniqKey="Garner W">WR Garner</name>
</author>
<author>
<name sortKey="Felfoldy, Gl" uniqKey="Felfoldy G">GL Felfoldy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lidji, P" uniqKey="Lidji P">P Lidji</name>
</author>
<author>
<name sortKey="Jolicoeur, P" uniqKey="Jolicoeur P">P Jolicoeur</name>
</author>
<author>
<name sortKey="Moreau, P" uniqKey="Moreau P">P Moreau</name>
</author>
<author>
<name sortKey="Kolinsky, R" uniqKey="Kolinsky R">R Kolinsky</name>
</author>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I Peretz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Levy, Da" uniqKey="Levy D">DA Levy</name>
</author>
<author>
<name sortKey="Granot, R" uniqKey="Granot R">R Granot</name>
</author>
<author>
<name sortKey="Bentin, S" uniqKey="Bentin S">S Bentin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Levy, Da" uniqKey="Levy D">DA Levy</name>
</author>
<author>
<name sortKey="Granot, R" uniqKey="Granot R">R Granot</name>
</author>
<author>
<name sortKey="Bentin, S" uniqKey="Bentin S">S Bentin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bigand, E" uniqKey="Bigand E">E Bigand</name>
</author>
<author>
<name sortKey="Poulin Charronnat, B" uniqKey="Poulin Charronnat B">B Poulin-Charronnat</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bentin, S" uniqKey="Bentin S">S Bentin</name>
</author>
<author>
<name sortKey="Kutas, M" uniqKey="Kutas M">M Kutas</name>
</author>
<author>
<name sortKey="Hillyard, Sa" uniqKey="Hillyard S">SA Hillyard</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Holcomb, Pj" uniqKey="Holcomb P">PJ Holcomb</name>
</author>
<author>
<name sortKey="Neville, Hj" uniqKey="Neville H">HJ Neville</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kutas, M" uniqKey="Kutas M">M Kutas</name>
</author>
<author>
<name sortKey="Hillyard, Sa" uniqKey="Hillyard S">SA Hillyard</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kutas, M" uniqKey="Kutas M">M Kutas</name>
</author>
<author>
<name sortKey="Hillyard, Sa" uniqKey="Hillyard S">SA Hillyard</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mccallum, Wc" uniqKey="Mccallum W">WC McCallum</name>
</author>
<author>
<name sortKey="Farmer, Sf" uniqKey="Farmer S">SF Farmer</name>
</author>
<author>
<name sortKey="Pocock, Pv" uniqKey="Pocock P">PV Pocock</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Besson, M" uniqKey="Besson M">M Besson</name>
</author>
<author>
<name sortKey="Van Petten, Cv" uniqKey="Van Petten C">CV van Petten</name>
</author>
<author>
<name sortKey="Kutas, M" uniqKey="Kutas M">M Kutas</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Meyer, De" uniqKey="Meyer D">DE Meyer</name>
</author>
<author>
<name sortKey="Schvaneveldt, Rw" uniqKey="Schvaneveldt R">RW Schvaneveldt</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Neely, Jh" uniqKey="Neely J">JH Neely</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Besson, M" uniqKey="Besson M">M Besson</name>
</author>
<author>
<name sortKey="Macar, F" uniqKey="Macar F">F Macar</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Besson, M" uniqKey="Besson M">M Besson</name>
</author>
<author>
<name sortKey="Faita, F" uniqKey="Faita F">F Faïta</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Paller, Ka" uniqKey="Paller K">KA Paller</name>
</author>
<author>
<name sortKey="Mccarthy, G" uniqKey="Mccarthy G">G McCarthy</name>
</author>
<author>
<name sortKey="Wood, Cc" uniqKey="Wood C">CC Wood</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Verleger, R" uniqKey="Verleger R">R Verleger</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tillmann, B" uniqKey="Tillmann B">B Tillmann</name>
</author>
<author>
<name sortKey="Janata, P" uniqKey="Janata P">P Janata</name>
</author>
<author>
<name sortKey="Bharucha, Jj" uniqKey="Bharucha J">JJ Bharucha</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Miranda, Ra" uniqKey="Miranda R">RA Miranda</name>
</author>
<author>
<name sortKey="Ullman, Mt" uniqKey="Ullman M">MT Ullman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pachella, Rg" uniqKey="Pachella R">RG Pachella</name>
</author>
<author>
<name sortKey="Miller, Jo" uniqKey="Miller J">JO Miller</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gregg, Mk" uniqKey="Gregg M">MK Gregg</name>
</author>
<author>
<name sortKey="Samuel, Ag" uniqKey="Samuel A">AG Samuel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Thomas, Rd" uniqKey="Thomas R">RD Thomas</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Astesano, C" uniqKey="Astesano C">C Astésano</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nguyen, N" uniqKey="Nguyen N">N Nguyen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mccarthy, G" uniqKey="Mccarthy G">G McCarthy</name>
</author>
<author>
<name sortKey="Wood, Cc" uniqKey="Wood C">CC Wood</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Urbach, Tp" uniqKey="Urbach T">TP Urbach</name>
</author>
<author>
<name sortKey="Kutas, M" uniqKey="Kutas M">M Kutas</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Perrin, F" uniqKey="Perrin F">F Perrin</name>
</author>
<author>
<name sortKey="Garcia Larrea, L" uniqKey="Garcia Larrea L">L García-Larrea</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Relander, K" uniqKey="Relander K">K Relander</name>
</author>
<author>
<name sortKey="R M, P" uniqKey="R M P">P Rämä</name>
</author>
<author>
<name sortKey="Kujala, T" uniqKey="Kujala T">T Kujala</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Astesano, C" uniqKey="Astesano C">C Astésano</name>
</author>
<author>
<name sortKey="Besson, M" uniqKey="Besson M">M Besson</name>
</author>
<author>
<name sortKey="Alter, K" uniqKey="Alter K">K Alter</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hohlfeld, A" uniqKey="Hohlfeld A">A Hohlfeld</name>
</author>
<author>
<name sortKey="Sommer, W" uniqKey="Sommer W">W Sommer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Magne, C" uniqKey="Magne C">C Magne</name>
</author>
<author>
<name sortKey="Astesano, C" uniqKey="Astesano C">C Astésano</name>
</author>
<author>
<name sortKey="Aramaki, M" uniqKey="Aramaki M">M Aramaki</name>
</author>
<author>
<name sortKey="Ystad, S" uniqKey="Ystad S">S Ystad</name>
</author>
<author>
<name sortKey="Kronland Martinet, R" uniqKey="Kronland Martinet R">R Kronland-Martinet</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Iba Ez, A" uniqKey="Iba Ez A">A Ibáñez</name>
</author>
<author>
<name sortKey="L Pez, V" uniqKey="L Pez V">V López</name>
</author>
<author>
<name sortKey="Cornejo, C" uniqKey="Cornejo C">C Cornejo</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Digeser, Fm" uniqKey="Digeser F">FM Digeser</name>
</author>
<author>
<name sortKey="Wohlberedt, T" uniqKey="Wohlberedt T">T Wohlberedt</name>
</author>
<author>
<name sortKey="Hoppe, U" uniqKey="Hoppe U">U Hoppe</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nguyen, N" uniqKey="Nguyen N">N Nguyen</name>
</author>
<author>
<name sortKey="Fagyal, Z" uniqKey="Fagyal Z">Z Fagyal</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hagoort, P" uniqKey="Hagoort P">P Hagoort</name>
</author>
<author>
<name sortKey="Brown, Cm" uniqKey="Brown C">CM Brown</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Petten, C" uniqKey="Van Petten C">C Van Petten</name>
</author>
<author>
<name sortKey="Coulson, S" uniqKey="Coulson S">S Coulson</name>
</author>
<author>
<name sortKey="Rubin, S" uniqKey="Rubin S">S Rubin</name>
</author>
<author>
<name sortKey="Plante, E" uniqKey="Plante E">E Plante</name>
</author>
<author>
<name sortKey="Parks, M" uniqKey="Parks M">M Parks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Janata, P" uniqKey="Janata P">P Janata</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kutas, M" uniqKey="Kutas M">M Kutas</name>
</author>
<author>
<name sortKey="Mccarthy, G" uniqKey="Mccarthy G">G McCarthy</name>
</author>
<author>
<name sortKey="Donchin, E" uniqKey="Donchin E">E Donchin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Johnson, R" uniqKey="Johnson R">R Johnson</name>
</author>
<author>
<name sortKey="Donchin, E" uniqKey="Donchin E">E Donchin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Donchin, E" uniqKey="Donchin E">E Donchin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Polich, J" uniqKey="Polich J">J Polich</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Carrion, Re" uniqKey="Carrion R">RE Carrion</name>
</author>
<author>
<name sortKey="Bly, Bm" uniqKey="Bly B">BM Bly</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cunningham, Wa" uniqKey="Cunningham W">WA Cunningham</name>
</author>
<author>
<name sortKey="Espinet, Sd" uniqKey="Espinet S">SD Espinet</name>
</author>
<author>
<name sortKey="Deyoung, Cg" uniqKey="Deyoung C">CG DeYoung</name>
</author>
<author>
<name sortKey="Zelazo, Pd" uniqKey="Zelazo P">PD Zelazo</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pastor, Mc" uniqKey="Pastor M">MC Pastor</name>
</author>
<author>
<name sortKey="Bradley, Mm" uniqKey="Bradley M">MM Bradley</name>
</author>
<author>
<name sortKey="Low, A" uniqKey="Low A">A Low</name>
</author>
<author>
<name sortKey="Versace, F" uniqKey="Versace F">F Versace</name>
</author>
<author>
<name sortKey="Molto, J" uniqKey="Molto J">J Molto</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Spreckelmeyer, Kn" uniqKey="Spreckelmeyer K">KN Spreckelmeyer</name>
</author>
<author>
<name sortKey="Kutas, M" uniqKey="Kutas M">M Kutas</name>
</author>
<author>
<name sortKey="Urbach, Tp" uniqKey="Urbach T">TP Urbach</name>
</author>
<author>
<name sortKey="Altenmuller, E" uniqKey="Altenmuller E">E Altenmüller</name>
</author>
<author>
<name sortKey="Munte, Tf" uniqKey="Munte T">TF Münte</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Serafine, Ml" uniqKey="Serafine M">ML Serafine</name>
</author>
<author>
<name sortKey="Davidson, J" uniqKey="Davidson J">J Davidson</name>
</author>
<author>
<name sortKey="Crowder, Rg" uniqKey="Crowder R">RG Crowder</name>
</author>
<author>
<name sortKey="Repp, Bh" uniqKey="Repp B">BH Repp</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mietz, A" uniqKey="Mietz A">A Mietz</name>
</author>
<author>
<name sortKey="Toepel, U" uniqKey="Toepel U">U Toepel</name>
</author>
<author>
<name sortKey="Ischebeck, A" uniqKey="Ischebeck A">A Ischebeck</name>
</author>
<author>
<name sortKey="Alter, K" uniqKey="Alter K">K Alter</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schmidt Kassow, M" uniqKey="Schmidt Kassow M">M Schmidt-Kassow</name>
</author>
<author>
<name sortKey="Kotz, Sa" uniqKey="Kotz S">SA Kotz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lau, E" uniqKey="Lau E">E Lau</name>
</author>
<author>
<name sortKey="Almeida, D" uniqKey="Almeida D">D Almeida</name>
</author>
<author>
<name sortKey="Hines, Pc" uniqKey="Hines P">PC Hines</name>
</author>
<author>
<name sortKey="Poeppel, D" uniqKey="Poeppel D">D Poeppel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kutas, M" uniqKey="Kutas M">M Kutas</name>
</author>
<author>
<name sortKey="Federmeier, Kd" uniqKey="Federmeier K">KD Federmeier</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Aramaki, M" uniqKey="Aramaki M">M Aramaki</name>
</author>
<author>
<name sortKey="Marie, C" uniqKey="Marie C">C Marie</name>
</author>
<author>
<name sortKey="Kronland Martinet, R" uniqKey="Kronland Martinet R">R Kronland-Martinet</name>
</author>
<author>
<name sortKey="Ystad, S" uniqKey="Ystad S">S Ystad</name>
</author>
<author>
<name sortKey="Besson, M" uniqKey="Besson M">M Besson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Slevc, Lr" uniqKey="Slevc L">LR Slevc</name>
</author>
<author>
<name sortKey="Rosenberg, Jc" uniqKey="Rosenberg J">JC Rosenberg</name>
</author>
<author>
<name sortKey="Patel, Ad" uniqKey="Patel A">AD Patel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schon, D" uniqKey="Schon D">D Schön</name>
</author>
<author>
<name sortKey="Gordon, R" uniqKey="Gordon R">R Gordon</name>
</author>
<author>
<name sortKey="Campagne, A" uniqKey="Campagne A">A Campagne</name>
</author>
<author>
<name sortKey="Magne, C" uniqKey="Magne C">C Magne</name>
</author>
<author>
<name sortKey="Astesano, C" uniqKey="Astesano C">C Astésano</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dissanayake, E" uniqKey="Dissanayake E">E Dissanayake</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bergeson, Tr" uniqKey="Bergeson T">TR Bergeson</name>
</author>
<author>
<name sortKey="Trehub, Se" uniqKey="Trehub S">SE Trehub</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Norton, A" uniqKey="Norton A">A Norton</name>
</author>
<author>
<name sortKey="Zipse, L" uniqKey="Zipse L">L Zipse</name>
</author>
<author>
<name sortKey="Marchina, S" uniqKey="Marchina S">S Marchina</name>
</author>
<author>
<name sortKey="Schlaug, G" uniqKey="Schlaug G">G Schlaug</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">PLoS One</journal-id>
<journal-id journal-id-type="publisher-id">plos</journal-id>
<journal-id journal-id-type="pmc">plosone</journal-id>
<journal-title-group>
<journal-title>PLoS ONE</journal-title>
</journal-title-group>
<issn pub-type="epub">1932-6203</issn>
<publisher>
<publisher-name>Public Library of Science</publisher-name>
<publisher-loc>San Francisco, USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">20360991</article-id>
<article-id pub-id-type="pmc">2847603</article-id>
<article-id pub-id-type="publisher-id">09-PONE-RA-12518R2</article-id>
<article-id pub-id-type="doi">10.1371/journal.pone.0009889</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Research Article</subject>
</subj-group>
<subj-group subj-group-type="Discipline">
<subject>Neuroscience/Cognitive Neuroscience</subject>
<subject>Neuroscience/Psychology</subject>
<subject>Neuroscience/Experimental Psychology</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Words and Melody Are Intertwined in Perception of Sung Words: EEG and Behavioral Evidence</article-title>
<alt-title alt-title-type="running-head">Melody Modulates N400 in Song</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Gordon</surname>
<given-names>Reyna L.</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<xref ref-type="corresp" rid="cor1">
<sup>*</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Schön</surname>
<given-names>Daniele</given-names>
</name>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Magne</surname>
<given-names>Cyrille</given-names>
</name>
<xref ref-type="aff" rid="aff3">
<sup>3</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Astésano</surname>
<given-names>Corine</given-names>
</name>
<xref ref-type="aff" rid="aff4">
<sup>4</sup>
</xref>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Besson</surname>
<given-names>Mireille</given-names>
</name>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
</contrib>
</contrib-group>
<aff id="aff1">
<label>1</label>
<addr-line>Center for Complex Systems and Brain Sciences, Florida Atlantic University, Boca Raton, Florida, United States of America</addr-line>
</aff>
<aff id="aff2">
<label>2</label>
<addr-line>Institut de Neurosciences Cognitives de la Méditerranée, CNRS and Université de la Méditerranée, Marseille, France</addr-line>
</aff>
<aff id="aff3">
<label>3</label>
<addr-line>Department of Psychology, Middle Tennessee State University, Murfreesboro, Tennessee, United States of America</addr-line>
</aff>
<aff id="aff4">
<label>4</label>
<addr-line>U.R.I. Octogone-Lordat 4156, Université de Toulouse II, Toulouse, France</addr-line>
</aff>
<contrib-group>
<contrib contrib-type="editor">
<name>
<surname>Rodriguez-Fornells</surname>
<given-names>Antoni</given-names>
</name>
<role>Editor</role>
<xref ref-type="aff" rid="edit1"></xref>
</contrib>
</contrib-group>
<aff id="edit1">University of Barcelona, Spain</aff>
<author-notes>
<corresp id="cor1">* E-mail:
<email>reyna.gordon@alumni.usc.edu</email>
</corresp>
<fn fn-type="con">
<p>Conceived and designed the experiments: RLG DS CM CA MB. Performed the experiments: RLG CA. Analyzed the data: RLG DS. Contributed reagents/materials/analysis tools: RLG DS CM CA. Wrote the paper: RLG DS CM CA MB.</p>
</fn>
</author-notes>
<pub-date pub-type="collection">
<year>2010</year>
</pub-date>
<pub-date pub-type="epub">
<day>31</day>
<month>3</month>
<year>2010</year>
</pub-date>
<volume>5</volume>
<issue>3</issue>
<elocation-id>e9889</elocation-id>
<history>
<date date-type="received">
<day>19</day>
<month>8</month>
<year>2009</year>
</date>
<date date-type="accepted">
<day>26</day>
<month>2</month>
<year>2010</year>
</date>
</history>
<permissions>
<copyright-statement>Gordon et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.</copyright-statement>
</permissions>
<abstract>
<p>Language and music, two of the most unique human cognitive abilities, are combined in song, rendering it an ecological model for comparing speech and music cognition. The present study was designed to determine whether words and melodies in song are processed interactively or independently, and to examine the influence of attention on the processing of words and melodies in song. Event-Related brain Potentials (ERPs) and behavioral data were recorded while non-musicians listened to pairs of sung words (prime and target) presented in four experimental conditions: same word, same melody; same word, different melody; different word, same melody; different word, different melody. Participants were asked to attend to either the words or the melody, and to perform a same/different task. In both attentional tasks, different word targets elicited an N400 component, as predicted based on previous results. Most interestingly, different melodies (sung with the same word) elicited an N400 component followed by a late positive component. Finally, ERP and behavioral data converged in showing interactions between the linguistic and melodic dimensions of sung words. The finding that the N400 effect, a well-established marker of semantic processing, was modulated by musical melody in song suggests that variations in musical features affect word processing in sung language. Implications of the interactions between words and melody are discussed in light of evidence for shared neural processing resources between the phonological/semantic aspects of language and the melodic/harmonic aspects of music.</p>
</abstract>
<counts>
<page-count count="12"></page-count>
</counts>
</article-meta>
</front>
<body>
<sec id="s1">
<title>Introduction</title>
<p>Strong arguments have been made for both the opposing frameworks of modularity versus shared resources underlying language and music cognition (see reviews
<xref ref-type="bibr" rid="pone.0009889-Besson1">[1]</xref>
<xref ref-type="bibr" rid="pone.0009889-Patel2">[5]</xref>
). On the one hand, double dissociations of linguistic and musical processes, documented in neuropsychological case studies, often point to domain-specific and separate neural substrates for language and music
<xref ref-type="bibr" rid="pone.0009889-Peretz1">[3]</xref>
,
<xref ref-type="bibr" rid="pone.0009889-Peretz2">[6]</xref>
<xref ref-type="bibr" rid="pone.0009889-Schmithorst1">[9]</xref>
. On the other hand, results of brain imaging and behavioral studies have often demonstrated shared or similar resources underlying, for instance, syntactic and harmonic processing
<xref ref-type="bibr" rid="pone.0009889-Patel3">[10]</xref>
<xref ref-type="bibr" rid="pone.0009889-Gelfand1">[14]</xref>
, auditory working memory for both linguistic and musical stimuli
<xref ref-type="bibr" rid="pone.0009889-Hickok1">[15]</xref>
, and semantic or semiotic priming
<xref ref-type="bibr" rid="pone.0009889-Koelsch3">[16]</xref>
<xref ref-type="bibr" rid="pone.0009889-Daltrozzo1">[21]</xref>
.</p>
<p>These conflicting results may stem from the use of different methods, but also from other methodological problems. The main disadvantage to comparing language and music processing by testing perception of speech and musical excerpts is that the acoustic properties, context, and secondary associations (e.g., musical style or linguistic pragmatics) between even the most carefully controlled stimuli may vary greatly between the two domains. One ecological alternative is to study the perception of song
<xref ref-type="bibr" rid="pone.0009889-Gordon1">[22]</xref>
. In this case, linguistic and musical information are contained in one auditory signal that is also a universal form of human vocal expression. Furthermore, a better understanding of the neural basis of song is surely germane to the ongoing debate on the evolutionary origins of language and music, especially in view of propositions that the protolanguage used by early humans was characterized by singing
<xref ref-type="bibr" rid="pone.0009889-Brown1">[23]</xref>
,
<xref ref-type="bibr" rid="pone.0009889-Mithen1">[24]</xref>
and that vocal learning was a key feature governing the evolution of musical and linguistic rhythm
<xref ref-type="bibr" rid="pone.0009889-Patel4">[25]</xref>
. While most studies of music cognition have used non-vocal music stimuli, everyday music-making and listening usually involve singing. Moreover, from a developmental perspective, singing is also quite relevant for parent-infant bonding, as indicated by studies showing that babies prefer infant-directed singing to infant-directed speech
<xref ref-type="bibr" rid="pone.0009889-Nakata1">[26]</xref>
,
<xref ref-type="bibr" rid="pone.0009889-delEtoile1">[27]</xref>
.</p>
<p>Early studies of song cognition used dichotic listening paradigms to reveal lateralization patterns of left-ear (right hemisphere) advantage for melody recognition and right-ear (left hemisphere) advantage for phoneme recognition in song
<xref ref-type="bibr" rid="pone.0009889-Bartholomeus1">[28]</xref>
and in the recall of musical and linguistic content of sung digits
<xref ref-type="bibr" rid="pone.0009889-Goodglass1">[29]</xref>
. Despite the lateralization tendencies, melody and lyrics appear to be tightly integrated in recognition
<xref ref-type="bibr" rid="pone.0009889-Serafine1">[30]</xref>
and priming experiments
<xref ref-type="bibr" rid="pone.0009889-Peretz3">[31]</xref>
. Indeed, the melody of a song may facilitate learning and recall of the words
<xref ref-type="bibr" rid="pone.0009889-Wallace1">[32]</xref>
,
<xref ref-type="bibr" rid="pone.0009889-Rainey1">[33]</xref>
, though this advantage appears to be diminished when the rate of presentation is controlled for, such that spoken lyrics are presented at the same rate as sung ones
<xref ref-type="bibr" rid="pone.0009889-Kilgour1">[34]</xref>
. Furthermore, the segmentation of a pseudo-language into relevant units is facilitated for sung compared to spoken pseudowords
<xref ref-type="bibr" rid="pone.0009889-Schn2">[35]</xref>
, and infants learn words more easily when sung on melodies rather than when spoken
<xref ref-type="bibr" rid="pone.0009889-Thiessen1">[36]</xref>
.</p>
<p>The extent to which semantics and emotions are conveyed by song lyrics remains a controversial issue. One study showed that when participants were asked to listen to songs from a variety of popular music genres, they performed only at chance level when attempting to interpret the singer's intended message of each song
<xref ref-type="bibr" rid="pone.0009889-Koneni1">[37]</xref>
. Thus, while explicit literary interpretations of song lyrics do not appear consistent in this study, other work has suggested that sung lyrics have a greater influence over listeners' mood than the same melody played on an instrument
<xref ref-type="bibr" rid="pone.0009889-Stratton1">[38]</xref>
. However, this effect was amplified when the lyrics were sung with piano accompaniment, showing that the musical dimension retains importance. It has also been reported that lyrics intensify emotional responses to sad and angry music, yet mitigate the response to happy and calm music
<xref ref-type="bibr" rid="pone.0009889-Ali1">[39]</xref>
.</p>
<p>A key feature of several recent studies is the use of attentional focus to examine the interaction or independence of words and melodies in song, either by directing listeners' attention to language and music simultaneously
<xref ref-type="bibr" rid="pone.0009889-Besson2">[40]</xref>
<xref ref-type="bibr" rid="pone.0009889-vanBesouw1">[42]</xref>
, or to language only
<xref ref-type="bibr" rid="pone.0009889-Bonnel1">[41]</xref>
,
<xref ref-type="bibr" rid="pone.0009889-Kolinsky1">[43]</xref>
<xref ref-type="bibr" rid="pone.0009889-Fedorenko1">[47]</xref>
, or to music only
<xref ref-type="bibr" rid="pone.0009889-Bonnel1">[41]</xref>
,
<xref ref-type="bibr" rid="pone.0009889-Kolinsky1">[43]</xref>
. Some of these studies have demonstrated interactive effects between the linguistic and musical dimensions of song, thereby suggesting that common cognitive processes and neural resources are engaged to process language and music. Bigand et al.
<xref ref-type="bibr" rid="pone.0009889-Bigand1">[44]</xref>
showed that a subtle variation in harmonic processing interfered with phoneme monitoring in the perception of choral music sung with pseudowords. In a follow-up study, the authors used a lexical decision task on sung sentence material to demonstrate that harmonic processing also interfered with semantic priming
<xref ref-type="bibr" rid="pone.0009889-PoulinCharronnat1">[46]</xref>
. These observed interactions between semantics and harmony, measured through the implicit processing of the musical dimension, suggest that language and music in song are perceptually interwoven. Interestingly, data recently obtained by Kolinsky et al.
<xref ref-type="bibr" rid="pone.0009889-Kolinsky1">[43]</xref>
using a Garner paradigm
<xref ref-type="bibr" rid="pone.0009889-Garner1">[48]</xref>
provides evidence that, while consonants remain separable from melody, vowels and melody are strongly integrated in song perception. This interaction may stem from integration of vowel and musical pitch in initial stages of sensory processing
<xref ref-type="bibr" rid="pone.0009889-Lidji1">[49]</xref>
. Sung sentences were also used by Fedorenko et al.
<xref ref-type="bibr" rid="pone.0009889-Fedorenko1">[47]</xref>
to demonstrate that the processing of syntactically complex sentences in language is modulated by structural manipulations in music, thereby indicating that structural aspects of language and music seem to be integrated in song perception.</p>
<p>By contrast, other studies of song perception and memory have shown evidence for independent processing of the linguistic and musical dimensions of song. Besson et al.
<xref ref-type="bibr" rid="pone.0009889-Besson2">[40]</xref>
used the Event-Related brain Potential (ERP) method to study the relationship between words and melodies in the perception of opera excerpts sung without instrumental accompaniment. When musicians were asked to passively listen to the opera excerpts and pay equal attention to lyrics and tunes, results showed distinct ERP components for semantic (N400) and harmonic (P300) violations. Furthermore, the observed effects were well accounted for by an additive model of semantic and harmonic processing (i.e., results in the double violation condition were not significantly different from the sum of the simple semantic and melodic violations). Additional behavioral evidence for the independence of semantics and harmony in song was provided by a second experiment utilizing the same stimuli
<xref ref-type="bibr" rid="pone.0009889-Bonnel1">[41]</xref>
and a dual task paradigm. When musician and non-musician listeners had to detect semantic and/or harmonic violations in song, results showed that regardless of musical expertise, there was no decrease in performance when listeners simultaneously attended language and music, compared to attending only one dimension at a time. These results contrast with those recently obtained by van Besouw et al.
<xref ref-type="bibr" rid="pone.0009889-vanBesouw1">[42]</xref>
, showing a detriment to performance in recalling pitch contour and recalling words when listeners had to simultaneously pay attention to the words and pitch in song, as well as a similar detriment when they were asked to pay attention to the words and pitch contour of speech. Singing was also used innovatively in a series of experiments by Levy et al.
<xref ref-type="bibr" rid="pone.0009889-Levy1">[50]</xref>
,
<xref ref-type="bibr" rid="pone.0009889-Levy2">[51]</xref>
that highlighted the influence of task demands and attentional focus on the perception of human voices in a non-linguistic context; the oddball paradigm generated a task-dependent positive ERP component (P320) in response to sung tones compared to instrumental tones.</p>
<p>The present study was developed to further investigate the interaction or independence of the linguistic and musical dimensions by examining the electrophysiological and behavioral correlates of words and melody in the perception of songs by individuals without formal musical training (and who are thus most representative of the general population). The choice to test non-musician participants was motivated by compelling evidence reviewed by Bigand & Poulin-Charronnat
<xref ref-type="bibr" rid="pone.0009889-Bigand2">[52]</xref>
, in support of the idea that day-to-day exposure to music leads non-musicians to implicitly process the structural aspects of music according to principles similar to those used (albeit less explicitly) by individuals who have received extensive musical training. Results obtained with behavioral measures on non-musician participants demonstrate that pseudowords and intervals are processed interactively in song perception, regardless of whether listeners attend to the linguistic or to the musical dimension
<xref ref-type="bibr" rid="pone.0009889-Kolinsky1">[43]</xref>
. Our goal was to determine whether the interactions between lyrics and tunes would also be observed when the linguistic and musical complexity of the sung stimuli was increased by using real words sung on short melodies.</p>
<p>The specific aim of the present experiment was two-fold: to determine the nature of the relationship (independent or interactive) between the linguistic and musical dimensions of sung words, and to specify how attention influences the dynamics of that relationship. To achieve these goals, we presented listeners with prime-target pairs of tri-syllabic words sung on 3-note melodies and recorded behavioral and electrophysiological data while they performed a same/different task. Compared to the prime, the words and melody of the sung target were manipulated orthogonally to create four experimental conditions: Same Word/Same Melody (W = M = ); Same Word/Different Melody (W = M≠); Different Word/Same Melody (W≠M = ); Different Word/Different Melody (W≠M≠; see
<xref ref-type="fig" rid="pone-0009889-g001">Figure 1</xref>
for examples).</p>
<fig id="pone-0009889-g001" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0009889.g001</object-id>
<label>Figure 1</label>
<caption>
<title>Stimuli examples.</title>
<p>Examples of stimuli in the four experimental conditions: same word, same melody (a); same word, different melody (b); different word, same melody (c); different word, different melody (d).</p>
</caption>
<graphic xlink:href="pone.0009889.g001"></graphic>
</fig>
<p>On the basis of previous findings that the N400 component is elicited by semantically unexpected or unrelated words in pairs of words
<xref ref-type="bibr" rid="pone.0009889-Bentin1">[53]</xref>
,
<xref ref-type="bibr" rid="pone.0009889-Holcomb1">[54]</xref>
, read and spoken sentences
<xref ref-type="bibr" rid="pone.0009889-Kutas1">[55]</xref>
<xref ref-type="bibr" rid="pone.0009889-McCallum1">[57]</xref>
, and sung sentences
<xref ref-type="bibr" rid="pone.0009889-Besson2">[40]</xref>
, and from results showing decreased N400 amplitude with repetition
<xref ref-type="bibr" rid="pone.0009889-Besson3">[58]</xref>
, we predicted that different targets, semantically unrelated to the prime (W≠), would elicit larger N400 components, slower Reaction Times (RTs) and higher error rates than same, repeated targets (W = )
<xref ref-type="bibr" rid="pone.0009889-Meyer1">[59]</xref>
,
<xref ref-type="bibr" rid="pone.0009889-Neely1">[60]</xref>
.</p>
<p>Besson et al.
<xref ref-type="bibr" rid="pone.0009889-Besson2">[40]</xref>
also showed that an opera excerpt ending on an incongruous pitch evoked a positive component, P300/P600, typically associated with surprising events such as melodic incongruities
<xref ref-type="bibr" rid="pone.0009889-Besson4">[61]</xref>
<xref ref-type="bibr" rid="pone.0009889-Verleger1">[64]</xref>
. Thus, we predicted that different melodies (M≠) would also elicit larger P300/P600 components, and slower RTs and higher error rates
<xref ref-type="bibr" rid="pone.0009889-Tillmann1">[65]</xref>
, compared to same melody (M = ).</p>
<p>Finally, if the perception of words and melodies in songs calls upon independent processes, the Word effect (different – same word) should be similar, in behavioral measures and N400 amplitude, for same and different melodies. Likewise, the Melody effect (different – same melody) should be similar, in behavioral measures and P300/P600 amplitude, for same and different words. If the perception of words and melodies in sung words relies instead on interactive processes, the Word effect should differ between same and different melodies (interference effects), and vice-versa for the Melody effect. In addition, the use of an orthogonal design allows us to test the additive model, according to which the ERP effect in the double variation condition (W≠M≠) should be equivalent to the sum of the ERP effects in the simple variation conditions (W≠M =  plus W = M≠).</p>
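<p>Stated explicitly with the difference-wave notation used later in the Results (this is only a restatement of the prediction above, with each effect measured relative to the fully repeated condition W = M = ):</p>
<preformat>
\underbrace{\mathrm{ERP}(W{\neq}M{\neq}) - \mathrm{ERP}(W{=}M{=})}_{d3}
  \;=\;
\underbrace{\mathrm{ERP}(W{\neq}M{=}) - \mathrm{ERP}(W{=}M{=})}_{d1}
  \;+\;
\underbrace{\mathrm{ERP}(W{=}M{\neq}) - \mathrm{ERP}(W{=}M{=})}_{d2}
</preformat>
<p>Any reliable departure from this equality argues against fully independent processing of words and melody.</p>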
<p>In order to determine how attention to one dimension or the other modulates the processing of words and melody in song, we asked participants to perform a same/different task on the same set of stimuli and to focus their attention either on the linguistic dimension (Linguistic Task: is the target word the same as or different from the prime word?) or on the musical dimension (Musical Task: is the target melody the same as or different from the prime melody?). The same-different task has been used extensively in the literature to investigate the relationship between two dimensions of a stimulus in various modalities (e.g., melody recognition
<xref ref-type="bibr" rid="pone.0009889-Miranda1">[66]</xref>
; letter recognition
<xref ref-type="bibr" rid="pone.0009889-Pachella1">[67]</xref>
; meaningful environmental sounds
<xref ref-type="bibr" rid="pone.0009889-Gregg1">[68]</xref>
), and is particularly effective when participants are asked to attend to only one dimension at a time (see Thomas
<xref ref-type="bibr" rid="pone.0009889-Thomas1">[69]</xref>
for a review and in-depth analysis of the same-different task).</p>
</sec>
<sec id="s2" sec-type="methods">
<title>Methods</title>
<sec id="s2a">
<title>A. Participants</title>
<p>Twenty-one volunteers (15 females; mean age = 25 years; age range 18–32) were paid 16 euros to participate in this experiment, which lasted about 90 minutes including preparation time. Informed consent was obtained from all participants, and the data were analyzed anonymously. Verbal consent was used because, at the time of data collection, the local ethics committee did not require written consent for experiments using behavioral or ERP methods in healthy adult individuals. This study was approved by the CNRS - Mediterranean Institute for Cognitive Neuroscience and was conducted in accordance with local norms and guidelines for the protection of human subjects. All participants had normal hearing, no known neurological problems, and were native French-speaking, right-handed non-musicians (all had less than two years of formal music lessons).</p>
</sec>
<sec id="s2b">
<title>B. Stimuli</title>
<p>We created a set of 480 different pairs of stimuli (primes and targets). First, a list of 120 pairs of French tri-syllabic nouns was established. In each pair, the prime and target words were different and semantically unrelated. The phonological and phonetic characteristics of the words were controlled and we limited the use of certain phonemes with intrinsically longer durations (e.g. fricatives
<xref ref-type="bibr" rid="pone.0009889-Astsano1">[70]</xref>
), as well as consonant clusters, so that syllabic duration would be as consistent as possible between words. To increase task difficulty and to homogenize the linguistic and musical dimensions, the first syllable and the first note of the prime and target within a pair were always the same.</p>
<p>Next, 120 pairs of different 3-note isochronous melodies were created while controlling the harmonic content and using all 12 keys. All intervals up to the major sixth were used except the tritone. The melodic contour was also balanced across the stimuli. One quarter of the melodic pairs (30 pairs) consisted of a prime with rising contour (defined as two successive ascending intervals) paired with a target with falling contour (defined as two successive descending intervals) and vice versa for another ¼ of the pairs. The other half of the pairs consisted of “complex” contours: ¼ of the pairs had a prime made up of an ascending interval plus a descending interval, followed by a target with a descending plus an ascending interval, and vice-versa for the last ¼ of the pairs. These different types of contours were evenly distributed among the experimental conditions. No melody was used more than three times, and any melody appearing more than once was always transposed into a different key and paired with a different prime melody. The melodies were written in a vocal range that was comfortable for the singer.</p>
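<p>As an illustration of the contour categories defined above, here is a minimal sketch (the function, its name, and the MIDI-number encoding of pitch are ours, not part of the original materials) that classifies a 3-note melody from its two successive intervals:</p>
<preformat>
def classify_contour(pitches):
    """Classify a 3-note melody as rising, falling, or complex.

    `pitches` is a sequence of three MIDI note numbers (an assumed
    encoding); the categories follow the definitions given in the text.
    """
    first = pitches[1] - pitches[0]
    second = pitches[2] - pitches[1]
    if first > 0 and second > 0:
        return "rising"    # two successive ascending intervals
    if first < 0 and second < 0:
        return "falling"   # two successive descending intervals
    return "complex"       # ascending+descending or descending+ascending

# Example: C4-E4-G4 rises, G4-E4-C4 falls, C4-G4-E4 is "complex".
print(classify_contour([60, 64, 67]))  # "rising"
print(classify_contour([67, 64, 60]))  # "falling"
print(classify_contour([60, 67, 64]))  # "complex"
</preformat>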
<p>Finally, the pairs of melodies were randomly assigned to the pairs of words. Once the 120 different pairs had been created, they were distributed evenly over the four experimental conditions: W = M = ; W = M≠; W≠M =  and W≠M≠ with 30 trials per condition (see
<xref ref-type="fig" rid="pone-0009889-g001">Figure 1</xref>
and supporting materials
<xref ref-type="supplementary-material" rid="pone.0009889.s001">Audio S1</xref>
,
<xref ref-type="supplementary-material" rid="pone.0009889.s002">Audio S2</xref>
,
<xref ref-type="supplementary-material" rid="pone.0009889.s003">Audio S3</xref>
,
<xref ref-type="supplementary-material" rid="pone.0009889.s004">Audio S4</xref>
for stimulus examples, and the
<xref ref-type="supplementary-material" rid="pone.0009889.s005">Appendix S1</xref>
for a list of stimuli used). In order to control for specific stimulus effects, 4 lists were constructed so that each target appeared in all 4 conditions across the 4 lists (Latin square design).</p>
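<p>A minimal sketch of the Latin-square counterbalancing described above (the condition labels come from the text; the list-construction code itself is ours and only illustrates the rotation principle):</p>
<preformat>
CONDITIONS = ["W=M=", "W=M≠", "W≠M=", "W≠M≠"]

def build_lists(n_items=120, n_lists=4):
    """Rotate conditions across lists so that each item appears once per
    list and, across the 4 lists, once in each of the 4 conditions."""
    lists = {k: [] for k in range(n_lists)}
    for item in range(n_items):
        for k in range(n_lists):
            cond = CONDITIONS[(item + k) % len(CONDITIONS)]
            lists[k].append((item, cond))
    return lists

lists = build_lists()
# Each list contains 120 items with 30 per condition, and every item
# occurs in a different condition in each of the 4 lists.
assert all(sum(c == cond for _, c in lists[0]) == 30 for cond in CONDITIONS)
</preformat>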
<p>The 120 targets and 480 primes were sung
<italic>a cappella</italic>
by a baritone. Recording sessions took place in an anechoic room. In order to prevent listeners from making judgments based solely on lower-level acoustic cues, two different utterances of the sung words were selected to constitute the pairs in the W = M =  conditions (in natural speech/song no two pronunciations of a segment by the same speaker are ever identical, but listeners normalize over perceived segments
<xref ref-type="bibr" rid="pone.0009889-Nguyen1">[71]</xref>
). Although the singer sang at a tempo of 240 beats per minute to control syllable duration, natural syllabic lengthening always occurred on the last syllable/note, giving rise to an average stimulus duration of 913 ms (SD = 54 ms). All words were normalized in intensity to 66 dB (SD across items = 1 dB).</p>
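<p>As a back-of-the-envelope check (the arithmetic is ours, not stated in the original): at 240 beats per minute each note nominally lasts</p>
<preformat>
60{,}000~\mathrm{ms} \div 240 = 250~\mathrm{ms}, \qquad 3 \times 250~\mathrm{ms} = 750~\mathrm{ms},
</preformat>
<p>so the observed mean stimulus duration of 913 ms is consistent with the final-syllable lengthening mentioned above, and with the roughly 250 ms first syllable and second-note onset referred to in the Discussion.</p>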
</sec>
<sec id="s2c">
<title>C. Procedure</title>
<p>Participants listened, through headphones, to 120 pairs of sung words from the four experimental conditions presented in pseudorandom order. The same pairs were presented twice in two attentional tasks: Linguistic and Musical. In the Linguistic task, participants were instructed to pay attention only to the language in order to decide, by pressing one of two response keys as quickly and accurately as possible, if the two words were the same or different. In the Musical Task, participants were instructed to pay attention only to the music in order to decide, as quickly and accurately as possible, if the two melodies were the same or different.</p>
<p>Each session began with a block of practice trials. Each trial consisted of a prime sung word followed by a target sung word, with an SOA of 1800 ms. Participants were asked to avoid blinking until a series of X's appeared on the computer screen at the end of each trial. Response keys, order of tasks, and stimuli lists were counterbalanced across participants. The software Presentation (Neurobehavioral Systems, Albany, CA) was used to present stimuli and record behavioral responses (RTs and % errors).</p>
</sec>
<sec id="s2d">
<title>D. Data acquisition</title>
<p>EEG was recorded continuously from 32 “active” (pre-amplified) Ag-AgCl scalp electrodes (Biosemi, Amsterdam) placed according to the International 10/20 system. The data were re-referenced offline to the algebraic average of the left and right mastoids. In order to detect eye movements and blinks, the horizontal electrooculogram (EOG) was recorded from electrodes placed 1 cm to the left and right of the external canthi, and the vertical EOG was recorded from an electrode beneath the right eye. The EEG and EOG signals were digitized at 512 Hz and were filtered with a bandpass of 0.1–40 Hz (post-analysis data were filtered with a lowpass of 10 Hz for visualization purposes only). Data were later segmented into single trials of 2200 ms starting 200 ms (baseline) before target onset. Trials containing ocular or movement artifacts or amplifier saturation (determined by visual inspection) were excluded from the averaged ERP waveforms (i.e., on average 12% of the trials, thereby leaving approximately 26 out of a possible 30 trials in each condition per participant). Individual data analysis and grand averages were computed using the Brain Vision Analyzer software (Brain Products, Munich).</p>
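<p>For readers who wish to reproduce this kind of preprocessing, the following MNE-Python sketch mirrors the pipeline described above (the file name, mastoid channel names, and event codes are placeholders; the original analyses were carried out in Brain Vision Analyzer, and artifacts were rejected by visual inspection rather than by an amplitude criterion):</p>
<preformat>
import mne

# Biosemi recording at 512 Hz (placeholder file name).
raw = mne.io.read_raw_bdf("subject01.bdf", preload=True)

# Re-reference offline to the algebraic average of the two mastoids
# (the channel names are assumptions).
raw.set_eeg_reference(["M1", "M2"])

# 0.1-40 Hz band-pass, as described in the text.
raw.filter(l_freq=0.1, h_freq=40.0)

# Segment into 2200 ms epochs starting 200 ms before target onset.
events = mne.find_events(raw, stim_channel="Status")  # Biosemi trigger channel
epochs = mne.Epochs(raw, events, event_id={"target": 1},   # placeholder code
                    tmin=-0.2, tmax=2.0, baseline=(-0.2, 0.0),
                    reject=dict(eeg=100e-6))               # simple amplitude criterion
evoked = epochs.average()  # per-condition averaging would follow
</preformat>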
</sec>
<sec id="s2e">
<title>E. Data Analyses</title>
<p>Behavioral data (RTs and arcsin-transformed Error Rates) were analyzed using a three-way ANOVA with within-subject factors: Attentional Task (Linguistic vs. Musical), Word (same vs. different), and Melody (same vs. different). A four-way ANOVA with factors Task Order, Attentional Task, Word, and Melody was computed to determine if results were influenced by the order in which participants performed the two tasks: Linguistic task first or Musical Task first. Although a main effect of Order was found, showing that the second task (whether Linguistic or Musical) was performed better than the first task (thereby reflecting increased familiarity with the experimental procedure), no significant interactions of Order with other factors were found, so this factor was not considered further.</p>
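<p>A minimal sketch of this behavioral analysis in Python (the data file and column names are placeholders, and the arcsine-square-root form of the transform is an assumption; the original analysis software is not specified):</p>
<preformat>
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# One row per subject x Task x Word x Melody cell, with columns:
# subject, task, word, melody, rt (ms), error_rate (proportion).
df = pd.read_csv("behavior.csv")

# Variance-stabilizing arcsine transform of the error proportions.
df["err_asin"] = np.arcsin(np.sqrt(df["error_rate"]))

# Three-way repeated-measures ANOVA: Attentional Task x Word x Melody.
for dv in ("rt", "err_asin"):
    res = AnovaRM(df, depvar=dv, subject="subject",
                  within=["task", "word", "melody"]).fit()
    print(dv, res.anova_table, sep="\n")
</preformat>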
<p>Mean ERP amplitudes to the target words were measured in several latency bands (50–150, 150–300, 300–500, 600–800, 800–1000 ms) determined both from visual inspection and from results of consecutive analyses of 50-ms latency windows from 0 to 2000 ms. Eight regions of interest were defined by first separating the electrodes into two groups: midlines (8) and laterals (24), and then defining subsets of electrodes for analysis. The midlines were divided into two regions of interest: fronto-central (Fz, FC1, FC2, Cz) and parieto-occipital (CP1, CP2, Pz, Oz). The lateral electrodes were separated into 6 regions of interest: left frontal (FP1, AF3, F3, F7), left temporal (FC5, T7, CP5, C3), left parietal (P3, P7, PO3, O1), right frontal (FP2, AF4, F4, F8), right temporal (FC6, T8, CP6, C4) and right parietal (P4, P8, PO4, O2). For the midline electrodes, an ANOVA with factors Attentional Task (Linguistic vs. Musical), Word (same vs. different), Melody (same vs. different) and Region (fronto-central vs. parieto-occipital) was computed on the mean amplitudes of the ERPs in each latency band. A similar ANOVA was computed for the lateral electrodes, with Attentional Task, Word, Melody, Hemisphere (left vs. right) and Region (frontal vs. temporal vs. parietal) as factors. Results of the ANOVAs are reported only when significant at p<0.05. All p values for ERP results were adjusted with the Greenhouse-Geisser epsilon correction for nonsphericity when necessary. For both behavioral and ERP results, when interactions between two or more factors were significant, pairwise post-hoc comparisons between relevant condition pairs were computed and thresholded by Bonferroni correction. When post-hoc analysis revealed that none of the simple effects constituting an interaction reached the threshold for Bonferroni significance, the interaction was not considered further.</p>
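<p>The dependent variable entering these ANOVAs can be sketched as follows (a hypothetical helper assuming one mne.Evoked object per participant and condition; the band and region definitions are taken from the text):</p>
<preformat>
BANDS = {"50-150": (0.05, 0.15), "150-300": (0.15, 0.30),
         "300-500": (0.30, 0.50), "600-800": (0.60, 0.80),
         "800-1000": (0.80, 1.00)}                      # seconds
MIDLINE_ROIS = {"fronto-central": ["Fz", "FC1", "FC2", "Cz"],
                "parieto-occipital": ["CP1", "CP2", "Pz", "Oz"]}

def mean_amplitude(evoked, roi_channels, tmin, tmax):
    """Mean ERP amplitude over a set of channels and a latency window."""
    picks = [evoked.ch_names.index(ch) for ch in roi_channels]
    data = evoked.copy().crop(tmin, tmax).data[picks]
    return data.mean()  # average over channels and time points

# Example usage (assuming `evokeds[condition]` holds one Evoked object):
# amp = mean_amplitude(evokeds["W=M="], MIDLINE_ROIS["fronto-central"],
#                      *BANDS["300-500"])
</preformat>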
</sec>
</sec>
<sec id="s3">
<title>Results</title>
<sec id="s3a">
<title>Behavioral data</title>
<p>Mean Reaction times and Error rates are reported in
<xref ref-type="table" rid="pone-0009889-t001">Table 1</xref>
.</p>
<table-wrap id="pone-0009889-t001" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0009889.t001</object-id>
<label>Table 1</label>
<caption>
<title>Behavioral data.</title>
</caption>
<alternatives>
<graphic id="pone-0009889-t001-1" xlink:href="pone.0009889.t001"></graphic>
<table frame="hsides" rules="groups">
<colgroup span="1">
<col align="left" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
</colgroup>
<thead>
<tr>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Linguistic Task</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Musical Task</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Condition</td>
<td align="left" rowspan="1" colspan="1">W =  M = </td>
<td align="left" rowspan="1" colspan="1">W =  M≠</td>
<td align="left" rowspan="1" colspan="1">W≠ M = </td>
<td align="left" rowspan="1" colspan="1">W≠ M≠</td>
<td align="left" rowspan="1" colspan="1">W =  M = </td>
<td align="left" rowspan="1" colspan="1">W =  M≠</td>
<td align="left" rowspan="1" colspan="1">W≠ M = </td>
<td align="left" rowspan="1" colspan="1">W≠ M≠</td>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">RTs</td>
<td align="left" rowspan="1" colspan="1">718 (151)</td>
<td align="left" rowspan="1" colspan="1">756 (162)</td>
<td align="left" rowspan="1" colspan="1">786 (131)</td>
<td align="left" rowspan="1" colspan="1">783 (153)</td>
<td align="left" rowspan="1" colspan="1">919 (168)</td>
<td align="left" rowspan="1" colspan="1">1003 (153)</td>
<td align="left" rowspan="1" colspan="1">1129 (221)</td>
<td align="left" rowspan="1" colspan="1">1109 (255)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">% Err</td>
<td align="left" rowspan="1" colspan="1">0.8 (1.5)</td>
<td align="left" rowspan="1" colspan="1">0.6 (1.3)</td>
<td align="left" rowspan="1" colspan="1">1.0 (2.1)</td>
<td align="left" rowspan="1" colspan="1">1.1 (1.9)</td>
<td align="left" rowspan="1" colspan="1">0.8 (1.8)</td>
<td align="left" rowspan="1" colspan="1">3.7 (5.3)</td>
<td align="left" rowspan="1" colspan="1">3.5 (4.5)</td>
<td align="left" rowspan="1" colspan="1">8.9 (9.1)</td>
</tr>
</tbody>
</table>
</alternatives>
<table-wrap-foot>
<fn id="nt101">
<label></label>
<p>Mean Reaction Times (RTs) and error rates (in %) for each of the 4 experimental conditions (W = M = : same word, same melody; W = M≠: same word, different melody; W≠M = : different word, same melody; W≠M≠: different word, different melody), in the Linguistic and Musical tasks. SD is indicated in parentheses.</p>
</fn>
</table-wrap-foot>
</table-wrap>
<p>The ANOVA on RTs showed that participants were slower in the Musical Task (1040 ms) than in the Linguistic Task (761 ms; main effect of Task [
<italic>F</italic>
(1,20) = 72.26, p<0.001]). Moreover, RTs were slower for W≠ (952 ms) than W =  (849 ms; main effect of Word [
<italic>F</italic>
(1,20) = 88.46, p<0.001]). Finally, the Task x Word interaction was significant [
<italic>F</italic>
(1,20) = 22.76, p<0.001]: in the Musical Task participants were slower for W≠ (1119 ms) than for W =  (961 ms; simple effect of Word: posthoc p<0.001) but this difference was not significant in the Linguistic Task. The Task x Melody interaction was not significant but the Word x Melody interaction was significant [
<italic>F</italic>
(1,20) = 18.44, p<0.001]: RTs were slower for M≠ (879 ms) than for M =  (818 ms) only when words were same (W = ; posthoc p<0.001). By contrast, RTs were slower for W≠ than for W =  regardless of whether melodies were same (M = ) or different (M≠, both posthoc p's<0.001).</p>
<p>The ANOVA on Error rates showed that participants made more errors in the Musical Task (4.21%) than in the Linguistic Task (0.87%) [main effect of Task:
<italic>F</italic>
(1,20) = 20.95, p<0.001]. Moreover, both the Task x Word and the Task x Melody interactions were significant [F(1,20) = 9.53, p = 0.006 and F(1,20) = 9.21, p = 0.006, respectively]. In the Musical Task participants made more errors for W≠ (6.19%) than for W =  (2.22%; simple effect of Word: posthoc p<0.001) and for M≠ (6.27%) than for M =  (2.14%; simple effect of Melody: posthoc p<0.001), but these differences were not significant in the Linguistic Task. The Word x Melody interaction was not significant.</p>
</sec>
<sec id="s3b">
<title>ERP data</title>
<p>Results of the ANOVAs on ERP data in the different latency ranges are presented in
<xref ref-type="table" rid="pone-0009889-t002">Table 2</xref>
. When the main effects or relevant interactions were significant, results of pairwise posthoc comparisons are reported in the text (except for posthoc results of the Word by Melody interaction, which are reported in
<xref ref-type="table" rid="pone-0009889-t003">Table 3</xref>
). The Word effect and the Melody effect in each task are illustrated on
<xref ref-type="fig" rid="pone-0009889-g002">Figures 2</xref>
and
<xref ref-type="fig" rid="pone-0009889-g003">3</xref>
, respectively.</p>
<fig id="pone-0009889-g002" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0009889.g002</object-id>
<label>Figure 2</label>
<caption>
<title>Word effect.</title>
<p>Grand average ERPs timelocked to the onset of targets with the same word as the prime (solid line) or a different word than the prime (dashed line), in the Linguistic Task (A) and Musical Task (B). Selected traces from 9 electrodes are presented. In this figure, amplitude (in microvolts) is plotted on the ordinate (negative up) and the time (in milliseconds) is on the abscissa.</p>
</caption>
<graphic xlink:href="pone.0009889.g002"></graphic>
</fig>
<fig id="pone-0009889-g003" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0009889.g003</object-id>
<label>Figure 3</label>
<caption>
<title>Melody effect.</title>
<p>Grand average ERPs timelocked to the onset of targets with the same melody as the prime (solid line) or a different melody than the prime (dashed line), in the Linguistic Task (A) and Musical Task (B). Selected traces from 9 electrodes are presented. In this figure, amplitude (in microvolts) is plotted on the ordinate (negative up) and the time (in milliseconds) is on the abscissa.</p>
</caption>
<graphic xlink:href="pone.0009889.g003"></graphic>
</fig>
<table-wrap id="pone-0009889-t002" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0009889.t002</object-id>
<label>Table 2</label>
<caption>
<title>ANOVA results on mean amplitudes of ERPs.</title>
</caption>
<alternatives>
<graphic id="pone-0009889-t002-2" xlink:href="pone.0009889.t002"></graphic>
<table frame="hsides" rules="groups">
<colgroup span="1">
<col align="left" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
</colgroup>
<thead>
<tr>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Latency (ms)</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">50–150</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">150–300</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">300–500</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">600–800</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">800–1000</td>
<td align="left" rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Factors</td>
<td align="left" rowspan="1" colspan="1">df</td>
<td align="left" rowspan="1" colspan="1">F</td>
<td align="left" rowspan="1" colspan="1">p</td>
<td align="left" rowspan="1" colspan="1">F</td>
<td align="left" rowspan="1" colspan="1">p</td>
<td align="left" rowspan="1" colspan="1">F</td>
<td align="left" rowspan="1" colspan="1">p</td>
<td align="left" rowspan="1" colspan="1">F</td>
<td align="left" rowspan="1" colspan="1">p</td>
<td align="left" rowspan="1" colspan="1">F</td>
<td align="left" rowspan="1" colspan="1">p</td>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>Midlines</bold>
</td>
<td align="left" rowspan="1" colspan="1">W</td>
<td align="left" rowspan="1" colspan="1">1,20</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">5.89</td>
<td align="left" rowspan="1" colspan="1">0.025</td>
<td align="left" rowspan="1" colspan="1">50.10</td>
<td align="left" rowspan="1" colspan="1"><0.001</td>
<td align="left" rowspan="1" colspan="1">5.51</td>
<td align="left" rowspan="1" colspan="1">0.029</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">M</td>
<td align="left" rowspan="1" colspan="1">1,20</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">6.78</td>
<td align="left" rowspan="1" colspan="1">0.017</td>
<td align="left" rowspan="1" colspan="1">10.99</td>
<td align="left" rowspan="1" colspan="1">0.004</td>
<td align="left" rowspan="1" colspan="1">7.58</td>
<td align="left" rowspan="1" colspan="1">0.012</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">T×W</td>
<td align="left" rowspan="1" colspan="1">1,20</td>
<td align="left" rowspan="1" colspan="1">4.9
<xref ref-type="table-fn" rid="nt103"></xref>
</td>
<td align="left" rowspan="1" colspan="1">0.039
<xref ref-type="table-fn" rid="nt103"></xref>
</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">5.53</td>
<td align="left" rowspan="1" colspan="1">0.029</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">W×R</td>
<td align="left" rowspan="1" colspan="1">1,20</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">21.31</td>
<td align="left" rowspan="1" colspan="1"><0.001</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">M×R</td>
<td align="left" rowspan="1" colspan="1">1,20</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">6.52</td>
<td align="left" rowspan="1" colspan="1">0.019</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">W×M</td>
<td align="left" rowspan="1" colspan="1">1,20</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">7.14</td>
<td align="left" rowspan="1" colspan="1">0.015</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">T×W×R</td>
<td align="left" rowspan="1" colspan="1">1,20</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">4.65</td>
<td align="left" rowspan="1" colspan="1">0.044</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>Laterals</bold>
</td>
<td align="left" rowspan="1" colspan="1">W</td>
<td align="left" rowspan="1" colspan="1">1,20</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">4.40</td>
<td align="left" rowspan="1" colspan="1">0.049</td>
<td align="left" rowspan="1" colspan="1">28.08</td>
<td align="left" rowspan="1" colspan="1"><0.001</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">M</td>
<td align="left" rowspan="1" colspan="1">1,20</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">7.06</td>
<td align="left" rowspan="1" colspan="1">0.015</td>
<td align="left" rowspan="1" colspan="1">6.08</td>
<td align="left" rowspan="1" colspan="1">0.023</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">T×W</td>
<td align="left" rowspan="1" colspan="1">1,20</td>
<td align="left" rowspan="1" colspan="1">8.85
<xref ref-type="table-fn" rid="nt103"></xref>
</td>
<td align="left" rowspan="1" colspan="1">0.008
<xref ref-type="table-fn" rid="nt103"></xref>
</td>
<td align="left" rowspan="1" colspan="1">4.90
<xref ref-type="table-fn" rid="nt103"></xref>
</td>
<td align="left" rowspan="1" colspan="1">0.039
<xref ref-type="table-fn" rid="nt103"></xref>
</td>
<td align="left" rowspan="1" colspan="1">15.60</td>
<td align="left" rowspan="1" colspan="1"><0.001</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">4.52
<xref ref-type="table-fn" rid="nt103"></xref>
</td>
<td align="left" rowspan="1" colspan="1">0.046
<xref ref-type="table-fn" rid="nt103"></xref>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">W×R</td>
<td align="left" rowspan="1" colspan="1">2,40</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">26.01</td>
<td align="left" rowspan="1" colspan="1"><0.001</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">W×M</td>
<td align="left" rowspan="1" colspan="1">1,20</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">7.19</td>
<td align="left" rowspan="1" colspan="1">0.014</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">W×H</td>
<td align="left" rowspan="1" colspan="1">1,20</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">5.79</td>
<td align="left" rowspan="1" colspan="1">0.026</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">W×H×R</td>
<td align="left" rowspan="1" colspan="1">2,40</td>
<td align="left" rowspan="1" colspan="1">6.33</td>
<td align="left" rowspan="1" colspan="1">0.007</td>
<td align="left" rowspan="1" colspan="1">4.76</td>
<td align="left" rowspan="1" colspan="1">0.018</td>
<td align="left" rowspan="1" colspan="1">3.88</td>
<td align="left" rowspan="1" colspan="1">0.036</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">T×W×R</td>
<td align="left" rowspan="1" colspan="1">2,40</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">3.98</td>
<td align="left" rowspan="1" colspan="1">0.046</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">T×M×R</td>
<td align="left" rowspan="1" colspan="1">2,40</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">3.58</td>
<td align="left" rowspan="1" colspan="1">0.048</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
</tr>
</tbody>
</table>
</alternatives>
<table-wrap-foot>
<fn id="nt102">
<label></label>
<p>Results of ANOVAs computed on midline and lateral electrodes for main effects, 2-way and 3-way interactions. Only significant effects (p<0.05) are shown. Abbreviations: df, degrees of freedom; T, Attentional Task; W, Word; M, Melody; H, Hemisphere; R, Region.</p>
</fn>
<fn id="nt103">
<label></label>
<p>† Pairwise comparisons of interest did not meet the criteria for Bonferroni significance and thus the interaction is not discussed further in the text.</p>
</fn>
</table-wrap-foot>
</table-wrap>
<table-wrap id="pone-0009889-t003" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0009889.t003</object-id>
<label>Table 3</label>
<caption>
<title>Posthoc comparisons for Word x Melody interaction.</title>
</caption>
<alternatives>
<graphic id="pone-0009889-t003-3" xlink:href="pone.0009889.t003"></graphic>
<table frame="hsides" rules="groups">
<colgroup span="1">
<col align="left" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
</colgroup>
<thead>
<tr>
<td align="left" rowspan="1" colspan="1">Pairwise Comparison</td>
<td align="left" rowspan="1" colspan="1">Midlines</td>
<td align="left" rowspan="1" colspan="1">Laterals</td>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">W = M =  vs. W = M≠</td>
<td align="left" rowspan="1" colspan="1">0.006*</td>
<td align="left" rowspan="1" colspan="1">0.004*</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">W = M =  vs. W≠M = </td>
<td align="left" rowspan="1" colspan="1"><0.001*</td>
<td align="left" rowspan="1" colspan="1"><0.001*</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">W = M =  vs. W≠M≠</td>
<td align="left" rowspan="1" colspan="1"><0.001*</td>
<td align="left" rowspan="1" colspan="1"><0.001*</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">W = M≠ vs. W≠M = </td>
<td align="left" rowspan="1" colspan="1">0.004*</td>
<td align="left" rowspan="1" colspan="1">0.032</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">W = M≠ vs. W≠M≠</td>
<td align="left" rowspan="1" colspan="1">0.02</td>
<td align="left" rowspan="1" colspan="1">0.092</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">W≠M =  vs. W≠M≠</td>
<td align="left" rowspan="1" colspan="1">0.493</td>
<td align="left" rowspan="1" colspan="1">0.596</td>
</tr>
</tbody>
</table>
</alternatives>
<table-wrap-foot>
<fn id="nt104">
<label></label>
<p>Results of pairwise posthoc comparisons for the Word x Melody interaction, in the 300–500 ms latency band. Pairs that meet the criteria for significance with the Bonferroni threshold (p = 0.0083) are indicated with *.</p>
</fn>
</table-wrap-foot>
</table-wrap>
<p>
<italic>Between 50 and 150 ms</italic>
, different words (W≠) elicited a larger N100 component than same words (W = ) over the right frontal region (Word x Hemisphere x Region interaction; p<0.001). This effect was larger in the Linguistic Task than in the Musical Task at lateral electrodes (p = 0.021; see
<xref ref-type="fig" rid="pone-0009889-g002">Figure 2</xref>
), but this result did not reach significance after Bonferroni correction.</p>
<p>
<italic>Between 150 and 300 ms</italic>
, W≠ elicited a smaller P200 component than W =  (main effect of Word at both midline and lateral electrodes). This effect was more prominent over bilateral frontal and left parietal regions (Word x Hemisphere x Region; all p<0.001). Again, this effect was larger in the Linguistic than in the Musical Task at lateral electrodes (p = 0.011; see
<xref ref-type="fig" rid="pone-0009889-g002">Figure 2</xref>
) but this result was only marginally significant with the Bonferroni correction.</p>
<p>
<italic>Between 300 and 500 ms</italic>
, W≠ elicited a larger N400 component than W =  at both midline and lateral electrodes (main effect of Word), with larger differences over parieto-occipital than fronto-central midline electrodes (Word x Region interaction: both p<0.001), and over parietal and temporal lateral regions (Word x Region, both p<0.001), with a slight right hemisphere predominance (Word x Hemisphere x Region, both p<0.001). The N400 effect (W≠ minus W = ) was larger at lateral electrodes in the Linguistic (p<0.001) than in the Musical Task (p = 0.004; Task x Word) and at midlines (both p<0.001), with a centro-parietal scalp distribution in the Linguistic Task and a parietal distribution in the Musical Task (Task x Word x Region at midline and lateral electrodes, all p<0.001).</p>
<p>M≠ elicited larger N400-like components than M =  (main effect of Melody at midline and lateral electrodes; see
<xref ref-type="fig" rid="pone-0009889-g003">Figure 3</xref>
). Moreover, the Word x Melody interaction was significant at midline and at lateral electrodes: the Melody effect (M≠ vs. M = ) was only significant when Word was same (W = ) but not when Word was different (W≠; see
<xref ref-type="table" rid="pone-0009889-t003">Table 3</xref>
for all posthoc p-values for the Word x Melody interaction). Likewise, the Word effect was only significant when Melody was same (M = ) but not when Melody was different (M≠; see
<xref ref-type="fig" rid="pone-0009889-g004">Figure 4</xref>
, which shows the four orthogonal conditions averaged over both tasks). Furthermore, negative components in the W = M≠, W≠M = , and W≠M≠ conditions were larger than in the W = M =  condition. At the midline electrodes, negative components were also larger in the W≠M =  than in the W = M≠ condition.</p>
<fig id="pone-0009889-g004" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0009889.g004</object-id>
<label>Figure 4</label>
<caption>
<title>Word by Melody interaction.</title>
<p>(A) For each of the 4 experimental conditions (averaged across both tasks because there was no Task x Word x Melody interaction): the reaction time in milliseconds (gray bars, left Y-axis) and the magnitude (µV) of the mean amplitude of the ERPs in the 300–500 ms latency range, averaged across all electrodes (black bars, right Y-axis). (B) ERPs associated with the 4 experimental conditions (averaged across both tasks because there was no Task x Word x Melody interaction) for electrodes Cz (top) and Pz (bottom). Solid line: same word, same melody; dotted line: same word, different melody; dashed line: different word, same melody; dashed-dotted line: different word, different melody.</p>
</caption>
<graphic xlink:href="pone.0009889.g004"></graphic>
</fig>
<p>To further test the Word by Melody interaction, difference waves were computed (on mean amplitudes) for each of the following comparisons:
<italic>d1 = </italic>
W≠M =  minus W = M =  (effect of Word when Melody is same);
<italic>d2 = </italic>
W = M≠ minus W = M =  (effect of Melody when Word is same);
<italic>d3</italic>
 = W≠M≠ minus W = M =  (effect of different Word and different Melody). If words and melodies are processed independently, then
<italic>d1</italic>
+
<italic>d2</italic>
should be equal to
<italic>d3</italic>
. ANOVAs with factor Data (double variation condition [
<italic>d3</italic>
] vs. additive model [
<italic>d1+d2</italic>
]) together with the other factors of interest (for midlines: Attentional Task and Region and for laterals: Attentional Task, Hemisphere, and Region) were carried out. Results showed that the sum of the ERP effects of the simple variations (d1 + d2) was significantly larger than the ERP effects in the double variations condition [d3; midline electrodes,
<italic>F</italic>
(1,20) = 7.14, p = 0.015; lateral electrodes,
<italic>F</italic>
(1,20) = 7.19, p = 0.014]; see
<xref ref-type="fig" rid="pone-0009889-g005">Figure 5</xref>
.</p>
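<p>Schematically, the test of the additive model can be written as follows (a sketch assuming that `amp` maps each condition label to one mean 300–500 ms amplitude per participant; the statistical comparison itself is the ANOVA with factor Data described above):</p>
<preformat>
import numpy as np

def additivity_residual(amp):
    """Difference between the summed simple effects and the double effect."""
    d1 = amp["W≠M="] - amp["W=M="]   # Word effect when Melody is same
    d2 = amp["W=M≠"] - amp["W=M="]   # Melody effect when Word is same
    d3 = amp["W≠M≠"] - amp["W=M="]   # double-variation effect
    return (d1 + d2) - d3            # zero (on average) under independence

# residual = additivity_residual(amp)
# A reliable departure from zero, as reported in the text, argues against
# the additive (independence) model.
</preformat>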
<fig id="pone-0009889-g005" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0009889.g005</object-id>
<label>Figure 5</label>
<caption>
<title>Additive model test.</title>
<p>Mean amplitude (in µV) of ERP difference waves in the 300–500 ms latency band, for double variations observed (W≠M≠ minus W = M = ) and the modeled sum of simple variations (W≠M =  minus W = M = ) + (W = M≠ minus W = M = ), at midline electrodes (dark gray bars) and lateral electrodes (light gray bars).</p>
</caption>
<graphic xlink:href="pone.0009889.g005"></graphic>
</fig>
<p>
<italic>Between 600 and 800 ms</italic>
, W≠ still elicited more negative ERPs than W =  (main effect of Word at midline electrodes) but M≠ elicited larger late positive components than M =  (main effect of Melody at midline and lateral electrodes, see
<xref ref-type="fig" rid="pone-0009889-g003">Figure 3</xref>
). At the midline electrodes, this effect was larger over the fronto-central region than the parieto-occipital region (both p<0.001; Melody x Region); furthermore, at lateral electrodes, the effect was larger over temporal and parietal regions (both p<0.001) in the Linguistic Task but was larger over frontal regions (p<0.001) in the Musical Task (Task x Melody x Region).</p>
<p>
<italic>Between 800 and 1000 ms</italic>
, W≠ still elicited larger negativities than W =  over the right hemisphere (p = 0.002; Word x Hemisphere). This effect was larger in the Linguistic than in the Musical Task (p = 0.017) but this difference did not reach significance with the Bonferroni correction. Finally, M≠ still elicited larger positive components than M =  (main effect of Melody at midline electrodes).</p>
</sec>
<sec id="s3c">
<title>Scalp distribution of the N1, P2, and N400 components (Word effects)</title>
<p>ERPs in the N1, P2, and N400 latency bands were more negative for different words than for same words. These effects may therefore reflect either an early onset of the N400 effect or three distinct components. Since different scalp distributions were found in each of the three latency bands tested separately, it was of interest to directly compare the Word effect (W≠ minus W = ) across latency bands. To this end, we conducted additional ANOVAs on the difference waves, with factors Latency Band (50–150 ms vs. 150–300 ms vs. 300–500 ms), Hemisphere (left vs. right), and Region (frontal vs. temporal vs. parietal). Results showed a significant Latency Band x Region interaction [
<italic>F</italic>
(4,80) = 43.15, p<0.001]. While there were no significant differences in scalp distribution between the effect of Word in the 50–150 ms (N1) and in the 150–300 ms (P2) latency bands, the topography of the N400 (300–500 ms) was different from both the N1 and the P2. Pairwise posthoc comparisons showed that the N400 had a more parietal distribution compared to the N1 (p<0.001) and the P2 (p<0.001). The Latency x Hemisphere x Region interaction was not significant.</p>
<p>To ensure that the comparison of scalp topographies was not confounded by differences in the amplitude of the ERP effects, the same statistical analysis was then repeated on data that had undergone vector scaling (cf.
<xref ref-type="bibr" rid="pone.0009889-McCarthy1">[72]</xref>
, but see also
<xref ref-type="bibr" rid="pone.0009889-Urbach1">[73]</xref>
for a discussion of the limitations of this method). The Latency x Region interaction was again significant [
<italic>F</italic>
(4,80) = 21.22, p<0.001], and pairwise posthoc tests showed the same pattern of results as in the unscaled data. This analysis therefore confirmed that the frontal distribution of the early negativities (N1/P2 complex) was significantly different from the parietal distribution of the N400.</p>
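<p>One common form of the vector-scaling procedure referred to above (a sketch of the McCarthy &amp; Wood approach cited as [72]; the array layout is an assumption) is:</p>
<preformat>
import numpy as np

def vector_scale(amplitudes):
    """Scale each topography by its vector length before comparing shapes.

    `amplitudes` is assumed to have shape (n_subjects, n_electrodes):
    each subject's difference-wave topography is divided by the square
    root of its summed squared amplitudes, so that latency bands can be
    compared without being confounded by overall effect size."""
    norms = np.linalg.norm(amplitudes, axis=1, keepdims=True)
    return amplitudes / norms

# scaled = {band: vector_scale(word_effect[band])
#           for band in ("50-150", "150-300", "300-500")}
# The Latency Band x Region ANOVA is then recomputed on the scaled values.
</preformat>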
</sec>
</sec>
<sec id="s4">
<title>Discussion</title>
<sec id="s4a">
<title>Processing the words</title>
<p>As predicted on the basis of several results in both the behavioral (e.g.,
<xref ref-type="bibr" rid="pone.0009889-Meyer1">[59]</xref>
) and neurolinguistic literatures (e.g.,
<xref ref-type="bibr" rid="pone.0009889-Besson2">[40]</xref>
,
<xref ref-type="bibr" rid="pone.0009889-Holcomb1">[54]</xref>
,
<xref ref-type="bibr" rid="pone.0009889-Kutas1">[55]</xref>
,
<xref ref-type="bibr" rid="pone.0009889-McCallum1">[57]</xref>
), sung word targets that were different from sung word primes (W≠) were associated with lower levels of performance (more errors and slower RTs) and with larger N400 components than same words (W = ). Thus, as noted in
<xref ref-type="bibr" rid="pone.0009889-Besson2">[40]</xref>
, similar processes seem to be involved in accessing the meaning of spoken and sung words. One could argue that access to word meaning was not necessary to perform the Linguistic Task and that participants could have based their decision on phonological cues. However, this is unlikely as previous work on spoken words has demonstrated that word meaning is processed automatically in phonological tasks
<xref ref-type="bibr" rid="pone.0009889-Perrin1">[74]</xref>
,
<xref ref-type="bibr" rid="pone.0009889-Relander1">[75]</xref>
, prosodic tasks
<xref ref-type="bibr" rid="pone.0009889-Astsano2">[76]</xref>
<xref ref-type="bibr" rid="pone.0009889-Magne1">[78]</xref>
, during passive listening in the waking state
<xref ref-type="bibr" rid="pone.0009889-Perrin1">[74]</xref>
,
<xref ref-type="bibr" rid="pone.0009889-Relander1">[75]</xref>
, and even during sleep
<xref ref-type="bibr" rid="pone.0009889-Ibez1">[79]</xref>
.</p>
<p>Moreover, the finding that an N400 word effect developed in the Musical Task as well, with similar onset latency and duration (until around 800 ms post-target onset) and a broadly similar scalp distribution in the 300–500 ms latency range (centro-parietal in the Linguistic Task and parietal in the Musical Task; see
<xref ref-type="fig" rid="pone-0009889-g002">Figure 2</xref>
), also provides evidence in favor of the automatic processing of sung word meaning regardless of the direction of attention. The smaller size of the N400 effect in the Musical than in the Linguistic Task was most likely due to fewer attentional resources being available for processing words in the Musical Task (attention focused on the melody) than in the Linguistic Task (attention focused on words), as has been argued previously
<xref ref-type="bibr" rid="pone.0009889-Relander1">[75]</xref>
,
<xref ref-type="bibr" rid="pone.0009889-Astsano2">[76]</xref>
,
<xref ref-type="bibr" rid="pone.0009889-Magne1">[78]</xref>
.</p>
<p>Early Word effects were also found, with larger N100 components in the 50–150 ms latency band and smaller P200 components in the 150–300 ms latency band over frontal regions for different (W≠) than for same words (W = ; see
<xref ref-type="fig" rid="pone-0009889-g002">Figure 2</xref>
). Even though both same and different words started with the same first syllable, which lasted for 250 ms on average, subtle articulation differences (in particular, in vowel quality and pitch of the sung syllable) were most likely present in the first syllable of different target words (e.g., the “me” in “messager” does not sound identical to the “me” in “mélodie”). Moreover, even though the post-hoc comparison for the Task by Word interaction was not significant after Bonferroni correction between 50–150 ms and between 150–300 ms (probably because task differences were too small), it is clear from
<xref ref-type="fig" rid="pone-0009889-g002">Figure 2</xref>
that the N100 and P200 effects were primarily present when participants attended to the words. Attending to the linguistic dimension may have amplified participants' sensitivity to small differences in co-articulation, which in turn influenced the early perception of sung words, just as subtle phonetic differences modulate the N100 in speech perception
<xref ref-type="bibr" rid="pone.0009889-Digeser1">[80]</xref>
. This interpretation is supported by the vowel harmony phenomenon described by Nguyen & Fagyal
<xref ref-type="bibr" rid="pone.0009889-Nguyen2">[81]</xref>
, in which the pronunciation of the vowel of the first syllable assimilates to the anticipated vowel of the second syllable, which was indeed different in the W≠ conditions. We also considered the idea that the early N100 and P200 effects were the leading edge of the N400 component, in light of previous reports demonstrating the early onset of the auditory N400 effect
<xref ref-type="bibr" rid="pone.0009889-Hagoort1">[82]</xref>
, possibly reflecting the fact that lexico-semantic processing starts before the spoken word can be fully identified
<xref ref-type="bibr" rid="pone.0009889-VanPetten1">[83]</xref>
. However, this interpretation seems unlikely in view of the results of the scalp distribution analysis that demonstrated a significant difference between the frontally-distributed early negativities and parietally-distributed N400.</p>
</sec>
<sec id="s4b">
<title>Processing the melody</title>
<p>Different melodies (M≠) compared to same melodies (M = ) elicited larger negative components between 300 and 500 ms, followed by larger late positive components in the 600–1000 ms latency band.</p>
<p>The P600 component was expected based on previous reports showing that unexpected melodic/harmonic variations (e.g.,
<xref ref-type="bibr" rid="pone.0009889-Besson4">[61]</xref>
<xref ref-type="bibr" rid="pone.0009889-Verleger1">[64]</xref>
,
<xref ref-type="bibr" rid="pone.0009889-Janata1">[84]</xref>
) elicit effects belonging to the P300 family of components. These effects are generally interpreted as reflecting the processing of surprising and task-relevant stimuli
<xref ref-type="bibr" rid="pone.0009889-Kutas3">[85]</xref>
<xref ref-type="bibr" rid="pone.0009889-Donchin1">[87]</xref>
and are indicative of the allocation of attention and memory resources (see Polich
<xref ref-type="bibr" rid="pone.0009889-Polich1">[88]</xref>
for a recent review and discussion of functionally divergent P3 subcomponents). The longer onset latency of the positive effect in the present experiment than in previous studies is probably due to the fact that the first note of the melody was the same in both the M≠ and M =  conditions, with the second note being sung at around 250 ms post-onset of the target. Interestingly, the task did influence the scalp distribution of the late positivity, which was frontal when the melodies were explicitly processed (Musical Task) and parietal when the melodies were implicitly processed (Linguistic Task). The frontal scalp distribution of the positive component in the Musical Task is consistent with the scalp distribution of the P3a component reported for chord sequences ending with dissonant harmonies
<xref ref-type="bibr" rid="pone.0009889-Janata1">[84]</xref>
and harmonically acceptable chords with deviant timbre
<xref ref-type="bibr" rid="pone.0009889-Carrion1">[89]</xref>
. The parietal scalp distribution of the positive component in the Linguistic Task is consistent with previous results when participants were asked to pay attention to both lyrics and tunes
<xref ref-type="bibr" rid="pone.0009889-Besson2">[40]</xref>
.</p>
<p>Finally, it is interesting to note that late positivities, i.e., the late positive potential (LPP), have also been observed during the evaluation of affective stimuli
<xref ref-type="bibr" rid="pone.0009889-Cunningham1">[90]</xref>
,
<xref ref-type="bibr" rid="pone.0009889-Pastor1">[91]</xref>
, such as tones sung with a sad voice presented simultaneously with sad pictures
<xref ref-type="bibr" rid="pone.0009889-Spreckelmeyer1">[92]</xref>
. In the present study, the musical dimension of the sung words, although minimal, may have called upon emotional processes, reflected by the late positivities. Further work on the emotional response to singing may clarify these issues.</p>
<p>One of the most interesting findings of the present study is that, prior to the late positive components, M≠ also elicited widely distributed, larger negative components than M= in the 300–500 ms latency band in both the Linguistic and Musical tasks (see
<xref ref-type="fig" rid="pone-0009889-g003">Figure 3</xref>
). This negativity bears the scalp distribution and peak latency typically seen for the N400 component. Indeed, N400s have recently been associated with musical incongruities related to memory and emotional meaning, such as in familiar melodies containing an unexpected but harmonically congruous note
<xref ref-type="bibr" rid="pone.0009889-Miranda1">[66]</xref>
, or when a mismatch ensues between musical chords and emotion words (e.g., a dissonant chord target primed by the visually presented word “love”)
<xref ref-type="bibr" rid="pone.0009889-Steinbeis2">[18]</xref>
. However, the N400 Melody effect in the present study was slightly smaller in amplitude than the N400 Word effect at the midline electrodes. The difference between these effects may be due to an overlap with the subsequent late positive component generated in the M≠ but not in the W≠ condition, but could also result from greater intrinsic salience of the linguistic dimension in songs
<xref ref-type="bibr" rid="pone.0009889-Serafine1">[30]</xref>
,
<xref ref-type="bibr" rid="pone.0009889-Peretz3">[31]</xref>
.</p>
<p>Thus, in both attentional tasks, words sung on different melodies (M≠) were associated with larger N400 components than words sung on the same melodies (M=). Since the musical melody provides the intonational contour of the lyrics in song, it has been suggested that such prosodic-like variations in sung lyrics could explain why words in song are better recognized with their original melodies than with a different melody
<xref ref-type="bibr" rid="pone.0009889-Serafine2">[93]</xref>
. In fact, several recent studies show that words spoken with prosodically incongruous patterns are associated with increased amplitudes of the N400 component followed by late positivities
<xref ref-type="bibr" rid="pone.0009889-Magne1">[78]</xref>
,
<xref ref-type="bibr" rid="pone.0009889-Mietz1">[94]</xref>
,
<xref ref-type="bibr" rid="pone.0009889-SchmidtKassow1">[95]</xref>
. Thus, words sung on different melodies may hinder lexical access in a similar manner as unexpected prosodic patterns in spoken language. If familiarity is established through repeated listening to a song, which may reinforce prosodic representations of the words that are created by the melody, then the present findings may be better understood in light of results obtained by Thompson & Russo
<xref ref-type="bibr" rid="pone.0009889-Thompson1">[45]</xref>
. They showed that participants perceived the meaning of song lyrics as enhanced when familiarity with the songs was increased (see section 6.4 in
<xref ref-type="bibr" rid="pone.0009889-Patel2">[5]</xref>
for an interesting discussion of those results). We could thus speculate that our participants'
<italic>lexico-semantic expectations for sung words</italic>
were violated not only when the target word was different from the prime (W≠M= condition) but also when the target melody was different from the prime (W=M≠). This interpretation accounts for the N400 effects associated with differences on each dimension as they stand in contrast to the tight perceptual combination of repeated words and melodies (W=M=). Further work is needed to differentiate how variations in the musical dimension of songs affect lexical access
<xref ref-type="bibr" rid="pone.0009889-Lau1">[96]</xref>
, general semantic memory
<xref ref-type="bibr" rid="pone.0009889-Kutas4">[97]</xref>
, and conceptual relatedness
<xref ref-type="bibr" rid="pone.0009889-Schn1">[20]</xref>
,
<xref ref-type="bibr" rid="pone.0009889-Daltrozzo1">[21]</xref>
,
<xref ref-type="bibr" rid="pone.0009889-Aramaki1">[98]</xref>
. For instance, future studies using pairs of sung words that are semantically related to each other, or sung word targets primed by other meaningful stimuli (e.g. pictures, environmental sounds, or meaningful musical excerpts), could elucidate the dynamics of the N400 component in song.</p>
<p>Overall, results showed that N400 components are generated when the target does not match the prime in pairs of sung words on either dimension (linguistic or musical). It must be emphasized here that these results were found regardless of the direction of attention, thereby reflecting the automatic processing of the linguistic and musical dimensions when words are sung. This pattern of results may also reflect the inability of participants to selectively focus their attention on the words or on the melodies, precisely because the two dimensions cannot be separated. We explore this possibility next.</p>
</sec>
<sec id="s4c">
<title>Interactive processing</title>
<p>Both behavioral and ERP data in the N400 latency band clearly revealed interactive processing of the linguistic and musical dimensions, which occur simultaneously in sung words. This interaction was found independently of the direction of attention (i.e., in both the Linguistic and Musical tasks, and in the absence of a Task by Word by Melody interaction). Moreover, an ANOVA on the difference waves demonstrated that the theoretical sum of the ERPs for the simple linguistic and musical variations was significantly larger than the actual ERP in the double variation condition (see also
<xref ref-type="fig" rid="pone-0009889-g005">Figure 5</xref>
). Therefore, an additive model did not account for the data reported here. Furthermore, the pattern of interaction is strikingly symmetric between the two dimensions. The N400 word effect (different vs. same words) only occurs when melodies are the same; likewise, the N400 melody effect (different vs. same melodies) and the effect on RTs (slower for M≠ than M=) only occur when words are the same but not when they are different, as illustrated in
<xref ref-type="fig" rid="pone-0009889-g004">Figure 4</xref>
. These findings coincide with previous studies of sung and spoken language that have documented an influence of the musical dimension on linguistic processing, even when attention is directed to the linguistic aspect
<xref ref-type="bibr" rid="pone.0009889-Kolinsky1">[43]</xref>
,
<xref ref-type="bibr" rid="pone.0009889-Bigand1">[44]</xref>
,
<xref ref-type="bibr" rid="pone.0009889-PoulinCharronnat1">[46]</xref>
,
<xref ref-type="bibr" rid="pone.0009889-Fedorenko1">[47]</xref>
,
<xref ref-type="bibr" rid="pone.0009889-Slevc1">[99]</xref>
. Thus, the main conclusion that can be drawn from these results is that words and melody are closely interwoven in early stages of cognitive processing. This outcome is compatible with a recent report by Lidji et al.
<xref ref-type="bibr" rid="pone.0009889-Lidji1">[49]</xref>
of ERP evidence for interactive processing between vowel and pitch in song perception. The spatio-temporal brain dynamics of this integrated response could be responsible for interactive effects between word and melody in song, observed in a growing number of behavioral studies on perception
<xref ref-type="bibr" rid="pone.0009889-Kolinsky1">[43]</xref>
,
<xref ref-type="bibr" rid="pone.0009889-Bigand1">[44]</xref>
,
<xref ref-type="bibr" rid="pone.0009889-PoulinCharronnat1">[46]</xref>
,
<xref ref-type="bibr" rid="pone.0009889-Fedorenko1">[47]</xref>
, learning
<xref ref-type="bibr" rid="pone.0009889-Schn2">[35]</xref>
,
<xref ref-type="bibr" rid="pone.0009889-Thiessen1">[36]</xref>
, and memory
<xref ref-type="bibr" rid="pone.0009889-Serafine1">[30]</xref>
<xref ref-type="bibr" rid="pone.0009889-Rainey1">[33]</xref>
.</p>
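<p>To make the additivity test described above concrete, the following minimal sketch (in Python, with hypothetical variable names; it is not the analysis code used for the present study) shows how an additive prediction can be derived from the simple-effect difference waves and compared with the double-variation condition in the N400 latency band:</p>
<preformat>
import numpy as np

# Hypothetical ERP arrays (subjects x time samples), one per condition,
# time-locked to target onset; names and values are illustrative only.
rng = np.random.default_rng(0)
n_subjects, n_samples = 16, 256
erp_WsMs, erp_WdMs, erp_WsMd, erp_WdMd = (
    rng.normal(size=(n_subjects, n_samples)) for _ in range(4)
)

# Simple-effect difference waves relative to the fully repeated condition.
word_effect = erp_WdMs - erp_WsMs     # linguistic variation only (W≠M=)
melody_effect = erp_WsMd - erp_WsMs   # melodic variation only (W=M≠)
double_effect = erp_WdMd - erp_WsMs   # both dimensions vary (W≠M≠)

# Under a strictly additive model, the double-variation difference wave
# should equal the sum of the two simple-effect difference waves.
interaction = (word_effect + melody_effect) - double_effect

# Mean amplitude of the interaction term in the 300-500 ms band, assuming
# a 256 Hz sampling rate and an epoch starting at target onset.
sfreq = 256
n400_band = slice(int(0.300 * sfreq), int(0.500 * sfreq))
per_subject = interaction[:, n400_band].mean(axis=1)

# Testing these per-subject values against zero (e.g., with
# scipy.stats.ttest_1samp) indicates whether additivity can be rejected.
print(per_subject.mean())
</preformat>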
<p>Some important differences between our protocol using sung word pairs and previous studies using opera excerpts
<xref ref-type="bibr" rid="pone.0009889-Besson2">[40]</xref>
,
<xref ref-type="bibr" rid="pone.0009889-Bonnel1">[41]</xref>
can provide an explanation for why we did not find the same tendency toward independence of neural and behavioral correlates associated with the perception of words and melodies. First, the type of same-different task employed in the present study on stimulus pairs, but not in
<xref ref-type="bibr" rid="pone.0009889-Besson2">[40]</xref>
and
<xref ref-type="bibr" rid="pone.0009889-Bonnel1">[41]</xref>
, has been previously used by Miranda & Ullman
<xref ref-type="bibr" rid="pone.0009889-Miranda1">[66]</xref>
to show that notes that are tonally congruous (in-key) but incorrect in familiar melodies elicit both the N400 and P600 components, even when participants' attention was directed away from pitch. Furthermore, the violation paradigm used by Besson et al.
<xref ref-type="bibr" rid="pone.0009889-Besson2">[40]</xref>
and Bonnel et al.
<xref ref-type="bibr" rid="pone.0009889-Bonnel1">[41]</xref>
, in which the last note of the sung phrase of the opera excerpt was not only unexpected in the context but also out-of-key, may have made wrong notes more salient for the listener than the more subtle different-melody targets used in the present experiment. Indeed, even when the target melody was different from the prime, it contained tonal intervals in a reduced harmonic context. In fact, subtle stimulus variations have been used in several studies reporting interactions between linguistic and musical processing, such as the interference of harmony on phonological and semantic processing
<xref ref-type="bibr" rid="pone.0009889-Bigand1">[44]</xref>
,
<xref ref-type="bibr" rid="pone.0009889-PoulinCharronnat1">[46]</xref>
or the interaction of semantics and harmony
<xref ref-type="bibr" rid="pone.0009889-Steinbeis1">[17]</xref>
.</p>
<p>Nevertheless, it should be noted that the present results also provide some evidence for separate effects associated with the linguistic and musical dimensions. First, RTs were slower for different than for same words regardless of whether melodies were same or different (but, as mentioned above, RTs were slower for different than for same melodies only when words were the same). This slightly asymmetric pattern of interference may be related to the fact that our non-musician participants were less accustomed to making explicit judgments about melodic information than about linguistic information, as demonstrated by slower RTs in the Musical Task than in the Linguistic Task. These results correspond to those obtained in the first of a series of experiments on non-musicians by Kolinsky et al.
<xref ref-type="bibr" rid="pone.0009889-Kolinsky1">[43]</xref>
showing slower reaction times in the melodic than phonological task, in addition to an enhanced interference effect between phonology and intervals in the melodic task.</p>
<p>Second, while early differences between same and different words were found in the 50–150 and 150–300 ms latency bands (independently of the melodies), no such early differences were observed between same and different melodies. As discussed above, these early differences most likely reflect an effect of co-articulation caused by phonetic differences already present in the first syllable of different words, rather than an early onset of the N400 word effect.</p>
<p>Finally, differences in the late positivity were found between same and different melodies but not between same and different words. As mentioned above, results of several experiments have shown increased P3 components to unexpected variations in melody or harmony
<xref ref-type="bibr" rid="pone.0009889-Besson2">[40]</xref>
,
<xref ref-type="bibr" rid="pone.0009889-Besson4">[61]</xref>
<xref ref-type="bibr" rid="pone.0009889-Verleger1">[64]</xref>
,
<xref ref-type="bibr" rid="pone.0009889-Janata1">[84]</xref>
,
<xref ref-type="bibr" rid="pone.0009889-Carrion1">[89]</xref>
, typically interpreted as reflecting the allocation of attention and memory resources to task-relevant stimuli
<xref ref-type="bibr" rid="pone.0009889-Kutas3">[85]</xref>
<xref ref-type="bibr" rid="pone.0009889-Polich1">[88]</xref>
. The late positivity in the present study may also be related to the LPP, which is associated with the processing of affective stimuli
<xref ref-type="bibr" rid="pone.0009889-Cunningham1">[90]</xref>
<xref ref-type="bibr" rid="pone.0009889-Spreckelmeyer1">[92]</xref>
. Based on these accounts, the absence of a difference in late positive components for words may reflect the fact that they were easier to process than melodies (thereby requiring fewer attentional and memory resources) or that they did not elicit an emotional response. This last interpretation could be tested in further experiments by using affective sung words as targets.</p>
<p>To summarize, the present results show that N400 components were elicited not only by different words but also by different melodies, although the effect of melody began later and was followed by a late positive component. Moreover, the effects of melody and word were interactive between 300 and 500 ms, thereby showing that lyrics and tunes are intertwined in sung word cognition. A companion fMRI study conducted in our lab, using the same stimuli and attentional tasks, also yielded robust interactions between words and melody in songs within a network of brain regions typically involved in language and music perception
<xref ref-type="bibr" rid="pone.0009889-Schn3">[100]</xref>
. These results are consistent with a growing number of studies establishing that language and music share neural resources through interactive phonological/semantic and melodic/harmonic processing (cf.
<xref ref-type="bibr" rid="pone.0009889-Patel2">[5]</xref>
).</p>
<p>The present findings, along with other recent work on song perception and performance, are beginning to address the question of why song is, and has been since prehistoric times
<xref ref-type="bibr" rid="pone.0009889-Brown1">[23]</xref>
,
<xref ref-type="bibr" rid="pone.0009889-Mithen1">[24]</xref>
, so prevalent in the music perception and performance activities of most humans' daily lives. Intrinsic shared mechanisms between words and melody may be involved in a number of song-related behaviors that have shaped human nature, although we do not yet know whether the linguistic-musical interactions are the cause or the effect of these tendencies. For example, it appears that infants' preference for singing over speech
<xref ref-type="bibr" rid="pone.0009889-Nakata1">[26]</xref>
cannot be merely attributed to the presence of the musical dimension
<xref ref-type="bibr" rid="pone.0009889-delEtoile1">[27]</xref>
and may reflect a specific proclivity for singing-based mother-infant interactions. In early humans, adding melody to speech may have fostered parent-infant bonding and thus given an evolutionary advantage to individuals possessing more highly developed musical traits
<xref ref-type="bibr" rid="pone.0009889-Dissanayake1">[101]</xref>
. Singing to children fosters language acquisition, perhaps because exaggerated prosody aids segmentation
<xref ref-type="bibr" rid="pone.0009889-Bergeson1">[102]</xref>
and the added musical information provides redundant cues for learning
<xref ref-type="bibr" rid="pone.0009889-Schn2">[35]</xref>
,
<xref ref-type="bibr" rid="pone.0009889-Thiessen1">[36]</xref>
. Melody in song may also serve as a mnemonic for storage of words in long-term memory (e.g.,
<xref ref-type="bibr" rid="pone.0009889-Rainey1">[33]</xref>
). Research along these lines may also begin to shed light on the mechanisms responsible for the benefits of Melodic Intonation Therapy and other singing-based music therapy techniques in the speech rehabilitation process
<xref ref-type="bibr" rid="pone.0009889-Norton1">[103]</xref>
.</p>
</sec>
</sec>
<sec sec-type="supplementary-material" id="s5">
<title>Supporting Information</title>
<supplementary-material content-type="local-data" id="pone.0009889.s001">
<label>Audio S1</label>
<caption>
<p>Example of stimulus pair in condition same word/same melody (W= M=).</p>
<p>(0.22 MB WAV)</p>
</caption>
<media xlink:href="pone.0009889.s001.wav" mimetype="audio" mime-subtype="wav">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="pone.0009889.s002">
<label>Audio S2</label>
<caption>
<p>Example of stimulus pair in condition same word/different melody (W= M≠).</p>
<p>(0.22 MB WAV)</p>
</caption>
<media xlink:href="pone.0009889.s002.wav" mimetype="audio" mime-subtype="wav">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="pone.0009889.s003">
<label>Audio S3</label>
<caption>
<p>Example of stimulus pair in condition different word/same melody (W≠ M=).</p>
<p>(0.22 MB WAV)</p>
</caption>
<media xlink:href="pone.0009889.s003.wav" mimetype="audio" mime-subtype="wav">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="pone.0009889.s004">
<label>Audio S4</label>
<caption>
<p>Example of stimulus pair in condition different word/different melody (W≠ M≠).</p>
<p>(0.22 MB WAV)</p>
</caption>
<media xlink:href="pone.0009889.s004.wav" mimetype="audio" mime-subtype="wav">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="pone.0009889.s005">
<label>Appendix S1</label>
<caption>
<p>Pairs of sung words in each of the four experimental conditions, in one list of the Latin Square design (the first author can be contacted to obtain the other three lists), with each trisyllabic French word and the 3-note melody on which it was sung (one note per syllable). The melodies are represented in standard MIDI codes, where C4 = 60, C#4 = 61, D4 = 62, D#4 = 63, E4 = 64, F4 = 65, F#4 = 66, G4 = 67, G#4 = 68, A4 = 69, A#4 = 70, B4 = 71, C5 = 72, and so on.</p>
<p>(0.24 MB DOC)</p>
</caption>
<media xlink:href="pone.0009889.s005.doc" mimetype="application" mime-subtype="msword">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
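<p>As a convenience for readers of Appendix S1, the following minimal sketch (in Python; the helper below is an illustration and not part of the published materials) decodes the MIDI note numbers used in the appendix back into note names:</p>
<preformat>
# Mapping follows the convention given in Appendix S1 (C4 = 60, C#4 = 61, ...).
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def midi_to_name(midi_number: int) -> str:
    """Convert a MIDI note number (e.g., 60) to a note name with octave (e.g., 'C4')."""
    octave = midi_number // 12 - 1   # 60 // 12 - 1 = 4, so 60 maps to C4
    return f"{NOTE_NAMES[midi_number % 12]}{octave}"

# Example: decoding a 3-note melody (one note per sung syllable).
print([midi_to_name(n) for n in (60, 62, 64)])   # ['C4', 'D4', 'E4']
</preformat>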
</sec>
</body>
<back>
<ack>
<p>The authors gratefully acknowledge Vanina Luigi, Elaine Ne and Monique Chiambretto for their technical assistance; Sølvi Ystad for assistance with recording the stimuli; Serge Charron for singing the stimuli; the Laboratoire de Mécanique et d'Acoustique in Marseille for allowing us to use their anechoic room for sound recordings; Jill Cuadra for proofreading; and Edward Large and three anonymous Reviewers for their helpful comments on previous versions of the manuscript.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="pone.0009889-Besson1">
<label>1</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Besson</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Schön</surname>
<given-names>D</given-names>
</name>
</person-group>
<year>2001</year>
<article-title>Comparison between language and music.</article-title>
<source>Annals of the New York Academy of Sciences</source>
<volume>930</volume>
<fpage>232</fpage>
<lpage>258</lpage>
<pub-id pub-id-type="pmid">11458832</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Patel1">
<label>2</label>
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Patel</surname>
<given-names>AD</given-names>
</name>
<name>
<surname>Peretz</surname>
<given-names>I</given-names>
</name>
</person-group>
<year>1997</year>
<article-title>Is music autonomous from language? A neuropsychological appraisal.</article-title>
<person-group person-group-type="editor">
<name>
<surname>Deliege</surname>
<given-names>I</given-names>
</name>
<name>
<surname>Sloboda</surname>
<given-names>J</given-names>
</name>
</person-group>
<source>Perception and cognition of music</source>
<publisher-loc>London</publisher-loc>
<publisher-name>Erlbaum Psychology Press</publisher-name>
<fpage>191</fpage>
<lpage>215</lpage>
</mixed-citation>
</ref>
<ref id="pone.0009889-Peretz1">
<label>3</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Peretz</surname>
<given-names>I</given-names>
</name>
<name>
<surname>Coltheart</surname>
<given-names>M</given-names>
</name>
</person-group>
<year>2003</year>
<article-title>Modularity of music processing.</article-title>
<source>Nature Neuroscience</source>
<volume>6</volume>
<fpage>688</fpage>
<lpage>691</lpage>
</mixed-citation>
</ref>
<ref id="pone.0009889-Koelsch1">
<label>4</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Koelsch</surname>
<given-names>S</given-names>
</name>
</person-group>
<year>2005</year>
<article-title>Neural substrates of processing syntax and semantics in music.</article-title>
<source>Current Opinion in Neurobiology</source>
<volume>15</volume>
<fpage>207</fpage>
<lpage>212</lpage>
<pub-id pub-id-type="pmid">15831404</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Patel2">
<label>5</label>
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Patel</surname>
<given-names>AD</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>Music, Language, and the Brain.</article-title>
<publisher-loc>New York</publisher-loc>
<publisher-name>Oxford University Press</publisher-name>
</mixed-citation>
</ref>
<ref id="pone.0009889-Peretz2">
<label>6</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Peretz</surname>
<given-names>I</given-names>
</name>
<name>
<surname>Kolinsky</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Tramo</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Labrecque</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Hublet</surname>
<given-names>C</given-names>
</name>
<etal></etal>
</person-group>
<year>1994</year>
<article-title>Functional dissociations following bilateral lesions of auditory cortex.</article-title>
<source>Brain</source>
<volume>117 (Pt 6)</volume>
<fpage>1283</fpage>
<lpage>1301</lpage>
<pub-id pub-id-type="pmid">7820566</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Hbert1">
<label>7</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hébert</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Peretz</surname>
<given-names>I</given-names>
</name>
</person-group>
<year>2001</year>
<article-title>Are text and tune of familiar songs separable by brain damage?</article-title>
<source>Brain and Cognition</source>
<volume>46</volume>
<fpage>169</fpage>
<lpage>175</lpage>
<pub-id pub-id-type="pmid">11527321</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Racette1">
<label>8</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Racette</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Bard</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Peretz</surname>
<given-names>I</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>Making non-fluent aphasics speak: sing along!</article-title>
<source>Brain</source>
<volume>129</volume>
<fpage>2571</fpage>
<lpage>2584</lpage>
<pub-id pub-id-type="pmid">16959816</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Schmithorst1">
<label>9</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Schmithorst</surname>
<given-names>VJ</given-names>
</name>
</person-group>
<year>2005</year>
<article-title>Separate cortical networks involved in music perception: preliminary functional MRI evidence for modularity of music processing.</article-title>
<source>NeuroImage</source>
<volume>25</volume>
<fpage>444</fpage>
<lpage>451</lpage>
<pub-id pub-id-type="pmid">15784423</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Patel3">
<label>10</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Patel</surname>
<given-names>AD</given-names>
</name>
<name>
<surname>Gibson</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Ratner</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Besson</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Holcomb</surname>
<given-names>PJ</given-names>
</name>
</person-group>
<year>1998</year>
<article-title>Processing syntactic relations in language and music: an event-related potential study.</article-title>
<source>Journal of Cognitive Neuroscience</source>
<volume>10</volume>
<fpage>717</fpage>
<lpage>733</lpage>
<pub-id pub-id-type="pmid">9831740</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Maess1">
<label>11</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Maess</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Koelsch</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Gunter</surname>
<given-names>TC</given-names>
</name>
<name>
<surname>Friederici</surname>
<given-names>AD</given-names>
</name>
</person-group>
<year>2001</year>
<article-title>Musical syntax is processed in Broca's area: an MEG study.</article-title>
<source>Nature Neuroscience</source>
<volume>4</volume>
<fpage>540</fpage>
<lpage>545</lpage>
</mixed-citation>
</ref>
<ref id="pone.0009889-Levitin1">
<label>12</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Levitin</surname>
<given-names>DJ</given-names>
</name>
<name>
<surname>Menon</surname>
<given-names>V</given-names>
</name>
</person-group>
<year>2003</year>
<article-title>Musical structure is processed in “language” areas of the brain: a possible role for Brodmann Area 47 in temporal coherence.</article-title>
<source>NeuroImage</source>
<volume>20</volume>
<fpage>2142</fpage>
<lpage>2152</lpage>
<pub-id pub-id-type="pmid">14683718</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Koelsch2">
<label>13</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Koelsch</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Friederici</surname>
<given-names>AD</given-names>
</name>
</person-group>
<year>2003</year>
<article-title>Toward the neural basis of processing structure in music. Comparative results of different neurophysiological investigation methods.</article-title>
<source>Annals of the New York Academy of Sciences</source>
<volume>999</volume>
<fpage>15</fpage>
<lpage>28</lpage>
<pub-id pub-id-type="pmid">14681114</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Gelfand1">
<label>14</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gelfand</surname>
<given-names>JR</given-names>
</name>
<name>
<surname>Bookheimer</surname>
<given-names>SY</given-names>
</name>
</person-group>
<year>2003</year>
<article-title>Dissociating neural mechanisms of temporal sequencing and processing phonemes.</article-title>
<source>Neuron</source>
<volume>38</volume>
<fpage>831</fpage>
<lpage>842</lpage>
<pub-id pub-id-type="pmid">12797966</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Hickok1">
<label>15</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hickok</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Buchsbaum</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Humphries</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Muftuler</surname>
<given-names>T</given-names>
</name>
</person-group>
<year>2003</year>
<article-title>Auditory-motor interaction revealed by fMRI: speech, music, and working memory in area Spt.</article-title>
<source>Journal of Cognitive Neuroscience</source>
<volume>15</volume>
<fpage>673</fpage>
<lpage>682</lpage>
<pub-id pub-id-type="pmid">12965041</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Koelsch3">
<label>16</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Koelsch</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Kasper</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Sammler</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Schulze</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Gunter</surname>
<given-names>T</given-names>
</name>
<etal></etal>
</person-group>
<year>2004</year>
<article-title>Music, language and meaning: brain signatures of semantic processing.</article-title>
<source>Nature Neuroscience</source>
<volume>7</volume>
<fpage>302</fpage>
<lpage>307</lpage>
</mixed-citation>
</ref>
<ref id="pone.0009889-Steinbeis1">
<label>17</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Steinbeis</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Koelsch</surname>
<given-names>S</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>Shared neural resources between music and language indicate semantic processing of musical tension-resolution patterns.</article-title>
<source>Cerebral Cortex</source>
<volume>18</volume>
<fpage>1169</fpage>
<lpage>1178</lpage>
<pub-id pub-id-type="pmid">17720685</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Steinbeis2">
<label>18</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Steinbeis</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Koelsch</surname>
<given-names>S</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>Comparing the processing of music and language meaning using EEG and FMRI provides evidence for similar and distinct neural representations.</article-title>
<source>PLoS One</source>
<volume>3</volume>
<fpage>e2226</fpage>
<pub-id pub-id-type="pmid">18493611</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Frey1">
<label>19</label>
<mixed-citation publication-type="other">
<person-group person-group-type="author">
<name>
<surname>Frey</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Marie</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Prod'Homme</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Timsit-Berthier</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Schön</surname>
<given-names>D</given-names>
</name>
<etal></etal>
</person-group>
<year>2009</year>
<article-title>Temporal Semiotic Units as Minimal Meaningful Units in Music? An Electrophysiological Approach.</article-title>
<source>Music Perception</source>
<fpage>247</fpage>
<lpage>256</lpage>
</mixed-citation>
</ref>
<ref id="pone.0009889-Schn1">
<label>20</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Schön</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Ystad</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Kronland-Martinet</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Besson</surname>
<given-names>M</given-names>
</name>
</person-group>
<article-title>(in press) The evocative power of sounds: EEG study of conceptual priming between words and nonverbal sounds.</article-title>
<source>Journal of Cognitive Neuroscience</source>
</mixed-citation>
</ref>
<ref id="pone.0009889-Daltrozzo1">
<label>21</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Daltrozzo</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Schön</surname>
<given-names>D</given-names>
</name>
</person-group>
<year>2009</year>
<article-title>Is conceptual processing in music automatic? An electrophysiological approach.</article-title>
<source>Brain Res</source>
<volume>1270</volume>
<fpage>88</fpage>
<lpage>94</lpage>
<pub-id pub-id-type="pmid">19306846</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Gordon1">
<label>22</label>
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Gordon</surname>
<given-names>RL</given-names>
</name>
<name>
<surname>Racette</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Schön</surname>
<given-names>D</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>Sensory-Motor Networks in Singing and Speaking: a comparative approach.</article-title>
<person-group person-group-type="editor">
<name>
<surname>Altenmüller</surname>
<given-names>E</given-names>
</name>
</person-group>
<source>Music, Motor Control and the Brain</source>
<publisher-loc>New York</publisher-loc>
<publisher-name>Oxford University Press</publisher-name>
</mixed-citation>
</ref>
<ref id="pone.0009889-Brown1">
<label>23</label>
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Brown</surname>
<given-names>S</given-names>
</name>
</person-group>
<year>2000</year>
<article-title>The “musilanguage” model of music evolution</article-title>
<person-group person-group-type="editor">
<name>
<surname>Wallin</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Merker</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Brown</surname>
<given-names>S</given-names>
</name>
</person-group>
<source>The Origins of Music</source>
<publisher-loc>Cambridge, MA</publisher-loc>
<publisher-name>MIT Press</publisher-name>
<fpage>271</fpage>
<lpage>300</lpage>
</mixed-citation>
</ref>
<ref id="pone.0009889-Mithen1">
<label>24</label>
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Mithen</surname>
<given-names>SJ</given-names>
</name>
</person-group>
<year>2005</year>
<article-title>The Singing Neanderthals: the origins of music, language, mind and body.</article-title>
<publisher-loc>London</publisher-loc>
<publisher-name>Weidenfeld & Nicolson</publisher-name>
</mixed-citation>
</ref>
<ref id="pone.0009889-Patel4">
<label>25</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Patel</surname>
<given-names>A</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>Musical Rhythm, Linguistic Rhythm, and Human Evolution.</article-title>
<source>Music Perception</source>
<volume>24</volume>
<fpage>99</fpage>
<lpage>104</lpage>
</mixed-citation>
</ref>
<ref id="pone.0009889-Nakata1">
<label>26</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nakata</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Trehub</surname>
<given-names>SE</given-names>
</name>
</person-group>
<year>2004</year>
<article-title>Infants' responsiveness to maternal speech and singing.</article-title>
<source>Infant Behavior & Development</source>
<volume>27</volume>
<fpage>455</fpage>
<lpage>464</lpage>
</mixed-citation>
</ref>
<ref id="pone.0009889-delEtoile1">
<label>27</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>de l'Etoile</surname>
<given-names>SK</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>Infant behavioral responses to infant-directed singing and other maternal interactions.</article-title>
<source>Infant Behavior & Development</source>
<volume>29</volume>
<fpage>456</fpage>
<lpage>470</lpage>
<pub-id pub-id-type="pmid">17138298</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Bartholomeus1">
<label>28</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bartholomeus</surname>
<given-names>B</given-names>
</name>
</person-group>
<year>1974</year>
<article-title>Effects of task requirements on ear superiority for sung speech.</article-title>
<source>Cortex</source>
<volume>10</volume>
<fpage>215</fpage>
<lpage>223</lpage>
<pub-id pub-id-type="pmid">16295094</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Goodglass1">
<label>29</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Goodglass</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Calderon</surname>
<given-names>M</given-names>
</name>
</person-group>
<year>1977</year>
<article-title>Parallel processing of verbal and musical stimuli in right and left hemispheres.</article-title>
<source>Neuropsychologia</source>
<volume>15</volume>
<fpage>397</fpage>
<lpage>407</lpage>
<pub-id pub-id-type="pmid">854158</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Serafine1">
<label>30</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Serafine</surname>
<given-names>ML</given-names>
</name>
<name>
<surname>Crowder</surname>
<given-names>RG</given-names>
</name>
<name>
<surname>Repp</surname>
<given-names>BH</given-names>
</name>
</person-group>
<year>1984</year>
<article-title>Integration of melody and text in memory for songs.</article-title>
<source>Cognition</source>
<volume>16</volume>
<fpage>285</fpage>
<lpage>303</lpage>
<pub-id pub-id-type="pmid">6541107</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Peretz3">
<label>31</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Peretz</surname>
<given-names>I</given-names>
</name>
<name>
<surname>Radeau</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Arguin</surname>
<given-names>M</given-names>
</name>
</person-group>
<year>2004</year>
<article-title>Two-way interactions between music and language: evidence from priming recognition of tune and lyrics in familiar songs.</article-title>
<source>Memory & Cognition</source>
<volume>32</volume>
<fpage>142</fpage>
<lpage>152</lpage>
</mixed-citation>
</ref>
<ref id="pone.0009889-Wallace1">
<label>32</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wallace</surname>
<given-names>WT</given-names>
</name>
</person-group>
<year>1994</year>
<article-title>Memory for music: Effect of melody recall on text.</article-title>
<source>Journal of Experimental Psychology</source>
<volume>20</volume>
<fpage>1471</fpage>
<lpage>1485</lpage>
</mixed-citation>
</ref>
<ref id="pone.0009889-Rainey1">
<label>33</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rainey</surname>
<given-names>DW</given-names>
</name>
<name>
<surname>Larsen</surname>
<given-names>JD</given-names>
</name>
</person-group>
<year>2002</year>
<article-title>The Effect of Familiar Melodies on Initial Learning and Long-term Memory for Unconnected Text.</article-title>
<source>Music Perception</source>
<volume>20</volume>
<fpage>173</fpage>
<lpage>186</lpage>
</mixed-citation>
</ref>
<ref id="pone.0009889-Kilgour1">
<label>34</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kilgour</surname>
<given-names>AR</given-names>
</name>
<name>
<surname>Jakobson</surname>
<given-names>LS</given-names>
</name>
<name>
<surname>Cuddy</surname>
<given-names>LL</given-names>
</name>
</person-group>
<year>2000</year>
<article-title>Music training and rate of presentation as mediators of text and song recall.</article-title>
<source>Memory & Cognition</source>
<volume>28</volume>
<fpage>700</fpage>
<lpage>710</lpage>
</mixed-citation>
</ref>
<ref id="pone.0009889-Schn2">
<label>35</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Schön</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Boyer</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Moreno</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Besson</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Peretz</surname>
<given-names>I</given-names>
</name>
<etal></etal>
</person-group>
<year>2008</year>
<article-title>Songs as an aid for language acquisition.</article-title>
<source>Cognition</source>
<volume>106</volume>
<fpage>975</fpage>
<lpage>983</lpage>
<pub-id pub-id-type="pmid">17475231</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Thiessen1">
<label>36</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Thiessen</surname>
<given-names>ED</given-names>
</name>
<name>
<surname>Saffran</surname>
<given-names>JR</given-names>
</name>
</person-group>
<year>2009</year>
<article-title>How the melody facilitates the message and vice versa in infant learning and memory.</article-title>
<source>Ann N Y Acad Sci</source>
<volume>1169</volume>
<fpage>225</fpage>
<lpage>233</lpage>
<pub-id pub-id-type="pmid">19673786</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Koneni1">
<label>37</label>
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Konečni</surname>
<given-names>VJ</given-names>
</name>
</person-group>
<year>1984</year>
<article-title>Elusive effects of artists' “messages”.</article-title>
<person-group person-group-type="editor">
<name>
<surname>Crozier</surname>
<given-names>WR</given-names>
</name>
<name>
<surname>Chapman</surname>
<given-names>AJ</given-names>
</name>
</person-group>
<source>Cognitive Processes in the Perception of Art</source>
<publisher-loc>Amsterdam</publisher-loc>
<publisher-name>North Holland</publisher-name>
<fpage>71</fpage>
<lpage>93</lpage>
</mixed-citation>
</ref>
<ref id="pone.0009889-Stratton1">
<label>38</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Stratton</surname>
<given-names>VN</given-names>
</name>
<name>
<surname>Zalanowski</surname>
<given-names>AH</given-names>
</name>
</person-group>
<year>1994</year>
<article-title>Affective impact of music vs. lyrics.</article-title>
<source>Empirical Studies of the Arts</source>
<volume>12</volume>
<fpage>173</fpage>
<lpage>184</lpage>
</mixed-citation>
</ref>
<ref id="pone.0009889-Ali1">
<label>39</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ali</surname>
<given-names>SO</given-names>
</name>
<name>
<surname>Peynircioğlu</surname>
<given-names>ZF</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>Songs and emotions: are lyrics and melodies equal partners?</article-title>
<source>Psychology of Music</source>
<volume>34</volume>
<fpage>511</fpage>
<lpage>534</lpage>
</mixed-citation>
</ref>
<ref id="pone.0009889-Besson2">
<label>40</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Besson</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Faïta</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Peretz</surname>
<given-names>I</given-names>
</name>
<name>
<surname>Bonnel</surname>
<given-names>A-M</given-names>
</name>
<name>
<surname>Requin</surname>
<given-names>J</given-names>
</name>
</person-group>
<year>1998</year>
<article-title>Singing in the brain: Independence of Lyrics and Tunes.</article-title>
<source>Psychological Science</source>
<volume>9</volume>
<fpage>494</fpage>
<lpage>498</lpage>
</mixed-citation>
</ref>
<ref id="pone.0009889-Bonnel1">
<label>41</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bonnel</surname>
<given-names>AM</given-names>
</name>
<name>
<surname>Faïta</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Peretz</surname>
<given-names>I</given-names>
</name>
<name>
<surname>Besson</surname>
<given-names>M</given-names>
</name>
</person-group>
<year>2001</year>
<article-title>Divided attention between lyrics and tunes of operatic songs: evidence for independent processing.</article-title>
<source>Perception and Psychophysics</source>
<volume>63</volume>
<fpage>1201</fpage>
<lpage>1213</lpage>
<pub-id pub-id-type="pmid">11766944</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-vanBesouw1">
<label>42</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>van Besouw</surname>
<given-names>RM</given-names>
</name>
<name>
<surname>Howard</surname>
<given-names>DM</given-names>
</name>
<name>
<surname>Ternstrom</surname>
<given-names>S</given-names>
</name>
</person-group>
<year>2005</year>
<article-title>Towards an understanding of speech and song perception.</article-title>
<source>Logopedics Phoniatrics Vocology</source>
<volume>30</volume>
<fpage>129</fpage>
<lpage>135</lpage>
</mixed-citation>
</ref>
<ref id="pone.0009889-Kolinsky1">
<label>43</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kolinsky</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Lidji</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Peretz</surname>
<given-names>I</given-names>
</name>
<name>
<surname>Besson</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Morais</surname>
<given-names>J</given-names>
</name>
</person-group>
<year>2009</year>
<article-title>Processing interactions between phonology and melody: Vowels sing but consonants speak.</article-title>
<source>Cognition</source>
<volume>112</volume>
<fpage>1</fpage>
<lpage>20</lpage>
<pub-id pub-id-type="pmid">19409537</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Bigand1">
<label>44</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bigand</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Tillmann</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Poulin</surname>
<given-names>B</given-names>
</name>
<name>
<surname>D'Adamo</surname>
<given-names>DA</given-names>
</name>
<name>
<surname>Madurell</surname>
<given-names>F</given-names>
</name>
</person-group>
<year>2001</year>
<article-title>The effect of harmonic context on phoneme monitoring in vocal music.</article-title>
<source>Cognition</source>
<volume>81</volume>
<fpage>11</fpage>
<lpage>20</lpage>
</mixed-citation>
</ref>
<ref id="pone.0009889-Thompson1">
<label>45</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Thompson</surname>
<given-names>WF</given-names>
</name>
<name>
<surname>Russo</surname>
<given-names>FA</given-names>
</name>
</person-group>
<year>2004</year>
<article-title>The attribution of emotion and meaning to song lyrics.</article-title>
<source>Polskie Forum Psychologiczne</source>
<volume>9</volume>
<fpage>51</fpage>
<lpage>62</lpage>
</mixed-citation>
</ref>
<ref id="pone.0009889-PoulinCharronnat1">
<label>46</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Poulin-Charronnat</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Bigand</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Madurell</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Peereman</surname>
<given-names>R</given-names>
</name>
</person-group>
<year>2005</year>
<article-title>Musical structure modulates semantic priming in vocal music.</article-title>
<source>Cognition</source>
<volume>94</volume>
<fpage>67</fpage>
<lpage>78</lpage>
</mixed-citation>
</ref>
<ref id="pone.0009889-Fedorenko1">
<label>47</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fedorenko</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Patel</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Casasanto</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Winawer</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Gibson</surname>
<given-names>E</given-names>
</name>
</person-group>
<year>2009</year>
<article-title>Structural integration in language and music: evidence for a shared system.</article-title>
<source>Memory & Cognition</source>
<volume>37</volume>
<fpage>1</fpage>
<lpage>9</lpage>
</mixed-citation>
</ref>
<ref id="pone.0009889-Garner1">
<label>48</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Garner</surname>
<given-names>WR</given-names>
</name>
<name>
<surname>Felfoldy</surname>
<given-names>GL</given-names>
</name>
</person-group>
<year>1970</year>
<article-title>Integrality of stimulus dimensions in various types of information processing.</article-title>
<source>Cognitive Psychology</source>
<volume>1</volume>
<fpage>225</fpage>
<lpage>241</lpage>
</mixed-citation>
</ref>
<ref id="pone.0009889-Lidji1">
<label>49</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lidji</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Jolicoeur</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Moreau</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Kolinsky</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Peretz</surname>
<given-names>I</given-names>
</name>
</person-group>
<year>2009</year>
<article-title>Integrated preattentive processing of vowel and pitch: a mismatch negativity study.</article-title>
<source>Ann N Y Acad Sci</source>
<volume>1169</volume>
<fpage>481</fpage>
<lpage>484</lpage>
<pub-id pub-id-type="pmid">19673826</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Levy1">
<label>50</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Levy</surname>
<given-names>DA</given-names>
</name>
<name>
<surname>Granot</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Bentin</surname>
<given-names>S</given-names>
</name>
</person-group>
<year>2001</year>
<article-title>Processing specificity for human voice stimuli: electrophysiological evidence.</article-title>
<source>Neuroreport</source>
<volume>12</volume>
<fpage>2653</fpage>
<lpage>2657</lpage>
<pub-id pub-id-type="pmid">11522942</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Levy2">
<label>51</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Levy</surname>
<given-names>DA</given-names>
</name>
<name>
<surname>Granot</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Bentin</surname>
<given-names>S</given-names>
</name>
</person-group>
<year>2003</year>
<article-title>Neural sensitivity to human voices: ERP evidence of task and attentional influences.</article-title>
<source>Psychophysiology</source>
<volume>40</volume>
<fpage>291</fpage>
<lpage>305</lpage>
<pub-id pub-id-type="pmid">12820870</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Bigand2">
<label>52</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bigand</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Poulin-Charronnat</surname>
<given-names>B</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>Are we “experienced listeners”? A review of the musical capacities that do not depend on formal musical training.</article-title>
<source>Cognition</source>
<volume>100</volume>
<fpage>100</fpage>
<lpage>130</lpage>
<pub-id pub-id-type="pmid">16412412</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Bentin1">
<label>53</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bentin</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Kutas</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Hillyard</surname>
<given-names>SA</given-names>
</name>
</person-group>
<year>1993</year>
<article-title>Electrophysiological evidence for task effects on semantic priming in auditory word processing.</article-title>
<source>Psychophysiology</source>
<volume>30</volume>
<fpage>161</fpage>
<lpage>169</lpage>
<pub-id pub-id-type="pmid">8434079</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Holcomb1">
<label>54</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Holcomb</surname>
<given-names>PJ</given-names>
</name>
<name>
<surname>Neville</surname>
<given-names>HJ</given-names>
</name>
</person-group>
<year>1990</year>
<article-title>Auditory and Visual Semantic Priming in Lexical Decision: A Comparison Using Event-Related Brain Potentials.</article-title>
<source>Language and Cognitive Processes</source>
<volume>5</volume>
<fpage>281</fpage>
<lpage>312</lpage>
</mixed-citation>
</ref>
<ref id="pone.0009889-Kutas1">
<label>55</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kutas</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Hillyard</surname>
<given-names>SA</given-names>
</name>
</person-group>
<year>1980</year>
<article-title>Reading senseless sentences: brain potentials reflect semantic incongruity.</article-title>
<source>Science</source>
<volume>207</volume>
<fpage>203</fpage>
<lpage>205</lpage>
<pub-id pub-id-type="pmid">7350657</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Kutas2">
<label>56</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kutas</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Hillyard</surname>
<given-names>SA</given-names>
</name>
</person-group>
<year>1984</year>
<article-title>Brain potentials during reading reflect word expectancy and semantic association.</article-title>
<source>Nature</source>
<volume>307</volume>
<fpage>161</fpage>
<lpage>163</lpage>
<pub-id pub-id-type="pmid">6690995</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-McCallum1">
<label>57</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>McCallum</surname>
<given-names>WC</given-names>
</name>
<name>
<surname>Farmer</surname>
<given-names>SF</given-names>
</name>
<name>
<surname>Pocock</surname>
<given-names>PV</given-names>
</name>
</person-group>
<year>1984</year>
<article-title>The effects of physical and semantic incongruities on auditory event-related potentials.</article-title>
<source>Electroencephalography and Clinical Neurophysiology</source>
<volume>59</volume>
<fpage>477</fpage>
<lpage>488</lpage>
<pub-id pub-id-type="pmid">6209114</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Besson3">
<label>58</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Besson</surname>
<given-names>M</given-names>
</name>
<name>
<surname>van Petten</surname>
<given-names>CV</given-names>
</name>
<name>
<surname>Kutas</surname>
<given-names>M</given-names>
</name>
</person-group>
<year>1992</year>
<article-title>An Event-Related Potential (ERP) Analysis of Semantic Congruity and Repetition Effects in Sentences.</article-title>
<source>Journal of Cognitive Neuroscience</source>
<volume>4</volume>
<fpage>132</fpage>
<lpage>149</lpage>
</mixed-citation>
</ref>
<ref id="pone.0009889-Meyer1">
<label>59</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Meyer</surname>
<given-names>DE</given-names>
</name>
<name>
<surname>Schvaneveldt</surname>
<given-names>RW</given-names>
</name>
</person-group>
<year>1971</year>
<article-title>Facilitation in recognizing pairs of words: evidence of a dependence between retrieval operations.</article-title>
<source>Journal of Experimental Psychology</source>
<volume>90</volume>
<fpage>227</fpage>
<lpage>234</lpage>
<pub-id pub-id-type="pmid">5134329</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Neely1">
<label>60</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Neely</surname>
<given-names>JH</given-names>
</name>
</person-group>
<year>1977</year>
<article-title>Semantic Priming and Retrieval from Lexical Memory: Roles of Inhibitionless Spreading Activation and Limited-Capacity Attention.</article-title>
<source>Journal of Experimental Psychology: General</source>
<volume>106</volume>
<fpage>226</fpage>
<lpage>254</lpage>
</mixed-citation>
</ref>
<ref id="pone.0009889-Besson4">
<label>61</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Besson</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Macar</surname>
<given-names>F</given-names>
</name>
</person-group>
<year>1987</year>
<article-title>An event-related potential analysis of incongruity in music and other non-linguistic contexts.</article-title>
<source>Psychophysiology</source>
<volume>24</volume>
<fpage>14</fpage>
<lpage>25</lpage>
<pub-id pub-id-type="pmid">3575590</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Besson5">
<label>62</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Besson</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Faïta</surname>
<given-names>F</given-names>
</name>
</person-group>
<year>1995</year>
<article-title>An Event-Related Potential study of musical expectancy: Comparison of musicians with non-musicians.</article-title>
<source>Journal of Experimental Psychology: Human Perception and Performance</source>
<volume>21</volume>
<fpage>1278</fpage>
<lpage>1296</lpage>
</mixed-citation>
</ref>
<ref id="pone.0009889-Paller1">
<label>63</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Paller</surname>
<given-names>KA</given-names>
</name>
<name>
<surname>McCarthy</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Wood</surname>
<given-names>CC</given-names>
</name>
</person-group>
<year>1992</year>
<article-title>Event-related potentials elicited by deviant endings to melodies.</article-title>
<source>Psychophysiology</source>
<volume>29</volume>
<fpage>202</fpage>
<lpage>206</lpage>
<pub-id pub-id-type="pmid">1635962</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Verleger1">
<label>64</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Verleger</surname>
<given-names>R</given-names>
</name>
</person-group>
<year>1990</year>
<article-title>P3-evoking wrong notes: unexpected, awaited, or arousing?</article-title>
<source>The International Journal of Neuroscience</source>
<volume>55</volume>
<fpage>171</fpage>
<lpage>179</lpage>
<pub-id pub-id-type="pmid">2084050</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Tillmann1">
<label>65</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tillmann</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Janata</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Bharucha</surname>
<given-names>JJ</given-names>
</name>
</person-group>
<year>2003</year>
<article-title>Activation of the inferior frontal cortex in musical priming.</article-title>
<source>Brain Research Cognitive Brain Research</source>
<volume>16</volume>
<fpage>145</fpage>
<lpage>161</lpage>
<pub-id pub-id-type="pmid">12668222</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Miranda1">
<label>66</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Miranda</surname>
<given-names>RA</given-names>
</name>
<name>
<surname>Ullman</surname>
<given-names>MT</given-names>
</name>
</person-group>
<year>2007</year>
<article-title>Double dissociation between rules and memory in music: an Event-Related Potential study.</article-title>
<source>NeuroImage</source>
<volume>38</volume>
<fpage>331</fpage>
<lpage>345</lpage>
<pub-id pub-id-type="pmid">17855126</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Pachella1">
<label>67</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pachella</surname>
<given-names>RG</given-names>
</name>
<name>
<surname>Miller</surname>
<given-names>JO</given-names>
</name>
</person-group>
<year>1976</year>
<article-title>Stimulus probability and same-different classification.</article-title>
<source>Perception and Psychophysics</source>
<volume>19</volume>
<fpage>29</fpage>
<lpage>34</lpage>
</mixed-citation>
</ref>
<ref id="pone.0009889-Gregg1">
<label>68</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gregg</surname>
<given-names>MK</given-names>
</name>
<name>
<surname>Samuel</surname>
<given-names>AG</given-names>
</name>
</person-group>
<year>2009</year>
<article-title>The importance of semantics in auditory representations.</article-title>
<source>Atten Percept Psychophys</source>
<volume>71</volume>
<fpage>607</fpage>
<lpage>619</lpage>
<pub-id pub-id-type="pmid">19304650</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Thomas1">
<label>69</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Thomas</surname>
<given-names>RD</given-names>
</name>
</person-group>
<year>1996</year>
<article-title>Processing time predictions of current models of perception in the classic additive factors paradigm.</article-title>
<source>Journal of Mathematical Psychology</source>
<volume>40</volume>
<fpage>441</fpage>
<lpage>455</lpage>
</mixed-citation>
</ref>
<ref id="pone.0009889-Astsano1">
<label>70</label>
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Astésano</surname>
<given-names>C</given-names>
</name>
</person-group>
<year>2001</year>
<article-title>Rythme et accentuation en français. Invariance et variabilité stylistique.</article-title>
<publisher-loc>Paris</publisher-loc>
<publisher-name>L'Harmattan</publisher-name>
</mixed-citation>
</ref>
<ref id="pone.0009889-Nguyen1">
<label>71</label>
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Nguyen</surname>
<given-names>N</given-names>
</name>
</person-group>
<year>2005</year>
<article-title>La perception de la parole.</article-title>
<person-group person-group-type="editor">
<name>
<surname>Nguyen</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Wauquier-Gravelines</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Durand</surname>
<given-names>J</given-names>
</name>
</person-group>
<source>Phonologie et Phonétique</source>
<publisher-loc>Paris</publisher-loc>
<publisher-name>Hermès</publisher-name>
<fpage>425</fpage>
<lpage>447</lpage>
</mixed-citation>
</ref>
<ref id="pone.0009889-McCarthy1">
<label>72</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>McCarthy</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Wood</surname>
<given-names>CC</given-names>
</name>
</person-group>
<year>1985</year>
<article-title>Scalp distributions of event-related potentials: an ambiguity associated with analysis of variance models.</article-title>
<source>Electroencephalogr Clin Neurophysiol</source>
<volume>62</volume>
<fpage>203</fpage>
<lpage>208</lpage>
<pub-id pub-id-type="pmid">2581760</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Urbach1">
<label>73</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Urbach</surname>
<given-names>TP</given-names>
</name>
<name>
<surname>Kutas</surname>
<given-names>M</given-names>
</name>
</person-group>
<year>2002</year>
<article-title>The intractability of scaling scalp distributions to infer neuroelectric sources.</article-title>
<source>Psychophysiology</source>
<volume>39</volume>
<fpage>791</fpage>
<lpage>808</lpage>
<pub-id pub-id-type="pmid">12462507</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Perrin1">
<label>74</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Perrin</surname>
<given-names>F</given-names>
</name>
<name>
<surname>García-Larrea</surname>
<given-names>L</given-names>
</name>
</person-group>
<year>2003</year>
<article-title>Modulation of the N400 potential during auditory phonological/semantic interaction.</article-title>
<source>Brain Res Cogn Brain Res</source>
<volume>17</volume>
<fpage>36</fpage>
<lpage>47</lpage>
<pub-id pub-id-type="pmid">12763190</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Relander1">
<label>75</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Relander</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Rämä</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Kujala</surname>
<given-names>T</given-names>
</name>
</person-group>
<year>2009</year>
<article-title>Word Semantics Is Processed Even without Attentional Effort.</article-title>
<source>J Cogn Neurosci</source>
<volume>21</volume>
<fpage>1511</fpage>
<lpage>1522</lpage>
<pub-id pub-id-type="pmid">18823236</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Astsano2">
<label>76</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Astésano</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Besson</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Alter</surname>
<given-names>K</given-names>
</name>
</person-group>
<year>2004</year>
<article-title>Brain potentials during semantic and prosodic processing in French.</article-title>
<source>Brain Research Cognitive Brain Research</source>
<volume>18</volume>
<fpage>172</fpage>
<lpage>184</lpage>
<pub-id pub-id-type="pmid">14736576</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Hohlfeld1">
<label>77</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hohlfeld</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Sommer</surname>
<given-names>W</given-names>
</name>
</person-group>
<year>2005</year>
<article-title>Semantic processing of unattended meaning is modulated by additional task load: evidence from electrophysiology.</article-title>
<source>Brain Res Cogn Brain Res</source>
<volume>24</volume>
<fpage>500</fpage>
<lpage>512</lpage>
<pub-id pub-id-type="pmid">16099362</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Magne1">
<label>78</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Magne</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Astésano</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Aramaki</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Ystad</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Kronland-Martinet</surname>
<given-names>R</given-names>
</name>
<etal></etal>
</person-group>
<year>2007</year>
<article-title>Influence of syllabic lengthening on semantic processing in spoken French: behavioral and electrophysiological evidence.</article-title>
<source>Cerebral Cortex</source>
<volume>17</volume>
<fpage>2659</fpage>
<lpage>2668</lpage>
<pub-id pub-id-type="pmid">17264253</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Ibez1">
<label>79</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ibáñez</surname>
<given-names>A</given-names>
</name>
<name>
<surname>López</surname>
<given-names>V</given-names>
</name>
<name>
<surname>Cornejo</surname>
<given-names>C</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>ERPs and contextual semantic discrimination: degrees of congruence in wakefulness and sleep.</article-title>
<source>Brain Lang</source>
<volume>98</volume>
<fpage>264</fpage>
<lpage>275</lpage>
<pub-id pub-id-type="pmid">16782185</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Digeser1">
<label>80</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Digeser</surname>
<given-names>FM</given-names>
</name>
<name>
<surname>Wohlberedt</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Hoppe</surname>
<given-names>U</given-names>
</name>
</person-group>
<year>2009</year>
<article-title>Contribution of spectrotemporal features on auditory event-related potentials elicited by consonant-vowel syllables.</article-title>
<source>Ear Hear</source>
<volume>30</volume>
<fpage>704</fpage>
<lpage>712</lpage>
<pub-id pub-id-type="pmid">19672195</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Nguyen2">
<label>81</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nguyen</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Fagyal</surname>
<given-names>Z</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>Acoustic aspects of vowel harmony in French.</article-title>
<source>Journal of Phonetics</source>
<volume>36</volume>
<fpage>1</fpage>
<lpage>27</lpage>
</mixed-citation>
</ref>
<ref id="pone.0009889-Hagoort1">
<label>82</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hagoort</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Brown</surname>
<given-names>CM</given-names>
</name>
</person-group>
<year>2000</year>
<article-title>ERP effects of listening to speech: semantic ERP effects.</article-title>
<source>Neuropsychologia</source>
<volume>38</volume>
<fpage>1518</fpage>
<lpage>1530</lpage>
<pub-id pub-id-type="pmid">10906377</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-VanPetten1">
<label>83</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Van Petten</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Coulson</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Rubin</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Plante</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Parks</surname>
<given-names>M</given-names>
</name>
</person-group>
<year>1999</year>
<article-title>Time course of word identification and semantic integration in spoken language.</article-title>
<source>J Exp Psychol Learn Mem Cogn</source>
<volume>25</volume>
<fpage>394</fpage>
<lpage>417</lpage>
<pub-id pub-id-type="pmid">10093207</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Janata1">
<label>84</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Janata</surname>
<given-names>P</given-names>
</name>
</person-group>
<year>1995</year>
<article-title>ERP Measures Assay the Degree of Expectancy Violation of Harmonic Contexts in Music.</article-title>
<source>J Cogn Neurosci</source>
<volume>7</volume>
<fpage>153</fpage>
<lpage>164</lpage>
</mixed-citation>
</ref>
<ref id="pone.0009889-Kutas3">
<label>85</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kutas</surname>
<given-names>M</given-names>
</name>
<name>
<surname>McCarthy</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Donchin</surname>
<given-names>E</given-names>
</name>
</person-group>
<year>1977</year>
<article-title>Augmenting mental chronometry: the P300 as a measure of stimulus evaluation time.</article-title>
<source>Science</source>
<volume>197</volume>
<fpage>792</fpage>
<lpage>795</lpage>
<pub-id pub-id-type="pmid">887923</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Johnson1">
<label>86</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Johnson</surname>
<given-names>R</given-names>
<suffix>Jr</suffix>
</name>
<name>
<surname>Donchin</surname>
<given-names>E</given-names>
</name>
</person-group>
<year>1978</year>
<article-title>On how P300 amplitude varies with the utility of the eliciting stimuli.</article-title>
<source>Electroencephalography and Clinical Neurophysiology</source>
<volume>44</volume>
<fpage>424</fpage>
<lpage>437</lpage>
<pub-id pub-id-type="pmid">76551</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Donchin1">
<label>87</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Donchin</surname>
<given-names>E</given-names>
</name>
</person-group>
<year>1981</year>
<article-title>Presidential address, 1980. Surprise!…Surprise?</article-title>
<source>Psychophysiology</source>
<volume>18</volume>
<fpage>493</fpage>
<lpage>513</lpage>
<pub-id pub-id-type="pmid">7280146</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Polich1">
<label>88</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Polich</surname>
<given-names>J</given-names>
</name>
</person-group>
<year>2007</year>
<article-title>Updating P300: an integrative theory of P3a and P3b.</article-title>
<source>Clin Neurophysiol</source>
<volume>118</volume>
<fpage>2128</fpage>
<lpage>2148</lpage>
<pub-id pub-id-type="pmid">17573239</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Carrion1">
<label>89</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Carrion</surname>
<given-names>RE</given-names>
</name>
<name>
<surname>Bly</surname>
<given-names>BM</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>The effects of learning on event-related potential correlates of musical expectancy.</article-title>
<source>Psychophysiology</source>
<volume>45</volume>
<fpage>759</fpage>
<lpage>775</lpage>
<pub-id pub-id-type="pmid">18665861</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Cunningham1">
<label>90</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cunningham</surname>
<given-names>WA</given-names>
</name>
<name>
<surname>Espinet</surname>
<given-names>SD</given-names>
</name>
<name>
<surname>DeYoung</surname>
<given-names>CG</given-names>
</name>
<name>
<surname>Zelazo</surname>
<given-names>PD</given-names>
</name>
</person-group>
<year>2005</year>
<article-title>Attitudes to the right- and left: frontal ERP asymmetries associated with stimulus valence and processing goals.</article-title>
<source>NeuroImage</source>
<volume>28</volume>
<fpage>827</fpage>
<lpage>834</lpage>
<pub-id pub-id-type="pmid">16039143</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Pastor1">
<label>91</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pastor</surname>
<given-names>MC</given-names>
</name>
<name>
<surname>Bradley</surname>
<given-names>MM</given-names>
</name>
<name>
<surname>Low</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Versace</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Molto</surname>
<given-names>J</given-names>
</name>
<etal></etal>
</person-group>
<year>2008</year>
<article-title>Affective picture perception: emotion, context, and the late positive potential.</article-title>
<source>Brain Res</source>
<volume>1189</volume>
<fpage>145</fpage>
<lpage>151</lpage>
<pub-id pub-id-type="pmid">18068150</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Spreckelmeyer1">
<label>92</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Spreckelmeyer</surname>
<given-names>KN</given-names>
</name>
<name>
<surname>Kutas</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Urbach</surname>
<given-names>TP</given-names>
</name>
<name>
<surname>Altenmüller</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Münte</surname>
<given-names>TF</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>Combined perception of emotion in pictures and musical sounds.</article-title>
<source>Brain Res</source>
<volume>1070</volume>
<fpage>160</fpage>
<lpage>170</lpage>
<pub-id pub-id-type="pmid">16403462</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Serafine2">
<label>93</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Serafine</surname>
<given-names>ML</given-names>
</name>
<name>
<surname>Davidson</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Crowder</surname>
<given-names>RG</given-names>
</name>
<name>
<surname>Repp</surname>
<given-names>BH</given-names>
</name>
</person-group>
<year>1986</year>
<article-title>On the Nature of Melody-Text Integration in Memory for Songs.</article-title>
<source>Journal of Memory and Language</source>
<volume>25</volume>
<fpage>123</fpage>
<lpage>135</lpage>
</mixed-citation>
</ref>
<ref id="pone.0009889-Mietz1">
<label>94</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mietz</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Toepel</surname>
<given-names>U</given-names>
</name>
<name>
<surname>Ischebeck</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Alter</surname>
<given-names>K</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>Inadequate and infrequent are not alike: ERPs to deviant prosodic patterns in spoken sentence comprehension.</article-title>
<source>Brain Lang</source>
<volume>104</volume>
<fpage>159</fpage>
<lpage>169</lpage>
<pub-id pub-id-type="pmid">17428526</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-SchmidtKassow1">
<label>95</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Schmidt-Kassow</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Kotz</surname>
<given-names>SA</given-names>
</name>
</person-group>
<year>2009</year>
<article-title>Event-related brain potentials suggest a late interaction of meter and syntax in the P600.</article-title>
<source>J Cogn Neurosci</source>
<volume>21</volume>
<fpage>1693</fpage>
<lpage>1708</lpage>
<pub-id pub-id-type="pmid">18855546</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Lau1">
<label>96</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lau</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Almeida</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Hines</surname>
<given-names>PC</given-names>
</name>
<name>
<surname>Poeppel</surname>
<given-names>D</given-names>
</name>
</person-group>
<year>2009</year>
<article-title>A lexical basis for N400 context effects: evidence from MEG.</article-title>
<source>Brain Lang</source>
<volume>111</volume>
<fpage>161</fpage>
<lpage>172</lpage>
<pub-id pub-id-type="pmid">19815267</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Kutas4">
<label>97</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kutas</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Federmeier</surname>
<given-names>KD</given-names>
</name>
</person-group>
<year>2000</year>
<article-title>Electrophysiology reveals semantic memory use in language comprehension.</article-title>
<source>Trends Cogn Sci</source>
<volume>4</volume>
<fpage>463</fpage>
<lpage>470</lpage>
<pub-id pub-id-type="pmid">11115760</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Aramaki1">
<label>98</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Aramaki</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Marie</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Kronland-Martinet</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Ystad</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Besson</surname>
<given-names>M</given-names>
</name>
</person-group>
<article-title>(in press) Sound Categorization and Conceptual Priming for Nonlinguistic and Linguistic Sounds.</article-title>
<source>Journal of Cognitive Neuroscience</source>
</mixed-citation>
</ref>
<ref id="pone.0009889-Slevc1">
<label>99</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Slevc</surname>
<given-names>LR</given-names>
</name>
<name>
<surname>Rosenberg</surname>
<given-names>JC</given-names>
</name>
<name>
<surname>Patel</surname>
<given-names>AD</given-names>
</name>
</person-group>
<year>2009</year>
<article-title>Making psycholinguistics musical: self-paced reading time evidence for shared processing of linguistic and musical syntax.</article-title>
<source>Psychon Bull Rev</source>
<volume>16</volume>
<fpage>374</fpage>
<lpage>381</lpage>
<pub-id pub-id-type="pmid">19293110</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009889-Schn3">
<label>100</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Schön</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Gordon</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Campagne</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Magne</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Astésano</surname>
<given-names>C</given-names>
</name>
<etal></etal>
</person-group>
<article-title>(in press) Similar cerebral networks in language, music, and song perception.</article-title>
<source>NeuroImage</source>
</mixed-citation>
</ref>
<ref id="pone.0009889-Dissanayake1">
<label>101</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dissanayake</surname>
<given-names>E</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>If music is the food of love, what about survival and reproductive success?</article-title>
<source>Musicae Scientiae Special Issue</source>
<fpage>169</fpage>
<lpage>195</lpage>
</mixed-citation>
</ref>
<ref id="pone.0009889-Bergeson1">
<label>102</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bergeson</surname>
<given-names>TR</given-names>
</name>
<name>
<surname>Trehub</surname>
<given-names>SE</given-names>
</name>
</person-group>
<year>1999</year>
<article-title>Mothers' Singing to Infants and Preschool Children.</article-title>
<source>Infant Behavior & Development</source>
<volume>22</volume>
<fpage>51</fpage>
<lpage>64</lpage>
</mixed-citation>
</ref>
<ref id="pone.0009889-Norton1">
<label>103</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Norton</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Zipse</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Marchina</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Schlaug</surname>
<given-names>G</given-names>
</name>
</person-group>
<year>2009</year>
<article-title>Melodic Intonation Therapy: shared insights on how it is done and why it might help.</article-title>
<source>Ann N Y Acad Sci</source>
<volume>1169</volume>
<fpage>431</fpage>
<lpage>436</lpage>
<pub-id pub-id-type="pmid">19673819</pub-id>
</mixed-citation>
</ref>
</ref-list>
<fn-group>
<fn fn-type="conflict">
<p>
<bold>Competing Interests: </bold>
The authors have declared that no competing interests exist.</p>
</fn>
<fn fn-type="financial-disclosure">
<p>
<bold>Funding: </bold>
This research was supported by a grant from the Human Frontier Science Program “An interdisciplinary approach to the problem of language and music specificity” (HFSP#RGP0053) to M. Besson and was conducted at the Institut de Neurosciences Cognitives de la Méditerranée, while R.L. Gordon was a graduate student. D. Schön and C. Astésano were supported by the HFSP grant; C. Magne benefited from a “Cognitive Science” Fellowship from the French Ministry of Research; and R.L. Gordon benefited from a Fellowship from the American Association of University Women. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.</p>
</fn>
</fn-group>
</back>
</pmc>
<affiliations>
<list>
<country>
<li>France</li>
<li>États-Unis</li>
</country>
<region>
<li>Floride</li>
<li>Provence-Alpes-Côte d'Azur</li>
<li>Tennessee</li>
</region>
<settlement>
<li>Marseille</li>
<li>Toulouse</li>
</settlement>
<orgName>
<li>Université de la Méditerranée</li>
</orgName>
</list>
<tree>
<country name="États-Unis">
<region name="Floride">
<name sortKey="Gordon, Reyna L" sort="Gordon, Reyna L" uniqKey="Gordon R" first="Reyna L." last="Gordon">Reyna L. Gordon</name>
</region>
<name sortKey="Magne, Cyrille" sort="Magne, Cyrille" uniqKey="Magne C" first="Cyrille" last="Magne">Cyrille Magne</name>
</country>
<country name="France">
<noRegion>
<name sortKey="Gordon, Reyna L" sort="Gordon, Reyna L" uniqKey="Gordon R" first="Reyna L." last="Gordon">Reyna L. Gordon</name>
</noRegion>
<name sortKey="Astesano, Corine" sort="Astesano, Corine" uniqKey="Astesano C" first="Corine" last="Astésano">Corine Astésano</name>
<name sortKey="Astesano, Corine" sort="Astesano, Corine" uniqKey="Astesano C" first="Corine" last="Astésano">Corine Astésano</name>
<name sortKey="Besson, Mireille" sort="Besson, Mireille" uniqKey="Besson M" first="Mireille" last="Besson">Mireille Besson</name>
<name sortKey="Schon, Daniele" sort="Schon, Daniele" uniqKey="Schon D" first="Daniele" last="Schön">Daniele Schön</name>
</country>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Wicri/Musique/explor/OperaV1/Data/Ncbi/Merge
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000688 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd -nk 000688 | SxmlIndent | more
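As a minimal sketch (assuming the Dilib tools shown above are on the PATH and that EXPLOR_STEP is set as in the first example), the lookup can be wrapped in a small shell function so that any record of this corpus can be displayed by its internal key; show_record is a hypothetical helper, not a Dilib command:

show_record() {
    # $1: internal record key, e.g. 000688 (this record)
    HfdSelect -h "$EXPLOR_STEP/biblio.hfd" -nk "$1" | SxmlIndent | more
}

show_record 000688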

To add a link to this page within the Wicri network

{{Explor lien
   |wiki=    Wicri/Musique
   |area=    OperaV1
   |flux=    Ncbi
   |étape=   Merge
   |type=    RBID
   |clé=     PMC:2847603
   |texte=   Words and Melody Are Intertwined in Perception of Sung Words: EEG and Behavioral Evidence
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/RBID.i   -Sk "pubmed:20360991" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd   \
       | NlmPubMed2Wicri -a OperaV1 
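The same pipeline can be parameterized on the PubMed identifier; this is a hedged sketch that reuses only the commands and options shown above (pubmed2wiki is a hypothetical wrapper name, not part of Dilib), with this record's ID 20360991 as the example:

pubmed2wiki() {
    # $1: PubMed ID of the record to render, e.g. 20360991
    HfdIndexSelect -h "$EXPLOR_AREA/Data/Ncbi/Merge/RBID.i" -Sk "pubmed:$1" \
        | HfdSelect -Kh "$EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd" \
        | NlmPubMed2Wicri -a OperaV1
}

pubmed2wiki 20360991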

Wicri

This area was generated with Dilib version V0.6.21.
Data generation: Thu Apr 14 14:59:05 2016. Site generation: Thu Jan 4 23:09:23 2024