Opera exploration server

Please note: this site is under development.
Please note: this site is generated automatically from raw corpora.
The information has therefore not been validated.

Intonation processing deficits of emotional words among Mandarin Chinese speakers with congenital amusia: an ERP study

Internal identifier: 000E34 (Pmc/Curation); previous: 000E33; next: 000E35

Intonation processing deficits of emotional words among Mandarin Chinese speakers with congenital amusia: an ERP study

Authors: Xuejing Lu [Australia, People's Republic of China]; Hao Tam Ho [Australia]; Fang Liu [United Kingdom]; Daxing Wu [People's Republic of China]; William F. Thompson [Australia]

Source:

RBID: PMC:4391227

Abstract

Background: Congenital amusia is a disorder that is known to affect the processing of musical pitch. Although individuals with amusia rarely show language deficits in daily life, a number of findings point to possible impairments in speech prosody that amusic individuals may compensate for by drawing on linguistic information. Using EEG, we investigated (1) whether the processing of speech prosody is impaired in amusia and (2) whether emotional linguistic information can compensate for this impairment.

Method: Twenty Chinese amusics and 22 matched controls were presented pairs of emotional words spoken with either statement or question intonation while their EEG was recorded. Their task was to judge whether the intonations were the same.

Results: Amusics exhibited impaired performance on the intonation-matching task for emotional linguistic information, as their performance was significantly worse than that of controls. EEG results showed a reduced N2 response to incongruent intonation pairs in amusics compared with controls, which likely reflects impaired conflict processing in amusia. However, our EEG results also indicated that amusics were intact in early sensory auditory processing, as revealed by a comparable N1 modulation in both groups.

Conclusion: We propose that the impairment in discriminating speech intonation observed among amusic individuals may arise from an inability to access information extracted at early processing stages. This, in turn, could reflect a disconnection between low-level and high-level processing.


URL:
DOI: 10.3389/fpsyg.2015.00385
PubMed: 25914659
PubMed Central: 4391227

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Intonation processing deficits of emotional words among Mandarin Chinese speakers with congenital amusia: an ERP study</title>
<author>
<name sortKey="Lu, Xuejing" sort="Lu, Xuejing" uniqKey="Lu X" first="Xuejing" last="Lu">Xuejing Lu</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Department of Psychology, Macquarie University</institution>
<country>Sydney, NSW, Australia</country>
</nlm:aff>
<country xml:lang="fr">Australie</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>Medical Psychological Institute, The Second Xiangya Hospital, Central South University</institution>
<country>Changsha, China</country>
</nlm:aff>
<country xml:lang="fr">République populaire de Chine</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Ho, Hao Tam" sort="Ho, Hao Tam" uniqKey="Ho H" first="Hao Tam" last="Ho">Hao Tam Ho</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Department of Psychology, Macquarie University</institution>
<country>Sydney, NSW, Australia</country>
</nlm:aff>
<country xml:lang="fr">Australie</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Liu, Fang" sort="Liu, Fang" uniqKey="Liu F" first="Fang" last="Liu">Fang Liu</name>
<affiliation wicri:level="1">
<nlm:aff id="aff3">
<institution>Department of Speech, Hearing and Phonetic Sciences, University College London</institution>
<country>London, UK</country>
</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Wu, Daxing" sort="Wu, Daxing" uniqKey="Wu D" first="Daxing" last="Wu">Daxing Wu</name>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>Medical Psychological Institute, The Second Xiangya Hospital, Central South University</institution>
<country>Changsha, China</country>
</nlm:aff>
<country xml:lang="fr">République populaire de Chine</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Thompson, William F" sort="Thompson, William F" uniqKey="Thompson W" first="William F." last="Thompson">William F. Thompson</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Department of Psychology, Macquarie University</institution>
<country>Sydney, NSW, Australia</country>
</nlm:aff>
<country xml:lang="fr">Australie</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">25914659</idno>
<idno type="pmc">4391227</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4391227</idno>
<idno type="RBID">PMC:4391227</idno>
<idno type="doi">10.3389/fpsyg.2015.00385</idno>
<date when="2015">2015</date>
<idno type="wicri:Area/Pmc/Corpus">000E34</idno>
<idno type="wicri:Area/Pmc/Curation">000E34</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Intonation processing deficits of emotional words among Mandarin Chinese speakers with congenital amusia: an ERP study</title>
<author>
<name sortKey="Lu, Xuejing" sort="Lu, Xuejing" uniqKey="Lu X" first="Xuejing" last="Lu">Xuejing Lu</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Department of Psychology, Macquarie University</institution>
<country>Sydney, NSW, Australia</country>
</nlm:aff>
<country xml:lang="fr">Australie</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>Medical Psychological Institute, The Second Xiangya Hospital, Central South University</institution>
<country>Changsha, China</country>
</nlm:aff>
<country xml:lang="fr">République populaire de Chine</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Ho, Hao Tam" sort="Ho, Hao Tam" uniqKey="Ho H" first="Hao Tam" last="Ho">Hao Tam Ho</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Department of Psychology, Macquarie University</institution>
<country>Sydney, NSW, Australia</country>
</nlm:aff>
<country xml:lang="fr">Australie</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Liu, Fang" sort="Liu, Fang" uniqKey="Liu F" first="Fang" last="Liu">Fang Liu</name>
<affiliation wicri:level="1">
<nlm:aff id="aff3">
<institution>Department of Speech, Hearing and Phonetic Sciences, University College London</institution>
<country>London, UK</country>
</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Wu, Daxing" sort="Wu, Daxing" uniqKey="Wu D" first="Daxing" last="Wu">Daxing Wu</name>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>Medical Psychological Institute, The Second Xiangya Hospital, Central South University</institution>
<country>Changsha, China</country>
</nlm:aff>
<country xml:lang="fr">République populaire de Chine</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Thompson, William F" sort="Thompson, William F" uniqKey="Thompson W" first="William F." last="Thompson">William F. Thompson</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Department of Psychology, Macquarie University</institution>
<country>Sydney, NSW, Australia</country>
</nlm:aff>
<country xml:lang="fr">Australie</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Frontiers in Psychology</title>
<idno type="e-ISSN">1664-1078</idno>
<imprint>
<date when="2015">2015</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>
<bold>Background:</bold>
Congenital amusia is a disorder that is known to affect the processing of musical pitch. Although individuals with amusia rarely show language deficits in daily life, a number of findings point to possible impairments in speech prosody that amusic individuals may compensate for by drawing on linguistic information. Using EEG, we investigated (1) whether the processing of speech prosody is impaired in amusia and (2) whether emotional linguistic information can compensate for this impairment.</p>
<p>
<bold>Method:</bold>
Twenty Chinese amusics and 22 matched controls were presented pairs of emotional words spoken with either statement or question intonation while their EEG was recorded. Their task was to judge whether the intonations were the same.</p>
<p>
<bold>Results:</bold>
Amusics exhibited impaired performance on the intonation-matching task for emotional linguistic information, as their performance was significantly worse than that of controls. EEG results showed a reduced N2 response to incongruent intonation pairs in amusics compared with controls, which likely reflects impaired conflict processing in amusia. However, our EEG results also indicated that amusics were intact in early sensory auditory processing, as revealed by a comparable N1 modulation in both groups.</p>
<p>
<bold>Conclusion:</bold>
We propose that the impairment in discriminating speech intonation observed among amusic individuals may arise from an inability to access information extracted at early processing stages. This, in turn, could reflect a disconnection between low-level and high-level processing.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Alain, C" uniqKey="Alain C">C. Alain</name>
</author>
<author>
<name sortKey="Mcneely, H E" uniqKey="Mcneely H">H. E. McNeely</name>
</author>
<author>
<name sortKey="He, Y" uniqKey="He Y">Y. He</name>
</author>
<author>
<name sortKey="Christensen, B K" uniqKey="Christensen B">B. K. Christensen</name>
</author>
<author>
<name sortKey="West, R" uniqKey="West R">R. West</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Albouy, P" uniqKey="Albouy P">P. Albouy</name>
</author>
<author>
<name sortKey="Mattout, J" uniqKey="Mattout J">J. Mattout</name>
</author>
<author>
<name sortKey="Bouet, R" uniqKey="Bouet R">R. Bouet</name>
</author>
<author>
<name sortKey="Maby, E" uniqKey="Maby E">E. Maby</name>
</author>
<author>
<name sortKey="Sanchez, G" uniqKey="Sanchez G">G. Sanchez</name>
</author>
<author>
<name sortKey="Aguera, P E" uniqKey="Aguera P">P. E. Aguera</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Alho, K" uniqKey="Alho K">K. Alho</name>
</author>
<author>
<name sortKey="Teder, W" uniqKey="Teder W">W. Teder</name>
</author>
<author>
<name sortKey="Lavikainen, J" uniqKey="Lavikainen J">J. Lavikainen</name>
</author>
<author>
<name sortKey="N T Nen, R" uniqKey="N T Nen R">R. Näätänen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Anvari, S H" uniqKey="Anvari S">S. H. Anvari</name>
</author>
<author>
<name sortKey="Trainor, L J" uniqKey="Trainor L">L. J. Trainor</name>
</author>
<author>
<name sortKey="Woodside, J" uniqKey="Woodside J">J. Woodside</name>
</author>
<author>
<name sortKey="Levy, B A" uniqKey="Levy B">B. A. Levy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Astheimer, L B" uniqKey="Astheimer L">L. B. Astheimer</name>
</author>
<author>
<name sortKey="Sanders, L D" uniqKey="Sanders L">L. D. Sanders</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ayotte, J" uniqKey="Ayotte J">J. Ayotte</name>
</author>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I. Peretz</name>
</author>
<author>
<name sortKey="Hyde, K" uniqKey="Hyde K">K. Hyde</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Botvinick, M" uniqKey="Botvinick M">M. Botvinick</name>
</author>
<author>
<name sortKey="Braver, T S" uniqKey="Braver T">T. S. Braver</name>
</author>
<author>
<name sortKey="Barch, D M" uniqKey="Barch D">D. M. Barch</name>
</author>
<author>
<name sortKey="Carter, C S" uniqKey="Carter C">C. S. Carter</name>
</author>
<author>
<name sortKey="Cohen, J D" uniqKey="Cohen J">J. D. Cohen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Botvinick, M M" uniqKey="Botvinick M">M. M. Botvinick</name>
</author>
<author>
<name sortKey="Cohen, J D" uniqKey="Cohen J">J. D. Cohen</name>
</author>
<author>
<name sortKey="Carter, C S" uniqKey="Carter C">C. S. Carter</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Botvinick, M" uniqKey="Botvinick M">M. Botvinick</name>
</author>
<author>
<name sortKey="Nystrom, L E" uniqKey="Nystrom L">L. E. Nystrom</name>
</author>
<author>
<name sortKey="Fissell, K" uniqKey="Fissell K">K. Fissell</name>
</author>
<author>
<name sortKey="Carter, C S" uniqKey="Carter C">C. S. Carter</name>
</author>
<author>
<name sortKey="Cohen, J D" uniqKey="Cohen J">J. D. Cohen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Carter, C S" uniqKey="Carter C">C. S. Carter</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Compton, R J" uniqKey="Compton R">R. J. Compton</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dehaene, S" uniqKey="Dehaene S">S. Dehaene</name>
</author>
<author>
<name sortKey="Artiges, E" uniqKey="Artiges E">E. Artiges</name>
</author>
<author>
<name sortKey="Naccache, L" uniqKey="Naccache L">L. Naccache</name>
</author>
<author>
<name sortKey="Martelli, C" uniqKey="Martelli C">C. Martelli</name>
</author>
<author>
<name sortKey="Viard, A" uniqKey="Viard A">A. Viard</name>
</author>
<author>
<name sortKey="Schurhoff, F" uniqKey="Schurhoff F">F. Schurhoff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dehaene, S" uniqKey="Dehaene S">S. Dehaene</name>
</author>
<author>
<name sortKey="Changeux, J P" uniqKey="Changeux J">J. P. Changeux</name>
</author>
<author>
<name sortKey="Naccache, L" uniqKey="Naccache L">L. Naccache</name>
</author>
<author>
<name sortKey="Sackur, J" uniqKey="Sackur J">J. Sackur</name>
</author>
<author>
<name sortKey="Sergent, C" uniqKey="Sergent C">C. Sergent</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Delorme, A" uniqKey="Delorme A">A. Delorme</name>
</author>
<author>
<name sortKey="Makeig, S" uniqKey="Makeig S">S. Makeig</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Folstein, J R" uniqKey="Folstein J">J. R. Folstein</name>
</author>
<author>
<name sortKey="Van Petten, C" uniqKey="Van Petten C">C. Van Petten</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fox, E" uniqKey="Fox E">E. Fox</name>
</author>
<author>
<name sortKey="Lester, V" uniqKey="Lester V">V. Lester</name>
</author>
<author>
<name sortKey="Russo, R" uniqKey="Russo R">R. Russo</name>
</author>
<author>
<name sortKey="Bowles, R J" uniqKey="Bowles R">R. J. Bowles</name>
</author>
<author>
<name sortKey="Pichler, A" uniqKey="Pichler A">A. Pichler</name>
</author>
<author>
<name sortKey="Dutton, K" uniqKey="Dutton K">K. Dutton</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Giard, M H" uniqKey="Giard M">M. H. Giard</name>
</author>
<author>
<name sortKey="Perrin, F" uniqKey="Perrin F">F. Perrin</name>
</author>
<author>
<name sortKey="Echallier, J F" uniqKey="Echallier J">J. F. Echallier</name>
</author>
<author>
<name sortKey="Thevenet, M" uniqKey="Thevenet M">M. Thevenet</name>
</author>
<author>
<name sortKey="Froment, J C" uniqKey="Froment J">J. C. Froment</name>
</author>
<author>
<name sortKey="Pernier, J" uniqKey="Pernier J">J. Pernier</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hansen, C H" uniqKey="Hansen C">C. H. Hansen</name>
</author>
<author>
<name sortKey="Hansen, R D" uniqKey="Hansen R">R. D. Hansen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hutchins, S" uniqKey="Hutchins S">S. Hutchins</name>
</author>
<author>
<name sortKey="Gosselin, N" uniqKey="Gosselin N">N. Gosselin</name>
</author>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I. Peretz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jiang, C" uniqKey="Jiang C">C. Jiang</name>
</author>
<author>
<name sortKey="Hamm, J P" uniqKey="Hamm J">J. P. Hamm</name>
</author>
<author>
<name sortKey="Lim, V K" uniqKey="Lim V">V. K. Lim</name>
</author>
<author>
<name sortKey="Kirk, I J" uniqKey="Kirk I">I. J. Kirk</name>
</author>
<author>
<name sortKey="Yang, Y" uniqKey="Yang Y">Y. Yang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jiang, C" uniqKey="Jiang C">C. Jiang</name>
</author>
<author>
<name sortKey="Hamm, J P" uniqKey="Hamm J">J. P. Hamm</name>
</author>
<author>
<name sortKey="Lim, V K" uniqKey="Lim V">V. K. Lim</name>
</author>
<author>
<name sortKey="Kirk, I J" uniqKey="Kirk I">I. J. Kirk</name>
</author>
<author>
<name sortKey="Yang, Y" uniqKey="Yang Y">Y. Yang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kanske, P" uniqKey="Kanske P">P. Kanske</name>
</author>
<author>
<name sortKey="Kotz, S A" uniqKey="Kotz S">S. A. Kotz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kanske, P" uniqKey="Kanske P">P. Kanske</name>
</author>
<author>
<name sortKey="Kotz, S A" uniqKey="Kotz S">S. A. Kotz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kanske, P" uniqKey="Kanske P">P. Kanske</name>
</author>
<author>
<name sortKey="Kotz, S A" uniqKey="Kotz S">S. A. Kotz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kanske, P" uniqKey="Kanske P">P. Kanske</name>
</author>
<author>
<name sortKey="Kotz, S A" uniqKey="Kotz S">S. A. Kotz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kerns, J G" uniqKey="Kerns J">J. G. Kerns</name>
</author>
<author>
<name sortKey="Cohen, J D" uniqKey="Cohen J">J. D. Cohen</name>
</author>
<author>
<name sortKey="Macdonald, A W" uniqKey="Macdonald A">A. W. MacDonald</name>
</author>
<author>
<name sortKey="Cho, R Y" uniqKey="Cho R">R. Y. Cho</name>
</author>
<author>
<name sortKey="Stenger, V A" uniqKey="Stenger V">V. A. Stenger</name>
</author>
<author>
<name sortKey="Carter, C S" uniqKey="Carter C">C. S. Carter</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kerns, J G" uniqKey="Kerns J">J. G. Kerns</name>
</author>
<author>
<name sortKey="Cohen, J D" uniqKey="Cohen J">J. D. Cohen</name>
</author>
<author>
<name sortKey="Macdonald, I I I A W" uniqKey="Macdonald I">I. I. I., A. W. MacDonald</name>
</author>
<author>
<name sortKey="Johnson, M K" uniqKey="Johnson M">M. K. Johnson</name>
</author>
<author>
<name sortKey="Stenger, V A" uniqKey="Stenger V">V. A. Stenger</name>
</author>
<author>
<name sortKey="Aizenstein, H" uniqKey="Aizenstein H">H. Aizenstein</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kissler, J" uniqKey="Kissler J">J. Kissler</name>
</author>
<author>
<name sortKey="Herbert, C" uniqKey="Herbert C">C. Herbert</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kutas, M" uniqKey="Kutas M">M. Kutas</name>
</author>
<author>
<name sortKey="Federmeier, K D" uniqKey="Federmeier K">K. D. Federmeier</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lee, C Y" uniqKey="Lee C">C. Y. Lee</name>
</author>
<author>
<name sortKey="Hung, T H" uniqKey="Hung T">T. H. Hung</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Liu, F" uniqKey="Liu F">F. Liu</name>
</author>
<author>
<name sortKey="Jiang, C" uniqKey="Jiang C">C. Jiang</name>
</author>
<author>
<name sortKey="Thompson, W F" uniqKey="Thompson W">W. F. Thompson</name>
</author>
<author>
<name sortKey="Xu, Y" uniqKey="Xu Y">Y. Xu</name>
</author>
<author>
<name sortKey="Yang, Y" uniqKey="Yang Y">Y. Yang</name>
</author>
<author>
<name sortKey="Stewart, L" uniqKey="Stewart L">L. Stewart</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Liu, F" uniqKey="Liu F">F. Liu</name>
</author>
<author>
<name sortKey="Maggu, A R" uniqKey="Maggu A">A. R. Maggu</name>
</author>
<author>
<name sortKey="Lau, J C Y" uniqKey="Lau J">J. C. Y. Lau</name>
</author>
<author>
<name sortKey="Wong, P C M" uniqKey="Wong P">P. C. M. Wong</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Liu, F" uniqKey="Liu F">F. Liu</name>
</author>
<author>
<name sortKey="Patel, A D" uniqKey="Patel A">A. D. Patel</name>
</author>
<author>
<name sortKey="Fourcin, A" uniqKey="Fourcin A">A. Fourcin</name>
</author>
<author>
<name sortKey="Stewart, L" uniqKey="Stewart L">L. Stewart</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Luck, S" uniqKey="Luck S">S. Luck</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Macmillan, N A" uniqKey="Macmillan N">N. A. Macmillan</name>
</author>
<author>
<name sortKey="Creelman, C D" uniqKey="Creelman C">C. D. Creelman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Macmillan, N A" uniqKey="Macmillan N">N. A. Macmillan</name>
</author>
<author>
<name sortKey="Kaplan, H L" uniqKey="Kaplan H">H. L. Kaplan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mognon, A" uniqKey="Mognon A">A. Mognon</name>
</author>
<author>
<name sortKey="Jovicich, J" uniqKey="Jovicich J">J. Jovicich</name>
</author>
<author>
<name sortKey="Bruzzone, L" uniqKey="Bruzzone L">L. Bruzzone</name>
</author>
<author>
<name sortKey="Buiatti, M" uniqKey="Buiatti M">M. Buiatti</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Moreau, P" uniqKey="Moreau P">P. Moreau</name>
</author>
<author>
<name sortKey="Jolic Eur, P" uniqKey="Jolic Eur P">P. Jolicøeur</name>
</author>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I. Peretz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Morris, J S" uniqKey="Morris J">J. S. Morris</name>
</author>
<author>
<name sortKey="Ohman, A" uniqKey="Ohman A">A. Öhman</name>
</author>
<author>
<name sortKey="Dolan, R J" uniqKey="Dolan R">R. J. Dolan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Musacchia, G" uniqKey="Musacchia G">G. Musacchia</name>
</author>
<author>
<name sortKey="Sams, M" uniqKey="Sams M">M. Sams</name>
</author>
<author>
<name sortKey="Skoe, E" uniqKey="Skoe E">E. Skoe</name>
</author>
<author>
<name sortKey="Kraus, N" uniqKey="Kraus N">N. Kraus</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="N T Nen, R" uniqKey="N T Nen R">R. Näätänen</name>
</author>
<author>
<name sortKey="Picton, T" uniqKey="Picton T">T. Picton</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nan, Y" uniqKey="Nan Y">Y. Nan</name>
</author>
<author>
<name sortKey="Sun, Y" uniqKey="Sun Y">Y. Sun</name>
</author>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I. Peretz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nguyen, S" uniqKey="Nguyen S">S. Nguyen</name>
</author>
<author>
<name sortKey="Tillmann, B" uniqKey="Tillmann B">B. Tillmann</name>
</author>
<author>
<name sortKey="Gosselin, N" uniqKey="Gosselin N">N. Gosselin</name>
</author>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I. Peretz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nieuwenhuis, S" uniqKey="Nieuwenhuis S">S. Nieuwenhuis</name>
</author>
<author>
<name sortKey="Yeung, N" uniqKey="Yeung N">N. Yeung</name>
</author>
<author>
<name sortKey="Van Den Wildenberg, W" uniqKey="Van Den Wildenberg W">W. Van Den Wildenberg</name>
</author>
<author>
<name sortKey="Ridderinkhof, K R" uniqKey="Ridderinkhof K">K. R. Ridderinkhof</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ohman, A" uniqKey="Ohman A">A. Öhman</name>
</author>
<author>
<name sortKey="Lundqvist, D" uniqKey="Lundqvist D">D. Lundqvist</name>
</author>
<author>
<name sortKey="Esteves, F" uniqKey="Esteves F">F. Esteves</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ortigue, S" uniqKey="Ortigue S">S. Ortigue</name>
</author>
<author>
<name sortKey="Michel, C M" uniqKey="Michel C">C. M. Michel</name>
</author>
<author>
<name sortKey="Murray, M M" uniqKey="Murray M">M. M. Murray</name>
</author>
<author>
<name sortKey="Mohr, C" uniqKey="Mohr C">C. Mohr</name>
</author>
<author>
<name sortKey="Carbonnel, S" uniqKey="Carbonnel S">S. Carbonnel</name>
</author>
<author>
<name sortKey="Landis, T" uniqKey="Landis T">T. Landis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Palazova, M" uniqKey="Palazova M">M. Palazova</name>
</author>
<author>
<name sortKey="Mantwill, K" uniqKey="Mantwill K">K. Mantwill</name>
</author>
<author>
<name sortKey="Sommer, W" uniqKey="Sommer W">W. Sommer</name>
</author>
<author>
<name sortKey="Schacht, A" uniqKey="Schacht A">A. Schacht</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Patel, A D" uniqKey="Patel A">A. D. Patel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Patel, A D" uniqKey="Patel A">A. D. Patel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Patel, A D" uniqKey="Patel A">A. D. Patel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Patel, A D" uniqKey="Patel A">A. D. Patel</name>
</author>
<author>
<name sortKey="Foxton, J M" uniqKey="Foxton J">J. M. Foxton</name>
</author>
<author>
<name sortKey="Griffiths, T D" uniqKey="Griffiths T">T. D. Griffiths</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Patel, A D" uniqKey="Patel A">A. D. Patel</name>
</author>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I. Peretz</name>
</author>
<author>
<name sortKey="Tramo, M" uniqKey="Tramo M">M. Tramo</name>
</author>
<author>
<name sortKey="Labrecque, R" uniqKey="Labrecque R">R. Labrecque</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Patel, A D" uniqKey="Patel A">A. D. Patel</name>
</author>
<author>
<name sortKey="Wong, M" uniqKey="Wong M">M. Wong</name>
</author>
<author>
<name sortKey="Foxton, J" uniqKey="Foxton J">J. Foxton</name>
</author>
<author>
<name sortKey="Lochy, A" uniqKey="Lochy A">A. Lochy</name>
</author>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I. Peretz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I. Peretz</name>
</author>
<author>
<name sortKey="Ayotte, J" uniqKey="Ayotte J">J. Ayotte</name>
</author>
<author>
<name sortKey="Zatorre, R J" uniqKey="Zatorre R">R. J. Zatorre</name>
</author>
<author>
<name sortKey="Mehler, J" uniqKey="Mehler J">J. Mehler</name>
</author>
<author>
<name sortKey="Ahad, P" uniqKey="Ahad P">P. Ahad</name>
</author>
<author>
<name sortKey="Penhune, V B" uniqKey="Penhune V">V. B. Penhune</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I. Peretz</name>
</author>
<author>
<name sortKey="Brattico, E" uniqKey="Brattico E">E. Brattico</name>
</author>
<author>
<name sortKey="Jarvenpaa, M" uniqKey="Jarvenpaa M">M. Jarvenpaa</name>
</author>
<author>
<name sortKey="Tervaniemi, M" uniqKey="Tervaniemi M">M. Tervaniemi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I. Peretz</name>
</author>
<author>
<name sortKey="Brattico, E" uniqKey="Brattico E">E. Brattico</name>
</author>
<author>
<name sortKey="Tervaniemi, M" uniqKey="Tervaniemi M">M. Tervaniemi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I. Peretz</name>
</author>
<author>
<name sortKey="Champod, A S" uniqKey="Champod A">A. S. Champod</name>
</author>
<author>
<name sortKey="Hyde, K" uniqKey="Hyde K">K. Hyde</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Perez, E" uniqKey="Perez E">E. Pérez</name>
</author>
<author>
<name sortKey="Meyer, G" uniqKey="Meyer G">G. Meyer</name>
</author>
<author>
<name sortKey="Harrison, N" uniqKey="Harrison N">N. Harrison</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pylkk Nen, L" uniqKey="Pylkk Nen L">L. Pylkkänen</name>
</author>
<author>
<name sortKey="Marantz, A" uniqKey="Marantz A">A. Marantz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rozin, P" uniqKey="Rozin P">P. Rozin</name>
</author>
<author>
<name sortKey="Royzman, E B" uniqKey="Royzman E">E. B. Royzman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schirmer, A" uniqKey="Schirmer A">A. Schirmer</name>
</author>
<author>
<name sortKey="Kotz, S A" uniqKey="Kotz S">S. A. Kotz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Scott, G G" uniqKey="Scott G">G. G. Scott</name>
</author>
<author>
<name sortKey="O Donnell, P J" uniqKey="O Donnell P">P. J. O'Donnell</name>
</author>
<author>
<name sortKey="Leuthold, H" uniqKey="Leuthold H">H. Leuthold</name>
</author>
<author>
<name sortKey="Sereno, S C" uniqKey="Sereno S">S. C. Sereno</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Selkirk, E" uniqKey="Selkirk E">E. Selkirk</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stewart, L" uniqKey="Stewart L">L. Stewart</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stewart, L" uniqKey="Stewart L">L. Stewart</name>
</author>
<author>
<name sortKey="Walsh, V" uniqKey="Walsh V">V. Walsh</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Thompson, W F" uniqKey="Thompson W">W. F. Thompson</name>
</author>
<author>
<name sortKey="Marin, M M" uniqKey="Marin M">M. M. Marin</name>
</author>
<author>
<name sortKey="Stewart, L" uniqKey="Stewart L">L. Stewart</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Thompson, W F" uniqKey="Thompson W">W. F. Thompson</name>
</author>
<author>
<name sortKey="Schellenberg, E G" uniqKey="Schellenberg E">E. G. Schellenberg</name>
</author>
<author>
<name sortKey="Husain, G" uniqKey="Husain G">G. Husain</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tillmann, B" uniqKey="Tillmann B">B. Tillmann</name>
</author>
<author>
<name sortKey="Burnham, D" uniqKey="Burnham D">D. Burnham</name>
</author>
<author>
<name sortKey="Nguyen, S" uniqKey="Nguyen S">S. Nguyen</name>
</author>
<author>
<name sortKey="Grimault, N" uniqKey="Grimault N">N. Grimault</name>
</author>
<author>
<name sortKey="Gosselin, N" uniqKey="Gosselin N">N. Gosselin</name>
</author>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I. Peretz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Veen, V" uniqKey="Van Veen V">V. Van Veen</name>
</author>
<author>
<name sortKey="Carter, C S" uniqKey="Carter C">C. S. Carter</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vuilleumier, P" uniqKey="Vuilleumier P">P. Vuilleumier</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Widmann, A" uniqKey="Widmann A">A. Widmann</name>
</author>
<author>
<name sortKey="Schroger, E" uniqKey="Schroger E">E. Schröger</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Williams, J M G" uniqKey="Williams J">J. M. G. Williams</name>
</author>
<author>
<name sortKey="Mathews, A" uniqKey="Mathews A">A. Mathews</name>
</author>
<author>
<name sortKey="Macleod, C" uniqKey="Macleod C">C. MacLeod</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Woldorff, M G" uniqKey="Woldorff M">M. G. Woldorff</name>
</author>
<author>
<name sortKey="Gallen, C C" uniqKey="Gallen C">C. C. Gallen</name>
</author>
<author>
<name sortKey="Hampson, S A" uniqKey="Hampson S">S. A. Hampson</name>
</author>
<author>
<name sortKey="Hillyard, S A" uniqKey="Hillyard S">S. A. Hillyard</name>
</author>
<author>
<name sortKey="Pantev, C" uniqKey="Pantev C">C. Pantev</name>
</author>
<author>
<name sortKey="Sobel, D" uniqKey="Sobel D">D. Sobel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wong, P C" uniqKey="Wong P">P. C. Wong</name>
</author>
<author>
<name sortKey="Skoe, E" uniqKey="Skoe E">E. Skoe</name>
</author>
<author>
<name sortKey="Russo, N M" uniqKey="Russo N">N. M. Russo</name>
</author>
<author>
<name sortKey="Dees, T" uniqKey="Dees T">T. Dees</name>
</author>
<author>
<name sortKey="Kraus, N" uniqKey="Kraus N">N. Kraus</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Woods, D L" uniqKey="Woods D">D. L. Woods</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Xu, S" uniqKey="Xu S">S. Xu</name>
</author>
<author>
<name sortKey="Yin, H" uniqKey="Yin H">H. Yin</name>
</author>
<author>
<name sortKey="Wu, D" uniqKey="Wu D">D. Wu</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Yeung, N" uniqKey="Yeung N">N. Yeung</name>
</author>
<author>
<name sortKey="Botvinick, M M" uniqKey="Botvinick M">M. M. Botvinick</name>
</author>
<author>
<name sortKey="Cohen, J D" uniqKey="Cohen J">J. D. Cohen</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Front Psychol</journal-id>
<journal-id journal-id-type="iso-abbrev">Front Psychol</journal-id>
<journal-id journal-id-type="publisher-id">Front. Psychol.</journal-id>
<journal-title-group>
<journal-title>Frontiers in Psychology</journal-title>
</journal-title-group>
<issn pub-type="epub">1664-1078</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">25914659</article-id>
<article-id pub-id-type="pmc">4391227</article-id>
<article-id pub-id-type="doi">10.3389/fpsyg.2015.00385</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Psychology</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Intonation processing deficits of emotional words among Mandarin Chinese speakers with congenital amusia: an ERP study</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Lu</surname>
<given-names>Xuejing</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<uri xlink:type="simple" xlink:href="http://community.frontiersin.org/people/u/189092"></uri>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Ho</surname>
<given-names>Hao Tam</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<uri xlink:type="simple" xlink:href="http://community.frontiersin.org/people/u/189269"></uri>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Liu</surname>
<given-names>Fang</given-names>
</name>
<xref ref-type="aff" rid="aff3">
<sup>3</sup>
</xref>
<uri xlink:type="simple" xlink:href="http://community.frontiersin.org/people/u/22258"></uri>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Wu</surname>
<given-names>Daxing</given-names>
</name>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<xref ref-type="author-notes" rid="fn001">
<sup>*</sup>
</xref>
<uri xlink:type="simple" xlink:href="http://community.frontiersin.org/people/u/189268"></uri>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Thompson</surname>
<given-names>William F.</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="author-notes" rid="fn002">
<sup>*</sup>
</xref>
<uri xlink:type="simple" xlink:href="http://community.frontiersin.org/people/u/67299"></uri>
</contrib>
</contrib-group>
<aff id="aff1">
<sup>1</sup>
<institution>Department of Psychology, Macquarie University</institution>
<country>Sydney, NSW, Australia</country>
</aff>
<aff id="aff2">
<sup>2</sup>
<institution>Medical Psychological Institute, The Second Xiangya Hospital, Central South University</institution>
<country>Changsha, China</country>
</aff>
<aff id="aff3">
<sup>3</sup>
<institution>Department of Speech, Hearing and Phonetic Sciences, University College London</institution>
<country>London, UK</country>
</aff>
<author-notes>
<fn fn-type="edited-by">
<p>Edited by: Lauren Stewart, Goldsmiths, University of London, UK</p>
</fn>
<fn fn-type="edited-by">
<p>Reviewed by: Piia Astikainen, University of Jyväskylä, Finland; Mari Tervaniemi, University of Helsinki, Finland</p>
</fn>
<corresp id="fn001">*Correspondence: Daxing Wu, Medical Psychological Institute, The Second Xiangya Hospital, Central South University, No.139 Middle Renmin Road, Changsha 410011, China
<email xlink:type="simple">wudaxing2012@126.com</email>
;</corresp>
<corresp id="fn002">William F. Thompson, Department of Psychology, Macquarie University, Sydney, NSW 2109, Australia
<email xlink:type="simple">bill.thompson@mq.edu.au</email>
</corresp>
<fn fn-type="other" id="fn003">
<p>This article was submitted to Auditory Cognitive Neuroscience, a section of the journal Frontiers in Psychology</p>
</fn>
</author-notes>
<pub-date pub-type="epub">
<day>09</day>
<month>4</month>
<year>2015</year>
</pub-date>
<pub-date pub-type="collection">
<year>2015</year>
</pub-date>
<volume>6</volume>
<elocation-id>385</elocation-id>
<history>
<date date-type="received">
<day>15</day>
<month>10</month>
<year>2014</year>
</date>
<date date-type="accepted">
<day>18</day>
<month>3</month>
<year>2015</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright © 2015 Lu, Ho, Liu, Wu and Thompson.</copyright-statement>
<copyright-year>2015</copyright-year>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/4.0/">
<license-p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</license-p>
</license>
</permissions>
<abstract>
<p>
<bold>Background:</bold>
Congenital amusia is a disorder that is known to affect the processing of musical pitch. Although individuals with amusia rarely show language deficits in daily life, a number of findings point to possible impairments in speech prosody that amusic individuals may compensate for by drawing on linguistic information. Using EEG, we investigated (1) whether the processing of speech prosody is impaired in amusia and (2) whether emotional linguistic information can compensate for this impairment.</p>
<p>
<bold>Method:</bold>
Twenty Chinese amusics and 22 matched controls were presented pairs of emotional words spoken with either statement or question intonation while their EEG was recorded. Their task was to judge whether the intonations were the same.</p>
<p>
<bold>Results:</bold>
Amusics exhibited impaired performance on the intonation-matching task for emotional linguistic information, as their performance was significantly worse than that of controls. EEG results showed a reduced N2 response to incongruent intonation pairs in amusics compared with controls, which likely reflects impaired conflict processing in amusia. However, our EEG results also indicated that amusics were intact in early sensory auditory processing, as revealed by a comparable N1 modulation in both groups.</p>
<p>
<bold>Conclusion:</bold>
We propose that the impairment in discriminating speech intonation observed among amusic individuals may arise from an inability to access information extracted at early processing stages. This, in turn, could reflect a disconnection between low-level and high-level processing.</p>
</abstract>
<kwd-group>
<kwd>congenital amusia</kwd>
<kwd>intonation processing</kwd>
<kwd>pitch perception</kwd>
<kwd>conflict processing</kwd>
<kwd>ERP</kwd>
</kwd-group>
<counts>
<fig-count count="4"></fig-count>
<table-count count="3"></table-count>
<equation-count count="0"></equation-count>
<ref-count count="77"></ref-count>
<page-count count="12"></page-count>
<word-count count="9280"></word-count>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="introduction" id="s1">
<title>Introduction</title>
<p>Congenital amusia is a disorder that impacts individuals' ability to discriminate musical pitch. This impairment cannot be explained by hearing or neurological problems, low intelligence, or lack of exposure to music (Ayotte et al.,
<xref rid="B6" ref-type="bibr">2002</xref>
). Instead, it has been linked to a neurodevelopmental failure that renders amusic individuals unable to form stable mental representations of pitch (Patel,
<xref rid="B47" ref-type="bibr">2003</xref>
,
<xref rid="B48" ref-type="bibr">2008</xref>
). An important question is whether the pitch deficit accompanying congenital amusia is specific to music or extends to speech perception. Though individuals with amusia rarely report language problems in everyday life (Jiang et al.,
<xref rid="B19" ref-type="bibr">2010</xref>
; Liu et al.,
<xref rid="B33" ref-type="bibr">2010</xref>
) and show normal intonation processing when pitch contrasts are large (Ayotte et al.,
<xref rid="B6" ref-type="bibr">2002</xref>
; Peretz et al.,
<xref rid="B53" ref-type="bibr">2002</xref>
), evidence suggests that amusia does have an effect on individuals' language abilities to some degree. For example, studies have shown that amusics exhibit deficits in processing of lexical tone (Nan et al.,
<xref rid="B41" ref-type="bibr">2010</xref>
; Liu et al.,
<xref rid="B31" ref-type="bibr">2012</xref>
). Additionally, they are reported to have difficulties processing linguistic and emotional prosody in speech (Patel et al.,
<xref rid="B52" ref-type="bibr">2008</xref>
; Jiang et al.,
<xref rid="B19" ref-type="bibr">2010</xref>
; Liu et al.,
<xref rid="B33" ref-type="bibr">2010</xref>
; Thompson et al.,
<xref rid="B66" ref-type="bibr">2012</xref>
).</p>
<p>Speech prosody refers to the meaningful and sometimes paralinguistic acoustic attributes of speech, including pitch, timing, timbre, and intensity. Intonation—the pitch contour of a spoken utterance or “tone of voice”—is one aspect of speech prosody (Selkirk,
<xref rid="B63" ref-type="bibr">1995</xref>
). When intonation is used to make linguistic distinctions such as the distinction between a question and a statement, it is also referred to as linguistic pitch. The finding that amusic individuals are impaired at processing linguistic pitch suggests that pitch processing is a domain-general function that is engaged when perceiving both music and speech. This possibility aligns with results from studies showing that musical training can lead to enhanced performance on speech perception tasks, including phonological processing (Anvari et al.,
<xref rid="B4" ref-type="bibr">2002</xref>
), speech prosody perception (Thompson et al.,
<xref rid="B67" ref-type="bibr">2004</xref>
; see also Musacchia et al.,
<xref rid="B39" ref-type="bibr">2007</xref>
), linguistic pitch encoding (Wong et al.,
<xref rid="B74" ref-type="bibr">2007</xref>
), and lexical tone identification (Lee and Hung,
<xref rid="B30" ref-type="bibr">2008</xref>
). It has been argued that such positive “transfer effects” are possible because the brain networks involved in speech and music processing overlap (Patel,
<xref rid="B49" ref-type="bibr">2011</xref>
).</p>
<p>The ability to process speech prosody is important in daily human communication. Not only does prosody convey linguistic information, it enables listeners to infer a speaker's emotional state. Thompson et al. (
<xref rid="B66" ref-type="bibr">2012</xref>
) found that individuals with amusia exhibit reduced sensitivity to emotional prosody (e.g., happy, sad, and irritated). Nonetheless, such deficits in intonation processing and emotional prosody recognition may not pose a significant problem for amusic individuals when contextual, facial, and linguistic cues are available. As such, impairments to speech perception exhibited by amusic individuals that have been observed in laboratory conditions may disappear in naturalistic settings. Indeed, Ayotte et al. (
<xref rid="B6" ref-type="bibr">2002</xref>
) observed that amusic participants were able to discriminate spoken sentences with statement and question intonation, yet showed difficulties processing non-speech analogs in which all linguistic information was filtered out (see also Patel et al.,
<xref rid="B50" ref-type="bibr">2005</xref>
; Hutchins et al.,
<xref rid="B18" ref-type="bibr">2010</xref>
). One interpretation of this finding is that without linguistic information, prosodic information is processed via the (compromised) music mode, resulting in reduced sensitivity; in contrast, the presence of linguistic information might encourage processing via an intact speech mode, preserving sensitivity to speech prosody. It is unclear, however, whether the content of that linguistic information is relevant to this effect. In view of these findings, we examined whether explicit emotional (semantic) cues influence the ability of individuals with amusia to detect subtle pitch changes in speech.</p>
<p>Emotional linguistic information has been shown to facilitate stimulus processing. For example, in the so-called “emotional Stroop” task, in which perceivers are required to name the color of an emotional versus a non-emotional printed word, the former usually gives rise to faster reaction times than the latter (for a review see, e.g., Williams et al.,
<xref rid="B72" ref-type="bibr">1996</xref>
). These results align with a number of findings showing that affective stimuli, such as facial expressions and dangerous animals (e.g., snakes, spiders, etc.), speed up reaction times in visual search tasks (e.g., Fox et al.,
<xref rid="B15" ref-type="bibr">2000</xref>
). Emotional information is generally thought to “grab” perceivers' attention, leading to greater allocation of resources to the stimulus, which, in turn, leads to deeper stimulus processing (for reviews see Compton,
<xref rid="B10a" ref-type="bibr">2003</xref>
; Vuilleumier,
<xref rid="B70" ref-type="bibr">2005</xref>
). Although some evidence suggests that negative emotional information leads to greater behavioral facilitation than positive emotional information (e.g., Hansen and Hansen,
<xref rid="B17" ref-type="bibr">1988</xref>
; Öhman et al.,
<xref rid="B44" ref-type="bibr">2001</xref>
; for a review on “negative bias” see Rozin and Royzman,
<xref rid="B60" ref-type="bibr">2001</xref>
), other evidence indicates that positive stimuli (e.g., “kiss”) can improve performance as effectively as negative stimuli (e.g., “terror”) in tasks, such as the “flanker” and “Simon task” (e.g., Kanske and Kotz,
<xref rid="B21" ref-type="bibr">2010</xref>
,
<xref rid="B22" ref-type="bibr">2011a</xref>
,
<xref rid="B23" ref-type="bibr">b</xref>
,
<xref rid="B24" ref-type="bibr">c</xref>
).</p>
<p>The Stroop, Simon, and flanker tasks all induce a response conflict which typically elicits a negative-going ERP component, namely the N2, that peaks between 200 and 350 ms after stimulus onset (for a review see Folstein and Van Petten,
<xref rid="B14" ref-type="bibr">2008</xref>
). This component has also been shown to be elicited by conflicts between stimulus representations (Yeung et al.,
<xref rid="B77" ref-type="bibr">2004</xref>
). Source localization of the N2 points to neural generators within the anterior cingulate cortex (ACC; Van Veen and Carter,
<xref rid="B69" ref-type="bibr">2002</xref>
), an area that has been implicated in “conflict monitoring” (Carter,
<xref rid="B10" ref-type="bibr">1998</xref>
; Botvinick et al.,
<xref rid="B9" ref-type="bibr">1999</xref>
,
<xref rid="B8" ref-type="bibr">2004</xref>
). In addition to faster reaction times, Kanske and Kotz (
<xref rid="B22" ref-type="bibr">2011a</xref>
,
<xref rid="B23" ref-type="bibr">b</xref>
) observed a conflict-related negativity peaking around 230 ms after stimulus onset that was enhanced for both positive and negative words when compared with neutral words. The time window and characteristics of this conflict-related negativity closely resemble those of the N2.</p>
<p>Findings by Peretz et al. (
<xref rid="B55" ref-type="bibr">2005</xref>
) indicate that brain activity within the N2 time window appears to be impaired in amusia. More specifically, amusics showed a normal N2 response to unexpected small pitch changes (e.g., 25 cents), but they “overreacted” to large pitch changes (e.g., 200 cents) by eliciting an abnormally enlarged N2 when compared to control participants. Nonetheless, Peretz et al. (
<xref rid="B55" ref-type="bibr">2005</xref>
) interpreted amusics' ability to track the quarter-tone pitch difference as indicative of functional neural circuitry underlying implicit perception of fine-grained pitch differences. The observed pitch impairment in amusics arises, according to Peretz et al. (
<xref rid="B55" ref-type="bibr">2005</xref>
,
<xref rid="B54" ref-type="bibr">2009</xref>
), at a later, explicit stage of processing, as suggested by a larger P3 (Peretz et al.,
<xref rid="B55" ref-type="bibr">2005</xref>
) and the absence of P600 (Peretz et al.,
<xref rid="B54" ref-type="bibr">2009</xref>
) in response to pitch changes in amusics in comparison with controls.</p>
<p>This view has received further support from studies showing normal auditory N1 responses to pitch changes in amusics (Peretz et al.,
<xref rid="B55" ref-type="bibr">2005</xref>
; Moreau et al.,
<xref rid="B37" ref-type="bibr">2009</xref>
). The N1 is a negative-going ERP component that arises between 50 and 150 ms after stimulus onset (e.g., Näätänen and Picton,
<xref rid="B40" ref-type="bibr">1987</xref>
; Giard et al.,
<xref rid="B16" ref-type="bibr">1994</xref>
; Woods,
<xref rid="B75" ref-type="bibr">1995</xref>
). Its neural generators have been localized within the auditory cortex (Näätänen and Picton,
<xref rid="B40" ref-type="bibr">1987</xref>
), suggesting that this component reflects relatively early auditory processing. In contrast to the earlier findings on N1 responses, recent results by Jiang et al. (
<xref rid="B20" ref-type="bibr">2012</xref>
) and Albouy et al. (
<xref rid="B2" ref-type="bibr">2013</xref>
) indicate that pitch processing in amusics may indeed be impaired at early stages of processing, in that the N1 amplitude was significantly smaller for amusics than controls during intonation comprehension (Jiang et al.,
<xref rid="B20" ref-type="bibr">2012</xref>
) and melodic processing (Albouy et al.,
<xref rid="B2" ref-type="bibr">2013</xref>
). Impairments at such an early stage may have consequences for subsequent processes. However, it is unclear whether the pitch deficit exhibited by amusics may be compensated for with linguistic (semantic) cues, where processing takes place relatively late (i.e., ~300–400 ms; for reviews see Pylkkänen and Marantz,
<xref rid="B59" ref-type="bibr">2003</xref>
; Kutas and Federmeier,
<xref rid="B29" ref-type="bibr">2011</xref>
). However, findings from ERP research suggest that the emotional content of a (visually presented) word is accessed very early, within 100–200 ms after stimulus onset (e.g., Ortigue et al.,
<xref rid="B45" ref-type="bibr">2004</xref>
; Scott et al.,
<xref rid="B62" ref-type="bibr">2009</xref>
; Palazova et al.,
<xref rid="B46" ref-type="bibr">2011</xref>
; Kissler and Herbert,
<xref rid="B27" ref-type="bibr">2013</xref>
). Such early processing is thought to be possible via a fast subcortical (thalamo-amygdala) pathway (Morris et al.,
<xref rid="B38" ref-type="bibr">1999</xref>
). Therefore, the early access of emotional semantic information and its facilitative effect on conflict processing could help amusic perceivers overcome any difficulty in discriminating linguistic pitch.</p>
<p>To address this question, we presented emotional words spoken with intonation that indicated either a statement or a question, and recorded EEG responses in individuals with and without amusia. The linguistic content of the words had either a positive valence, such as “joy,” or a negative valence, such as “ugly.” The task was to judge whether two successively presented words were the same in intonation. If amusics make use of linguistic information to compensate for any impairment in intonation processing, they should perform as well as control participants on the intonation-matching task. However, emotional semantic cues may be insufficient to facilitate subsequent processing in amusic individuals. In this case, we would expect to see differences in brain activity between amusic and control participants within an early time window, such as that of the N1 component. Alternatively, early, implicit auditory processes may be intact in amusics and the observed pitch impairment may arise only at a later, explicit processing stage (e.g., N2). In this case, amusic participants should show comparable brain activity to normal controls within the early but not late time window.</p>
</sec>
<sec sec-type="materials|methods" id="s2">
<title>Materials and methods</title>
<sec>
<title>Participants</title>
<p>Twenty individuals with congenital amusia (17 females; age:
<italic>M</italic>
= 21.85 years,
<italic>SD</italic>
= 2.11 years; years of education:
<italic>M</italic>
= 15.25 years,
<italic>SD</italic>
= 2.10 years) and 22 matched control participants (16 females; age:
<italic>M</italic>
= 20.68 years,
<italic>SD</italic>
= 1.81 years; years of education:
<italic>M</italic>
= 14.32 years,
<italic>SD</italic>
= 1.25 years) were tested. All participants were Mandarin native speakers and right-handed. None reported any auditory, neurological, or psychiatric disorder. No one had taken private music lessons or other extracurricular music training beyond basic music education at school. All participants gave written informed consent prior to the study. The Ethics Committee of the Second Xiangya Hospital approved the experimental protocol. Participants with a mean global percentage correct lower than 71.7% in the Montreal Battery of Evaluation of Amusia (MBEA; Peretz et al.,
<xref rid="B56" ref-type="bibr">2003</xref>
) were classified as amusic; this cutoff corresponds to 2 SD below the mean score of the Chinese norms (Nan et al.,
<xref rid="B41" ref-type="bibr">2010</xref>
). The MBEA consists of three melodic pitch-based tests (Scale, Contour and Interval), two time-based tests (Rhythm and Meter) and one memory test (Memory). For the first four subtests, listeners are presented with pairs of melodies and asked to judge whether they are the “same” or “different.” For the last two subtests, listeners are presented with a single melody on each trial. For the Meter subtest, participants are required to judge whether the presented melody is a “March” or a “Waltz.” In the Memory subtest, participants are required to judge whether they have heard the presented melody in the preceding subtests. The results of the MBEA and its subtests for both groups are shown in Table
<xref ref-type="table" rid="T1">1</xref>
.</p>
<table-wrap id="T1" position="float">
<label>Table 1</label>
<caption>
<p>
<bold>Participants' mean proportion correct responses (standard deviations in parentheses) and independent-samples
<italic>t</italic>
-tests results on the MBEA and its subtests between amusic and control groups</bold>
.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th rowspan="1" colspan="1"></th>
<th align="center" rowspan="1" colspan="1">
<bold>Amusics (
<italic>n</italic>
= 20)</bold>
</th>
<th align="center" rowspan="1" colspan="1">
<bold>Controls (
<italic>n</italic>
= 22)</bold>
</th>
<th align="center" rowspan="1" colspan="1">
<bold>
<italic>t</italic>
-value</bold>
</th>
<th align="center" rowspan="1" colspan="1">
<bold>
<italic>p</italic>
-value (2-tailed)</bold>
</th>
<th align="center" rowspan="1" colspan="1">
<bold>Cohen's
<italic>d</italic>
</bold>
</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">Scale</td>
<td align="center" rowspan="1" colspan="1">0.63 (0.08)</td>
<td align="center" rowspan="1" colspan="1">0.92 (0.06)</td>
<td align="center" rowspan="1" colspan="1">13.42</td>
<td align="center" rowspan="1" colspan="1"><0.01</td>
<td align="center" rowspan="1" colspan="1">4.13</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Contour</td>
<td align="center" rowspan="1" colspan="1">0.66 (0.09)</td>
<td align="center" rowspan="1" colspan="1">0.93 (0.07)</td>
<td align="center" rowspan="1" colspan="1">10.92</td>
<td align="center" rowspan="1" colspan="1"><0.01</td>
<td align="center" rowspan="1" colspan="1">3.37</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Interval</td>
<td align="center" rowspan="1" colspan="1">0.59 (0.06)</td>
<td align="center" rowspan="1" colspan="1">0.89 (0.07)</td>
<td align="center" rowspan="1" colspan="1">15.79</td>
<td align="center" rowspan="1" colspan="1"><0.01</td>
<td align="center" rowspan="1" colspan="1">4.58</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Rhythm</td>
<td align="center" rowspan="1" colspan="1">0.71 (0.11)</td>
<td align="center" rowspan="1" colspan="1">0.91 (0.07)</td>
<td align="center" rowspan="1" colspan="1">7.36</td>
<td align="center" rowspan="1" colspan="1"><0.01</td>
<td align="center" rowspan="1" colspan="1">2.19</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Meter</td>
<td align="center" rowspan="1" colspan="1">0.64 (0.17)</td>
<td align="center" rowspan="1" colspan="1">0.85 (0.15)</td>
<td align="center" rowspan="1" colspan="1">4.19</td>
<td align="center" rowspan="1" colspan="1"><0.01</td>
<td align="center" rowspan="1" colspan="1">1.31</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Memory</td>
<td align="center" rowspan="1" colspan="1">0.72 (0.11)</td>
<td align="center" rowspan="1" colspan="1">0.96 (0.04)</td>
<td align="center" rowspan="1" colspan="1">8.88</td>
<td align="center" rowspan="1" colspan="1"><0.01</td>
<td align="center" rowspan="1" colspan="1">2.96</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Global score</td>
<td align="center" rowspan="1" colspan="1">0.66 (0.03)</td>
<td align="center" rowspan="1" colspan="1">0.91 (0.04)</td>
<td align="center" rowspan="1" colspan="1">22.66</td>
<td align="center" rowspan="1" colspan="1"><0.01</td>
<td align="center" rowspan="1" colspan="1">7.02</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p>
<italic>Individuals with amusia scored significantly lower than control participants on all subtests of the MBEA (ps < 0.01)</italic>
.</p>
</table-wrap-foot>
</table-wrap>
</sec>
<sec>
<title>Stimuli</title>
<p>The stimulus material consisted of a set of 40 disyllabic words from the Chinese Affective Words Categorize System (CAWCS; Xu et al.,
<xref rid="B76" ref-type="bibr">2008</xref>
), which comprises 230 positive (e.g., “joy,” “happy,” and “excited”) and negative (e.g., “ugly,” “depressed,” and “poor”) words. All words from the CAWCS were recorded by an adult male native speaker of Mandarin, who spoke each word once as a statement and once as a question. Seven Mandarin native speakers (5 females) rated on a five-point scale how clearly each recording was recognizable as a statement or a question (1 = definitely a statement, 5 = definitely a question). Twenty positive and twenty negative words were selected whose mean ratings were 2 or lower for the statement intonation and 3.5 or higher for the question intonation; these cutoffs correspond approximately to the 30th and 70th percentiles of the ratings, respectively. Independent-samples
<italic>t</italic>
-tests confirmed that the selected negative and positive words yielded similar mean rating scores in both statement and question conditions (
<italic>ps</italic>
> 0.35, see Table
<xref ref-type="table" rid="T2">2</xref>
). Additional one-sample
<italic>t</italic>
-tests indicated that the mean valence, arousal, and familiarity scores for the 40 selected words did not differ significantly from those of the 230 words in the CAWCS (
<italic>ps</italic>
> 0.1). However, a comparison of the selected positive and negative words revealed that the former were rated as more arousing and more familiar than the latter (
<italic>ps</italic>
< 0.01, see Table
<xref ref-type="table" rid="T3">3</xref>
)
<xref ref-type="fn" rid="fn0001">
<sup>1</sup>
</xref>
. Using a cross-splicing technique (for more details, see Patel et al.,
<xref rid="B51" ref-type="bibr">1998</xref>
), we ensured that the first syllables were acoustically identical and the durations of the second syllables were roughly equal. Figure
<xref ref-type="fig" rid="F1">1A</xref>
shows the spectrogram and pitch contours of a negative word spoken with a statement-intonation and a question-intonation. As in Jiang et al. (
<xref rid="B19" ref-type="bibr">2010</xref>
), each word was set to a total duration of 850 ms; that is, each syllable lasted 400 ms, with a 50 ms silence between the two syllables.</p>
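<p>For illustration only, the following sketch shows how a stimulus with the timing described above could be assembled. It is not part of the original study materials; the sampling rate and the waveform variables (syll1, syll2) are hypothetical.</p>
<preformat>
import numpy as np

# Illustrative sketch (assumed; not the authors' stimulus-editing script):
# assemble an 850 ms disyllabic stimulus from two 400 ms syllables separated
# by a 50 ms silence, matching the timing reported above.
def build_word(syll1, syll2, fs=44100, syll_ms=400, gap_ms=50):
    n_syll = int(fs * syll_ms / 1000)   # samples per 400 ms syllable
    n_gap = int(fs * gap_ms / 1000)     # samples in the 50 ms silence
    assert len(syll1) == n_syll and len(syll2) == n_syll
    return np.concatenate([syll1, np.zeros(n_gap), syll2])  # 850 ms in total
</preformat>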
<table-wrap id="T2" position="float">
<label>Table 2</label>
<caption>
<p>
<bold>The mean intonation rating of the selected words across 7 raters (standard deviations in parentheses) and the independent-samples
<italic>t</italic>
-tests results comparing positive and negative words</bold>
.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th rowspan="1" colspan="1"></th>
<th align="center" rowspan="1" colspan="1">
<bold>Positive words</bold>
</th>
<th align="center" rowspan="1" colspan="1">
<bold>Negative words</bold>
</th>
<th align="center" rowspan="1" colspan="1">
<bold>
<italic>t</italic>
-value</bold>
</th>
<th align="center" rowspan="1" colspan="1">
<bold>
<italic>p</italic>
-value (2-tailed)</bold>
</th>
<th align="center" rowspan="1" colspan="1">
<bold>Cohen's
<italic>d</italic>
</bold>
</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">Statement</td>
<td align="center" rowspan="1" colspan="1">1.25 (0.21)</td>
<td align="center" rowspan="1" colspan="1">1.19 (0.18)</td>
<td align="center" rowspan="1" colspan="1">0.94</td>
<td align="center" rowspan="1" colspan="1">0.35</td>
<td align="center" rowspan="1" colspan="1">0.31</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Question</td>
<td align="center" rowspan="1" colspan="1">4.18 (0.24)</td>
<td align="center" rowspan="1" colspan="1">4.19 (0.25)</td>
<td align="center" rowspan="1" colspan="1">0.18</td>
<td align="center" rowspan="1" colspan="1">0.86</td>
<td align="center" rowspan="1" colspan="1">0.04</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="T3" position="float">
<label>Table 3</label>
<caption>
<p>
<bold>The mean rating of valence, arousal and familiarity of the selected words (standard deviations in parentheses) and the independent-samples
<italic>t</italic>
-tests results on each dimension comparing positive and negative words</bold>
.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th rowspan="1" colspan="1"></th>
<th align="center" rowspan="1" colspan="1">
<bold>Positive words</bold>
</th>
<th align="center" rowspan="1" colspan="1">
<bold>Negative words</bold>
</th>
<th align="center" rowspan="1" colspan="1">
<bold>
<italic>t</italic>
-value</bold>
</th>
<th align="center" rowspan="1" colspan="1">
<bold>
<italic>p</italic>
-value (2-tailed)</bold>
</th>
<th align="center" rowspan="1" colspan="1">
<bold>Cohen's
<italic>d</italic>
</bold>
</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">Valence</td>
<td align="center" rowspan="1" colspan="1">7.28 (0.57)</td>
<td align="center" rowspan="1" colspan="1">2.72 (0.30)</td>
<td align="center" rowspan="1" colspan="1">31.57</td>
<td align="center" rowspan="1" colspan="1"><0.01</td>
<td align="center" rowspan="1" colspan="1">10.01</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Arousal</td>
<td align="center" rowspan="1" colspan="1">6.54 (0.50)</td>
<td align="center" rowspan="1" colspan="1">5.42 (0.42)</td>
<td align="center" rowspan="1" colspan="1">7.66</td>
<td align="center" rowspan="1" colspan="1"><0.01</td>
<td align="center" rowspan="1" colspan="1">2.43</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Familiarity</td>
<td align="center" rowspan="1" colspan="1">6.06 (0.66)</td>
<td align="center" rowspan="1" colspan="1">4.55 (0.52)</td>
<td align="center" rowspan="1" colspan="1">8.06</td>
<td align="center" rowspan="1" colspan="1"><0.01</td>
<td align="center" rowspan="1" colspan="1">2.54</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p>
<italic>The CAWCS employed an eight-point scale (1 = lowest, 8 = highest) to evaluate the valence, arousal, and familiarity of each word</italic>
.</p>
</table-wrap-foot>
</table-wrap>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption>
<p>
<bold>(A)</bold>
Spectrogram and pitch contours of a pair of stimuli used in the task and
<bold>(B)</bold>
the schematic trial timeline. The negative word “
<inline-graphic xlink:href="fpsyg-06-00385-i0001.jpg"></inline-graphic>
” (nan2kan4), which means “ugly,” is shown spoken as a statement (left panel) and as a question (right panel). The mean F
<sub>0</sub>
of statement- and question-intonation was identical in terms of the first syllable (positive words:
<italic>M</italic>
= 179.74 Hz,
<italic>SD</italic>
= 39.23 Hz; negative words:
<italic>M</italic>
= 194.35 Hz,
<italic>SD</italic>
= 80.56 Hz), but it was different in terms of the second syllable (positive statement:
<italic>M</italic>
= 170.57 Hz,
<italic>SD</italic>
= 62.76 Hz; positive question:
<italic>M</italic>
= 223.88 Hz,
<italic>SD</italic>
= 57.51 Hz; negative statement:
<italic>M</italic>
= 199.89 Hz,
<italic>SD</italic>
= 80.69 Hz; negative question:
<italic>M</italic>
= 226.64 Hz,
<italic>SD</italic>
= 64.77 Hz). All trials started with a 2000 Hz sinusoidal tone lasting 500 ms. After the presentation of the comparison word (two 400 ms syllables with a 50 ms silence between them), a 300 ms silence followed, and then the probe word was presented for 850 ms. During the task, participants were asked to fixate on a white cross on a black screen. At the end of each trial, they made a non-speeded response, pressing one of two keys to indicate whether the intonation of the comparison and probe words was the same or different.</p>
</caption>
<graphic xlink:href="fpsyg-06-00385-g0001"></graphic>
</fig>
</sec>
<sec>
<title>Procedure</title>
<p>Participants were seated in an electrically shielded and sound-attenuated room with dimmed light. They were asked to fixate on a white cross on a black CRT monitor screen. As illustrated in Figure
<xref ref-type="fig" rid="F1">1B</xref>
, each trial began with a 500 ms warning tone (a 2000 Hz sinusoid). Subsequently, a comparison word was presented, followed by an inter-stimulus interval (ISI) of 300 ms. Thereafter, participants heard the probe word and judged whether its intonation was the same as that of the comparison word by pressing one of two response keys. The auditory stimuli were presented binaurally via earphones at a comfortable listening level.</p>
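<p>The trial structure can be summarized as a simple event schedule. The sketch below is an assumed illustration rather than the actual Stim2 presentation script, and it omits any inter-event gaps that are not reported above.</p>
<preformat>
# Assumed sketch of the trial timeline: warning tone, comparison word,
# inter-stimulus interval, probe word, then a non-speeded response.
def trial_schedule(tone_ms=500, word_ms=850, isi_ms=300):
    events, t = [], 0
    for name, dur in [("warning_tone", tone_ms), ("comparison_word", word_ms),
                      ("isi", isi_ms), ("probe_word", word_ms)]:
        events.append((name, t, dur))   # (label, onset in ms, duration in ms)
        t += dur
    events.append(("response", t, None))  # same/different judgment, untimed
    return events
</preformat>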
</sec>
<sec>
<title>Experimental design</title>
<p>The experiment consisted of 2 blocks separated by a break. All trials were presented in a pseudo-randomized order. Each block consisted of 80 trials, half congruent and half incongruent in intonation (20 statement-statement, 20 statement-question, 20 question-question, and 20 question-statement pairs). Prior to testing, participants completed 4 practice trials to familiarize themselves with the stimuli and task. Feedback was provided in the practice trials but not in the experimental trials. For stimulus presentation and data collection, we used the software
<italic>Stim2</italic>
(Compumedics Neuroscan, USA).</p>
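<p>To make the trial counts concrete, one block's trial list could be generated as in the sketch below. This is a minimal stand-in: the pseudo-randomization constraints actually applied in Stim2 are not reported and are therefore replaced by a simple shuffle.</p>
<preformat>
import random

# Minimal sketch (assumed): one block of 80 trials, 20 per comparison-probe
# intonation pairing, shuffled as a stand-in for pseudo-randomization.
PAIR_TYPES = [("statement", "statement"), ("statement", "question"),
              ("question", "question"), ("question", "statement")]

def make_block(n_per_type=20, seed=None):
    trials = [pair for pair in PAIR_TYPES for _ in range(n_per_type)]
    random.Random(seed).shuffle(trials)
    return trials  # 80 (comparison, probe) intonation pairs
</preformat>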
</sec>
<sec>
<title>EEG recording and pre-processing</title>
<p>The EEG was recorded from a 32-electrode
<italic>Quick-cap</italic>
(standard 10–10 electrode system) with a
<italic>SynAmps RT</italic>
amplifier and the
<italic>SCAN</italic>
software from NeuroScan System (Compumedics Neuroscan, USA). The average of the left and right mastoids served as the reference during recording. Vertical and horizontal eye movements and blinks were monitored with 4 additional electrodes. All electrode impedances were kept below 5 kΩ during the experiment. An online bandpass filter of 0.05–50 Hz was used during the recording. The sampling rate was 500 Hz.</p>
<p>The EEG was processed in
<italic>MATLAB</italic>
(Version R2013b; MathWorks, USA) using the
<italic>EEGLAB</italic>
toolbox (Delorme and Makeig,
<xref rid="B13" ref-type="bibr">2004</xref>
). The data were first highpass filtered with a Windowed Sinc FIR Filter (Widmann and Schröger,
<xref rid="B71" ref-type="bibr">2012</xref>
) from the EEGLAB plugin
<italic>firfilt</italic>
(Version 1.5.3). The cutoff frequency was 2 Hz (Blackman window; filter order: 2750). An independent component analysis (ICA) was performed using the
<italic>runica</italic>
algorithm. Subsequently, an ICA-based method was used to identify ocular artifacts, such as eye movements and blinks (Mognon et al.,
<xref rid="B36" ref-type="bibr">2011</xref>
). Artifactual components were rejected and a lowpass Windowed Sinc FIR Filter with a 20 Hz cutoff frequency (Blackman window; filter order: 138) was applied. Epochs of -500 to 1450 ms from the onset of probe words were extracted and baseline corrected using the 500 ms pre-stimulus time period.</p>
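<p>A rough Python/SciPy equivalent of this pipeline is sketched below for readers who do not use MATLAB/EEGLAB. It is an assumed approximation: the two-pass zero-phase filtering (filtfilt) and the omitted ICA step differ from the one-pass firfilt filtering and the ICA-based artifact rejection actually used.</p>
<preformat>
import numpy as np
from scipy import signal

FS = 500  # Hz, sampling rate reported above

def preprocess(continuous, probe_onsets, fs=FS):
    # continuous: channels x samples array; probe_onsets: sample indices of probe-word onsets.
    # Windowed-sinc FIR filters with the reported cutoffs, Blackman windows,
    # and orders (order + 1 taps).
    hp = signal.firwin(2751, 2.0, window="blackman", pass_zero=False, fs=fs)
    lp = signal.firwin(139, 20.0, window="blackman", fs=fs)
    data = signal.filtfilt(hp, [1.0], continuous, axis=-1)   # 2 Hz highpass
    # ... ICA decomposition and ocular-component rejection would go here ...
    data = signal.filtfilt(lp, [1.0], data, axis=-1)         # 20 Hz lowpass
    # Epoch from -500 to 1450 ms around probe onset and subtract the mean of
    # the 500 ms pre-stimulus baseline.
    pre, post = int(0.5 * fs), int(1.45 * fs)
    epochs = np.stack([data[:, s - pre:s + post] for s in probe_onsets])
    return epochs - epochs[:, :, :pre].mean(axis=-1, keepdims=True)
</preformat>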
</sec>
<sec>
<title>ERP data analyses</title>
<p>Visual inspection of the grand averages revealed two pronounced negative ERP deflections in the following time windows: 120–180 ms and 250–320 ms after the onset of the second syllable of the probe word. These negativities likely reflect the N1 and N2 components, which typically peak within similar time windows (e.g., Pérez et al.,
<xref rid="B58" ref-type="bibr">2008</xref>
; Peretz et al.,
<xref rid="B54" ref-type="bibr">2009</xref>
; Astheimer and Sanders,
<xref rid="B5" ref-type="bibr">2011</xref>
). For statistical analysis, all electrodes except four outer scalp electrodes (T7, T8, O1, O2) were grouped into four regions of interest (ROIs): left-anterior (FP1, F3, FC3, F7, FT7), right-anterior (FP2, F4, FC4, F8, FT8), left-posterior (C3, CP3, P3, P7, TP7), and right-posterior (C4, CP4, P4, P8, TP8). The midline electrodes were analyzed separately and grouped into mid-anterior (FZ, FCZ, CZ) and mid-posterior (CPZ, PZ, OZ) sites. Mean amplitudes were computed for each region of interest and time window (Luck,
<xref rid="B34" ref-type="bibr">2005</xref>
). Separate repeated-measures ANOVAs were conducted for the N1 and N2 time windows. The factors entered into the ANOVAs were Group (control/amusic), Emotion (positive/negative), Congruence (congruent/incongruent intonation), LR (left/right), and AP (anterior/posterior). The factor LR was excluded from the analyses of the midline electrodes. The statistical results for the N1 and N2 time windows are summarized in the Supplementary Material. Partial
<italic>eta squared</italic>
and Cohen's
<italic>d</italic>
were used to evaluate the effect size for the ANOVAs and
<italic>t</italic>
-tests, respectively. Below, we report in detail only the main effects and interactions of interest (see Supplementary Tables
<xref ref-type="supplementary-material" rid="SM1">1</xref>
and
<xref ref-type="supplementary-material" rid="SM2">2</xref>
for full results).</p>
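<p>For concreteness, the mean amplitudes entering these ANOVAs can be computed as in the assumed sketch below (not the authors' analysis code). Note that the N1 and N2 windows are defined from the onset of the probe word's second syllable, which begins 450 ms into the probe (a 400 ms first syllable plus a 50 ms silence).</p>
<preformat>
import numpy as np

# Assumed sketch of the ROI mean-amplitude computation described above.
ROIS = {
    "left_anterior":   ["FP1", "F3", "FC3", "F7", "FT7"],
    "right_anterior":  ["FP2", "F4", "FC4", "F8", "FT8"],
    "left_posterior":  ["C3", "CP3", "P3", "P7", "TP7"],
    "right_posterior": ["C4", "CP4", "P4", "P8", "TP8"],
}
WINDOWS_MS = {"N1": (120, 180), "N2": (250, 320)}  # from second-syllable onset

def roi_mean(epochs, ch_names, roi, window, fs=500,
             epoch_start_ms=-500, syll2_onset_ms=450):
    # epochs: trials x channels x samples, time-locked to probe-word onset
    chan_idx = [ch_names.index(ch) for ch in ROIS[roi]]
    lo, hi = WINDOWS_MS[window]
    s0 = int((syll2_onset_ms + lo - epoch_start_ms) * fs / 1000)
    s1 = int((syll2_onset_ms + hi - epoch_start_ms) * fs / 1000)
    return epochs[:, chan_idx, s0:s1].mean(axis=(1, 2))  # one value per trial
</preformat>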
</sec>
</sec>
<sec sec-type="results" id="s3">
<title>Results</title>
<sec>
<title>Task performance</title>
<p>Participants' task performance was evaluated using d-prime (d')—a measure of discriminability or sensitivity (Macmillan and Creelman,
<xref rid="B35" ref-type="bibr">2005</xref>
). D-prime scores were calculated by subtracting the z-score that corresponds to the false-alarm rate from the z-score that corresponds to the hit rate. A standard correction was applied to hit and false-alarm rates of 0 or 1 by replacing them with 0.5/n and (n-0.5)/n, respectively, where n is the number of incongruent or congruent trials (Macmillan and Kaplan,
<xref rid="B35a" ref-type="bibr">1985</xref>
). A repeated-measures ANOVA was conducted on the d' scores with two factors: Group (control/amusic) and Emotion (positive/negative). The results revealed a significant main effect of Group,
<italic>F</italic>
<sub>(1, 40)</sub>
= 11.05,
<italic>p</italic>
< 0.01, η
<sup>2</sup>
= 0.22, but no significant main effect of Emotion,
<italic>F</italic>
<sub>(1, 40)</sub>
= 0.02,
<italic>p</italic>
> 0.90, η
<sup>2</sup>
< 0.01, nor a significant interaction between Emotion and Group,
<italic>F</italic>
<sub>(1, 40)</sub>
= 0.85,
<italic>p</italic>
>0.36, η
<sup>2</sup>
= 0.02. Inspection of the means revealed that individuals with amusia (positive words:
<italic>M</italic>
= 1.56,
<italic>SD</italic>
= 0.94; negative words:
<italic>M</italic>
= 1.63,
<italic>SD</italic>
= 1.01) made more errors than controls (positive words:
<italic>M</italic>
= 2.40,
<italic>SD</italic>
= 0.57; negative words:
<italic>M</italic>
= 2.34,
<italic>SD</italic>
= 0.56) in the matching task.</p>
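<p>For reference, the d' computation described at the start of this section follows the standard signal-detection formula; the sketch below (assumed, not the authors' code) makes the correction for hit or false-alarm rates of 0 or 1 explicit.</p>
<preformat>
from scipy.stats import norm

# Standard d-prime: z(hit rate) minus z(false-alarm rate), with extreme rates
# replaced by 0.5/n and (n - 0.5)/n before the z-transform, as described above.
def dprime(hits, n_incongruent, false_alarms, n_congruent):
    def corrected_rate(k, n):
        if k == 0:
            return 0.5 / n
        if k == n:
            return (n - 0.5) / n
        return k / n
    hr = corrected_rate(hits, n_incongruent)          # "different" responses to incongruent pairs
    far = corrected_rate(false_alarms, n_congruent)   # "different" responses to congruent pairs
    return norm.ppf(hr) - norm.ppf(far)
</preformat>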
</sec>
<sec>
<title>EEG results</title>
<sec>
<title>N1 (120–180 ms) time window</title>
<p>Individuals with amusia showed an N1 amplitude comparable to that of normal controls, as confirmed by the non-significant main effect of Group,
<italic>F</italic>
<sub>(1, 40)</sub>
= 0.68,
<italic>p</italic>
= 0.42, η
<sup>2</sup>
= 0.02. Furthermore, a significant Congruence × AP interaction was observed,
<italic>F</italic>
<sub>(1, 40)</sub>
= 5.23,
<italic>p</italic>
< 0.05, η
<sup>2</sup>
= 0.12. Similar to the control participants, amusic participants displayed a reduced N1 amplitude at posterior electrodes in response to incongruent intonation,
<italic>M</italic>
= −0.26,
<italic>SE</italic>
= 0.10, compared with congruent intonation,
<italic>M</italic>
= −0.43,
<italic>SE</italic>
= 0.08. This congruence effect appears to be constrained to the posterior electrodes, as paired-sample
<italic>t</italic>
-tests yielded a significant difference between the congruent and incongruent condition only at posterior,
<italic>t</italic>
<sub>(41)</sub>
= 2.70,
<italic>p</italic>
< 0.01,
<italic>d</italic>
= 0.42, but not anterior electrode sites,
<italic>t</italic>
<sub>(41)</sub>
= 0.24,
<italic>p</italic>
> 0.81,
<italic>d</italic>
= 0.04 (see Figure
<xref ref-type="fig" rid="F2">2</xref>
). Additionally, the ANOVA revealed a significant interaction involving Emotion, Group, LR, and AP,
<italic>F</italic>
<sub>(1, 40)</sub>
= 5.49,
<italic>p</italic>
< 0.05, η
<sup>2</sup>
= 0.12. When this complex interaction was unpacked by Emotion, we found a significant interaction between Group and AP for negative words,
<italic>F</italic>
<sub>(1, 40)</sub>
= 4.88,
<italic>p</italic>
< 0.05, η
<sup>2</sup>
= 0.11, but not for positive words,
<italic>F</italic>
<sub>(1, 40)</sub>
= 0.06,
<italic>p</italic>
> 0.81, η
<sup>2</sup>
< 0.01. Further analyses with paired-sample
<italic>t</italic>
-tests revealed that normal controls showed a significantly larger N1 response,
<italic>t</italic>
<sub>(21)</sub>
= 3.88,
<italic>p</italic>
< 0.01,
<italic>d</italic>
= 0.83, to negative words at anterior,
<italic>M</italic>
= −0.42,
<italic>SE</italic>
= 0.11, than posterior electrode sites,
<italic>M</italic>
= −0.14,
<italic>SE</italic>
= 0.11. No such topographical difference was found in the amusic group,
<italic>t</italic>
<sub>(19)</sub>
= 0.17,
<italic>p</italic>
> 0.86,
<italic>d</italic>
= 0.04, suggesting that the N1 was broadly distributed in this group (see Figure
<xref ref-type="fig" rid="F3">3</xref>
). A direct comparison between the amusic and control groups using an independent-samples
<italic>t</italic>
-test revealed a larger N1 response to negative words for amusic participants,
<italic>M</italic>
= −0.40,
<italic>SE</italic>
= 0.09, as compared to control participants,
<italic>M</italic>
= −0.14,
<italic>SE</italic>
= 0.11, at posterior electrode sites; however, this difference was only marginally significant,
<italic>t</italic>
<sub>(40)</sub>
= 1.88,
<italic>p</italic>
= 0.07,
<italic>d</italic>
= 0.29. At anterior electrode sites, the
<italic>t</italic>
-test yielded no significant difference (
<italic>p</italic>
> 0.97).</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption>
<p>
<bold>ERP results in response to congruent and incongruent intonations within N1 time window (120–180 ms) and N2 time window (250–320 ms). (A)</bold>
Grand-averaged ERPs at posterior electrode CP4 in response to congruent (blue line) and incongruent (red line) intonation for amusic (upper panel) and control (lower panel) participants. The time windows of the N1 and N2 are highlighted in yellow.
<bold>(B)</bold>
Topographic maps of average amplitude (μV) in the N1 and N2 time windows across all electrodes for amusics (upper panel) and controls (lower panel).
<bold>(C)</bold>
Mean amplitude averaged over the ROI of posterior electrode sites within N1 time window for congruent trials (blue bar) and incongruent trials (red bar).
<bold>(D)</bold>
Mean amplitude averaged over the ROI of all electrode sites within N2 time window for congruent trials (blue bar) and incongruent trials (red bar). Error bars represent 1 SEM.</p>
</caption>
<graphic xlink:href="fpsyg-06-00385-g0002"></graphic>
</fig>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption>
<p>
<bold>ERP results in response to positive and negative words within N1 time window (120–180 ms). (A)</bold>
Grand-averaged ERPs at posterior electrode PZ in response to positive (blue line) and negative (red line) words for amusic (upper panel) and control (lower panel) participants. The time window of the N1 is highlighted in yellow.
<bold>(B)</bold>
Topographic maps of average amplitude (μV) in N1 time window averaged over all electrodes.
<bold>(C)</bold>
Mean amplitude averaged over the ROI of anterior (blue bar) and posterior (red bar) electrode sites in response to negative words within N1 time window. Error bars represent 1 SEM.</p>
</caption>
<graphic xlink:href="fpsyg-06-00385-g0003"></graphic>
</fig>
</sec>
<sec>
<title>N2 (250–320 ms) time window</title>
<p>In contrast to the N1 time window, within the N2 time window amusic participants' ERPs in response to congruent and incongruent intonations differed from those of control participants (see Figure
<xref ref-type="fig" rid="F2">2</xref>
). This was confirmed by the repeated-measures ANOVAs, which yielded a significant Group difference in the ROIs as well as at the midline electrodes,
<italic>F</italic>
<sub>(1, 40)</sub>
= 6.35,
<italic>p</italic>
< 0.05, η
<sup>2</sup>
= 0.14, and
<italic>F</italic>
<sub>(1, 40)</sub>
= 6.43,
<italic>p</italic>
< 0.05, η
<sup>2</sup>
= 0.14, respectively. In addition, the Group factor showed a marginally significant interaction with the Congruence factor,
<italic>F</italic>
<sub>(1, 40)</sub>
= 3.87,
<italic>p</italic>
< 0.06, η
<sup>2</sup>
= 0.09, which was further analyzed with two independent-sample
<italic>t</italic>
-tests. The results revealed that amusic participants showed a smaller N2 amplitude,
<italic>M</italic>
= 0.08,
<italic>SE</italic>
= 0.09, than control participants,
<italic>M</italic>
= −0.32,
<italic>SE</italic>
= 0.08, in the incongruent condition,
<italic>t</italic>
<sub>(40)</sub>
= 3.28,
<italic>p</italic>
< 0.01,
<italic>d</italic>
= 0.51. No such difference was found in the congruent condition,
<italic>t</italic>
<sub>(40)</sub>
= 0.55,
<italic>p</italic>
> 0.58,
<italic>d</italic>
= 0.08. It should be noted, however, that although visual inspection indicated that control participants exhibited a larger N2 response to incongruent than to congruent probe words,
<italic>M</italic>
= −0.05,
<italic>SE</italic>
= 0.10, this difference was only marginally significant when probed with a paired-sample
<italic>t</italic>
-test,
<italic>t</italic>
<sub>(21)</sub>
= 1.92,
<italic>p</italic>
= 0.07,
<italic>d</italic>
= 0.41. Amusic participants showed no such trend toward the congruence effect,
<italic>t</italic>
<sub>(19)</sub>
= 0.73,
<italic>p</italic>
> 0.47,
<italic>d</italic>
= 0.16. Finally, there was a significant main effect of Emotion in the ROIs,
<italic>F</italic>
<sub>(1, 40)</sub>
= 17.63,
<italic>p</italic>
< 0.01, η
<sup>2</sup>
= 0.31, as well as at the midline electrodes,
<italic>F</italic>
<sub>(1, 40)</sub>
= 11.60,
<italic>p</italic>
< 0.01, η
<sup>2</sup>
= 0.23. The means computed across the ROIs point to a larger N2 amplitude for positive words,
<italic>M</italic>
= −0.20,
<italic>SE</italic>
= 0.06, as compared to negative words,
<italic>M</italic>
= −0.04,
<italic>SE</italic>
= 0.06. Emotion and Group did not interact significantly in the four ROIs,
<italic>F</italic>
<sub>(1, 40)</sub>
= 0.34,
<italic>p</italic>
= 0.56, η
<sup>2</sup>
= 0.01, suggesting that amusic and control participants showed a similar effect of Emotion (see Figure
<xref ref-type="fig" rid="F4">4</xref>
).</p>
<fig id="F4" position="float">
<label>Figure 4</label>
<caption>
<p>
<bold>ERP results in response to positive and negative words within N2 time window (250–320 ms). (A)</bold>
Grand-averaged ERPs at fronto-central electrode FZ in response to positive (blue line) and negative (red line) words for amusic (upper panel) and control (lower panel) participants. The time window of the N2 is highlighted in yellow.
<bold>(B)</bold>
Topographic maps of average amplitude (μV) in N2 time window averaged over all electrodes.
<bold>(C)</bold>
Mean amplitude averaged over the ROI of all electrode sites within N2 time window for positive (blue bar) and negative words (red bar). Error bars represent 1 SEM.</p>
</caption>
<graphic xlink:href="fpsyg-06-00385-g0004"></graphic>
</fig>
<p>In summary, our main findings showed that amusic participants made more errors than control participants in the intonation-matching task, despite the emotional content of the words presented. In terms of brain activity, both groups exhibited a similar N1 response to the conflicting intonations, as hypothesized (Peretz et al.,
<xref rid="B55" ref-type="bibr">2005</xref>
; Moreau et al.,
<xref rid="B37" ref-type="bibr">2009</xref>
). However, the N1 elicited by negative words was marginally larger in amusics than in controls at posterior electrode sites. Finally, when compared to controls, amusics showed a significantly reduced N2 amplitude in response to incongruent intonation.</p>
</sec>
</sec>
</sec>
<sec sec-type="discussion" id="s4">
<title>Discussion</title>
<p>The present study investigated three related questions. First, do individuals with congenital amusia show impairment in processing speech prosody? Second, can amusic participants make use of emotional information to compensate for any impairment in speech prosody processing? Third, does the impairment in pitch processing in amusia arise from an early or late stage of processing? To address these questions, we measured the brain activity of participants with and without congenital amusia using EEG. Participants were presented successively with pairs of positive (e.g., “joy”) or negative (e.g., “ugly”) spoken words. The pairs were congruent or incongruent in speech intonation, which indicated either a statement or a question. Participants were asked to indicate whether the two words in each pair had the same or different intonation.</p>
<p>As speakers of a tone language, Mandarin Chinese amusics may be sensitive to linguistic pitch owing to constant exposure to small pitch changes in daily communication (for a discussion, see Stewart and Walsh,
<xref rid="B65" ref-type="bibr">2002</xref>
; Stewart,
<xref rid="B64" ref-type="bibr">2006</xref>
). However, the present results indicate that amusic participants had difficulty discriminating between statements and questions. This finding is consistent with other evidence that Mandarin amusics exhibit mild deficits in intonation identification and discrimination in comparison with controls (Jiang et al.,
<xref rid="B19" ref-type="bibr">2010</xref>
). More generally, the failure in linguistic pitch discrimination among tone language speakers with amusia challenges the view that amusia is a disorder specific to musical pitch perception (Ayotte et al.,
<xref rid="B6" ref-type="bibr">2002</xref>
; Peretz et al.,
<xref rid="B53" ref-type="bibr">2002</xref>
), as the musical pitch impairment extended to the domain of language (see also Patel et al.,
<xref rid="B52" ref-type="bibr">2008</xref>
; Nguyen et al.,
<xref rid="B42" ref-type="bibr">2009</xref>
; Jiang et al.,
<xref rid="B19" ref-type="bibr">2010</xref>
; Liu et al.,
<xref rid="B33" ref-type="bibr">2010</xref>
; Nan et al.,
<xref rid="B41" ref-type="bibr">2010</xref>
; Tillmann et al.,
<xref rid="B68" ref-type="bibr">2011</xref>
). It should be emphasized, however, that there is considerable debate concerning the degree to which musical pitch impairment negatively impacts upon linguistic pitch perception. A number of studies have shown that linguistic pitch discrimination is significantly worse among amusics when semantic information is artificially removed (i.e., when only prosody is presented in non-speech analogs) than when natural speech is presented (e.g., Ayotte et al.,
<xref rid="B6" ref-type="bibr">2002</xref>
; Patel et al.,
<xref rid="B50" ref-type="bibr">2005</xref>
). This finding implies that amusic individuals can make use of semantic cues to compensate for their pitch deficit, as shown in Liu et al. (
<xref rid="B31" ref-type="bibr">2012</xref>
). In the present study, participants were provided with emotional semantic cues and were asked to match the intonation of negatively or positively valenced words. In order to perform this task successfully, the participants needed to be able to detect the conflict between the intonations of the comparison and probe words. Although it has been suggested that both positive and negative words can ease conflict processing (Kanske and Kotz,
<xref rid="B21" ref-type="bibr">2010</xref>
,
<xref rid="B22" ref-type="bibr">2011a</xref>
,
<xref rid="B23" ref-type="bibr">b</xref>
,
<xref rid="B24" ref-type="bibr">c</xref>
), thereby facilitating behavioral performance, our behavioral results revealed that the impairment in linguistic intonation discrimination among amusic individuals persisted when intonation was applied to words with positive or negative emotional valence. This finding suggests that emotional valence failed to facilitate pitch processing in individuals with amusia.</p>
<p>Correspondingly, we found the N2 elicited in conflict trials to be significantly reduced in amusics as compared with controls. As the amplitude of the N2 is typically larger in conflict than non-conflict trials (Nieuwenhuis et al.,
<xref rid="B43" ref-type="bibr">2003</xref>
), this finding further suggests that conflict processing was virtually absent in the amusic group. On the other hand, our ERP results revealed no impairment of emotion processing in amusic individuals. Both amusic and control groups exhibited a larger N2 amplitude for positive words as compared with negative words, which likely reflects the higher arousal level ascribed to the positive relative to the negative words employed in the experiment (see Table
<xref ref-type="table" rid="T3">3</xref>
and Supplementary Table
<xref ref-type="supplementary-material" rid="SM2">2</xref>
). These findings suggest that amusics' failure to discriminate between question and statement intonation arises from an impairment related to conflict processing, rather than from an inability to process emotional information. The abnormal N2 observed in the amusic group is in part consistent with the results by Peretz et al. (
<xref rid="B55" ref-type="bibr">2005</xref>
) who also reported abnormal brain activity within the N2 time window in amusic as compared with control participants. However, in contrast to the present study, Peretz et al. (
<xref rid="B55" ref-type="bibr">2005</xref>
) employed an oddball paradigm and found that the amusic brain “overreacted” to unexpected (infrequent) pitch changes by eliciting a larger N2 response than normal controls. Internally generated expectancy caused by stimulus probability has been shown to contribute to the N2 response (see Folstein and Van Petten,
<xref rid="B14" ref-type="bibr">2008</xref>
for a review). Therefore, the greater N2 amplitude in the amusic group observed by Peretz et al. (
<xref rid="B55" ref-type="bibr">2005</xref>
) may partially reflect processes related to expectancy. When, in a later study, the conflicting pitch (an out-of-key note) occurred more frequently and, hence, less unexpectedly, Peretz et al. (
<xref rid="B54" ref-type="bibr">2009</xref>
) observed, similar to the present findings, that controls but not amusics elicited a large N2 response to the conflicting pitch.</p>
<p>Contrary to our results for the N2 response, the reduction in N1 in response to incongruent intonation was similar in amusic and control participants. These results corroborate an earlier finding by Jiang et al. (
<xref rid="B20" ref-type="bibr">2012</xref>
), in which participants judged whether aurally presented discourse was semantically acceptable. The same pattern of N1 in the two groups suggests that the underlying process is normal in the amusic group (see also Peretz et al.,
<xref rid="B55" ref-type="bibr">2005</xref>
,
<xref rid="B54" ref-type="bibr">2009</xref>
; Moreau et al.,
<xref rid="B37" ref-type="bibr">2009</xref>
). However, other studies have reported an abnormal N1 response in amusics during intonation comprehension (Jiang et al.,
<xref rid="B20" ref-type="bibr">2012</xref>
) and melodic processing (Albouy et al.,
<xref rid="B2" ref-type="bibr">2013</xref>
). To reconcile these contradictory findings, Albouy et al. (
<xref rid="B2" ref-type="bibr">2013</xref>
) proposed that whether amusic participants show a normal or abnormal N1 may depend on task difficulty. Studies that reported a normal N1 used tasks that were relatively easy, such as a deviant tone detection task (Peretz et al.,
<xref rid="B55" ref-type="bibr">2005</xref>
,
<xref rid="B54" ref-type="bibr">2009</xref>
) or no task at all (Moreau et al.,
<xref rid="B37" ref-type="bibr">2009</xref>
). In contrast, Albouy et al. (
<xref rid="B2" ref-type="bibr">2013</xref>
) and Jiang et al. (
<xref rid="B20" ref-type="bibr">2012</xref>
) employed tasks in which participants had to match two melodies and to judge whether a speech intonation was appropriate or inappropriate given a certain discourse, respectively. These authors found the N1 in individuals with amusia to be abnormal. Our behavioral results suggest that the task we used was difficult for the amusic participants (see the above discussion). Yet, we found a normal N1 for the amusic group.</p>
<p>One explanation is that the emotional words used in the present study led to enhanced attention, which, in turn, improved pitch processing in amusic participants. This gave rise to a relatively normal N1 response, despite the observed task difficulty in the amusic group. It should be noted that, because neutral words were not included in this study, it is not possible to assess whether emotional valence benefited performance behaviorally. Nonetheless, for reasons that we will elucidate below, it is possible there was a small effect of emotional valence that was insufficient to boost amusic participants' task performance to the level of controls. As suggested by our ERP results, amusic participants were affected by negative words differently from normal controls at an early processing stage, i.e., the N1 time window. More specifically, we observed a larger N1 amplitude in the amusic group in comparison to the control group; however, this difference was only marginally significant and restricted to the posterior electrode sites. The auditory N1 has been shown to be modulated by selective attention and to increase in amplitude when perceivers direct their attention to the stimulus (e.g., Woldorff et al.,
<xref rid="B73" ref-type="bibr">1993</xref>
; Alho et al.,
<xref rid="B3" ref-type="bibr">1994</xref>
; for a review see, e.g., Schirmer and Kotz,
<xref rid="B61" ref-type="bibr">2006</xref>
). Thus, the larger N1 response displayed by the amusic participants could reflect enhanced attention to the negative words.</p>
<p>No significant group difference at either anterior or posterior electrode sites was found in the positive word condition. Negative stimuli have been shown to lead to better performance than positive stimuli (e.g., Hansen and Hansen,
<xref rid="B17" ref-type="bibr">1988</xref>
; Öhman et al.,
<xref rid="B44" ref-type="bibr">2001</xref>
), which suggests that negative stimuli are more effective than positive stimuli in capturing attention. This has often served as an argument in support of the “negativity bias” hypothesis, according to which we may have developed adaptive mechanisms for dealing with negative emotions (for a review, see Rozin and Royzman,
<xref rid="B60" ref-type="bibr">2001</xref>
). It should be noted that when examining the N1 response at anterior and posterior electrode sites within each group, we found in the control group a significantly larger N1 response to negative words at anterior than at posterior electrodes. In contrast, the amusic group showed comparable N1 amplitudes at both electrode sites. The broad scalp distribution of the N1 response displayed by amusic participants could indicate some additional activation of posterior brain areas that was not present in normal participants. Consistent with the notion of enhanced attention in the amusic group, these additional areas may be linked to attentional processes.</p>
<p>In short, our results suggest that amusics may process emotional words (negatively valenced words, in the present study) in a manner that differs from individuals without this impairment, potentially compensating for their disorder. However, this enhanced processing may not have been sufficient to improve the amusic participants' performance. Our failure to find a clear emotion effect in the behavioral and ERP data may be due to the low arousal level of the emotional words we used (e.g., “ugly”). In comparison, Kanske and Kotz (
<xref rid="B24" ref-type="bibr">2011c</xref>
), for instance, used words such as “terror,” which elicited clear emotion effects. This may also explain why we did not observe a “negativity bias” in our control group, as the negative words were lower in arousal when compared with positive words.</p>
<p>To interpret the N1 and N2 results together, we propose that the impairment in discriminating speech intonation observed among amusic individuals may arise from an inability to access information extracted at early processing stages. This inability, in turn, could reflect some disconnection between low-level and high-level processing. Conflict detection is generally thought to play a pivotal role in cognitive control. Following the detection of a conflict, perceivers presumably increase their attention and make “strategic adjustments in cognitive control” (Botvinick et al.,
<xref rid="B7" ref-type="bibr">2001</xref>
,
<xref rid="B8" ref-type="bibr">2004</xref>
), resulting in reduced interference in subsequent trials (Kerns et al.,
<xref rid="B25" ref-type="bibr">2004</xref>
). Therefore, a deficit in conflict detection can have severe consequences on behavior.</p>
<p>Many of the cognitive and social deficits associated with schizophrenia are believed to arise from impairments in conflict detection and cognitive control (Carter,
<xref rid="B10" ref-type="bibr">1998</xref>
). In normal perceivers, activation of the ACC is typically affected only by conflicting stimuli that are perceived consciously, not subliminally, whereas individuals with schizophrenia exhibit impaired conscious but normal subliminal priming (Dehaene et al.,
<xref rid="B11" ref-type="bibr">2003</xref>
). The situation in amusia, however, is unlike that in schizophrenia, in which the ACC is considered dysfunctional and the conscious control network is affected (Alain et al.,
<xref rid="B1" ref-type="bibr">2002</xref>
; Kerns et al.,
<xref rid="B26" ref-type="bibr">2005</xref>
). If a conflict in pitch cannot even be detected, amusic perceivers would not have an opportunity to become aware of the conflict, even though at a lower processing level, pitch discrimination is intact, as suggested by our N1 findings.</p>
<p>A recent study reported a similar dissociation between lexical tone identification and brainstem encoding of pitch in speech (Liu et al.,
<xref rid="B32" ref-type="bibr">2015</xref>
), which suggests that high-level linguistic pitch processing deficits in amusia operate independently of low-level brainstem functioning. We can only speculate that access to this low-level information is limited in individuals with amusia. Dehaene et al. (
<xref rid="B12" ref-type="bibr">2006</xref>
) have usefully distinguished “accessibility” from “access,” whereby some attended stimuli have the potential to gain access to conscious awareness (accessibility), but they are nonetheless not consciously accessed (access). Thus, it is possible that pitch information processed at an early stage is potentially accessible, but amusic individuals do not have conscious access to that information.</p>
<p>In conclusion, the present investigation provides further evidence that the pitch deficit associated with congenital amusia extends to the domain of language, corroborating the hypothesis that music and language processing share common mechanisms. Speaking a tone language, such as Mandarin Chinese, does not compensate for this deficit. However, in daily life, amusic perceivers may make use of other cues, such as linguistic information, to compensate for their impairment. Our results suggest that individuals with amusia are more sensitive to linguistic emotional information than normal participants and that this sensitivity has some influence on early stages of pitch processing (i.e., in the N1 time window). However, emotional modulations appear to be restricted to this early processing stage. At a later processing stage (i.e., in the N2 time window), amusic participants still exhibit impairments in detecting conflicting intonation. We suggest that this impairment stems from an inability to access information extracted at earlier processing stages (e.g., the N1 time window), reflecting a disconnection between low-level and high-level processing in this population. It should be noted that the effect sizes of the findings here are small, owing to the nature of the linguistic stimuli and a low EEG signal-to-noise ratio (20 trials per condition). Future investigations of these questions may benefit from a larger number of trials in each condition to increase the signal-to-noise ratio.</p>
<sec>
<title>Conflict of interest statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</sec>
</body>
<back>
<ack>
<p>This study was supported by the Postgraduate Innovation Foundation of Science and Technology of Central South University (2013zzts331) and by the National Natural Science Foundation of China (30570609). Manuscript preparation was also supported by a Discovery Grant from the Australian Research Council (DP130101084) awarded to WFT.</p>
</ack>
<fn-group>
<fn id="fn0001">
<p>
<sup>1</sup>
It should be noted that the positive and negative words used here differ in valence, arousal and familiarity (also see Kanske and Kotz,
<xref rid="B21" ref-type="bibr">2010</xref>
). However, the purpose of this study was to examine the extent to which emotional semantics as a whole are relevant to task performance in an amusic population.</p>
</fn>
</fn-group>
<sec sec-type="supplementary material" id="s5">
<title>Supplementary material</title>
<p>The Supplementary Material for this article can be found online at:
<ext-link ext-link-type="uri" xlink:href="http://www.frontiersin.org/journal/10.3389/fpsyg.2015.00385/abstract">http://www.frontiersin.org/journal/10.3389/fpsyg.2015.00385/abstract</ext-link>
</p>
<supplementary-material content-type="local-data" id="SM1">
<media xlink:href="Table1.PDF">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="SM2">
<media xlink:href="Table2.PDF">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data">
<media xlink:href="Table3.PDF">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
</sec>
<ref-list>
<title>References</title>
<ref id="B1">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Alain</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>McNeely</surname>
<given-names>H. E.</given-names>
</name>
<name>
<surname>He</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Christensen</surname>
<given-names>B. K.</given-names>
</name>
<name>
<surname>West</surname>
<given-names>R.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>Neurophysiological evidence of error-monitoring deficits in patients with schizophrenia</article-title>
.
<source>Cereb. Cortex</source>
<volume>12</volume>
,
<fpage>840</fpage>
<lpage>846</lpage>
.
<pub-id pub-id-type="doi">10.1093/cercor/12.8.840</pub-id>
<pub-id pub-id-type="pmid">12122032</pub-id>
</mixed-citation>
</ref>
<ref id="B2">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Albouy</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Mattout</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Bouet</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Maby</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Sanchez</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Aguera</surname>
<given-names>P. E.</given-names>
</name>
<etal></etal>
</person-group>
. (
<year>2013</year>
).
<article-title>Impaired pitch perception and memory in congenital amusia: the deficit starts in the auditory cortex</article-title>
.
<source>Brain</source>
<volume>136</volume>
,
<fpage>1639</fpage>
<lpage>1661</lpage>
.
<pub-id pub-id-type="doi">10.1093/brain/awt082</pub-id>
<pub-id pub-id-type="pmid">23616587</pub-id>
</mixed-citation>
</ref>
<ref id="B3">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Alho</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Teder</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Lavikainen</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Näätänen</surname>
<given-names>R.</given-names>
</name>
</person-group>
(
<year>1994</year>
).
<article-title>Strongly focused attention and auditory event-related potentials</article-title>
.
<source>Biol. Psychol</source>
.
<volume>38</volume>
,
<fpage>73</fpage>
<lpage>90</lpage>
.
<pub-id pub-id-type="doi">10.1016/0301-0511(94)90050-7</pub-id>
<pub-id pub-id-type="pmid">7999931</pub-id>
</mixed-citation>
</ref>
<ref id="B4">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Anvari</surname>
<given-names>S. H.</given-names>
</name>
<name>
<surname>Trainor</surname>
<given-names>L. J.</given-names>
</name>
<name>
<surname>Woodside</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Levy</surname>
<given-names>B. A.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>Relations among musical skills, phonological processing, and early reading ability in preschool children</article-title>
.
<source>J. Exp. Child. Psychol</source>
.
<volume>83</volume>
,
<fpage>111</fpage>
<lpage>130</lpage>
.
<pub-id pub-id-type="doi">10.1016/S0022-0965(02)00124-8</pub-id>
<pub-id pub-id-type="pmid">12408958</pub-id>
</mixed-citation>
</ref>
<ref id="B5">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Astheimer</surname>
<given-names>L. B.</given-names>
</name>
<name>
<surname>Sanders</surname>
<given-names>L. D.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Predictability affects early perceptual processing of word onsets in continuous speech</article-title>
.
<source>Neuropsychologia</source>
<volume>49</volume>
,
<fpage>3512</fpage>
<lpage>3516</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2011.08.014</pub-id>
<pub-id pub-id-type="pmid">21875609</pub-id>
</mixed-citation>
</ref>
<ref id="B6">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ayotte</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Peretz</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Hyde</surname>
<given-names>K.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>Congenital amusia: a group study of adults afflicted with a music-specific disorder</article-title>
.
<source>Brain</source>
<volume>125</volume>
,
<fpage>238</fpage>
<lpage>251</lpage>
.
<pub-id pub-id-type="doi">10.1093/brain/awf028</pub-id>
<pub-id pub-id-type="pmid">11844725</pub-id>
</mixed-citation>
</ref>
<ref id="B7">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Botvinick</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Braver</surname>
<given-names>T. S.</given-names>
</name>
<name>
<surname>Barch</surname>
<given-names>D. M.</given-names>
</name>
<name>
<surname>Carter</surname>
<given-names>C. S.</given-names>
</name>
<name>
<surname>Cohen</surname>
<given-names>J. D.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>Conflict monitoring and cognitive control</article-title>
.
<source>Psychol. Rev</source>
.
<volume>108</volume>
,
<fpage>624</fpage>
<lpage>652</lpage>
.
<pub-id pub-id-type="doi">10.1037/0033-295X.108.3.624</pub-id>
<pub-id pub-id-type="pmid">11488380</pub-id>
</mixed-citation>
</ref>
<ref id="B8">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Botvinick</surname>
<given-names>M. M.</given-names>
</name>
<name>
<surname>Cohen</surname>
<given-names>J. D.</given-names>
</name>
<name>
<surname>Carter</surname>
<given-names>C. S.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Conflict monitoring and anterior cingulate cortex: an update</article-title>
.
<source>Trends Cogn. Sci</source>
.
<volume>8</volume>
,
<fpage>539</fpage>
<lpage>546</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.tics.2004.10.003</pub-id>
<pub-id pub-id-type="pmid">15556023</pub-id>
</mixed-citation>
</ref>
<ref id="B9">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Botvinick</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Nystrom</surname>
<given-names>L. E.</given-names>
</name>
<name>
<surname>Fissell</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Carter</surname>
<given-names>C. S.</given-names>
</name>
<name>
<surname>Cohen</surname>
<given-names>J. D.</given-names>
</name>
</person-group>
(
<year>1999</year>
).
<article-title>Conflict monitoring versus selection-for-action in anterior cingulate cortex</article-title>
.
<source>Nature</source>
<volume>402</volume>
,
<fpage>179</fpage>
<lpage>181</lpage>
.
<pub-id pub-id-type="doi">10.1038/46035</pub-id>
<pub-id pub-id-type="pmid">10647008</pub-id>
</mixed-citation>
</ref>
<ref id="B10">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Carter</surname>
<given-names>C. S.</given-names>
</name>
</person-group>
(
<year>1998</year>
).
<article-title>Anterior cingulate cortex, error detection, and the online monitoring of performance</article-title>
.
<source>Science</source>
<volume>280</volume>
,
<fpage>747</fpage>
<lpage>749</lpage>
.
<pub-id pub-id-type="doi">10.1126/science.280.5364.747</pub-id>
<pub-id pub-id-type="pmid">9563953</pub-id>
</mixed-citation>
</ref>
<ref id="B10a">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Compton</surname>
<given-names>R. J.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>The interface between emotion and attention: a review of evidence from psychology and neuroscience</article-title>
.
<source>Behav. Cogn. Neurosci. Rev</source>
.
<volume>2</volume>
,
<fpage>115</fpage>
<lpage>129</lpage>
.
<pub-id pub-id-type="doi">10.1177/1534582303002002003</pub-id>
<pub-id pub-id-type="pmid">13678519</pub-id>
</mixed-citation>
</ref>
<ref id="B11">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dehaene</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Artiges</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Naccache</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Martelli</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Viard</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Schurhoff</surname>
<given-names>F.</given-names>
</name>
<etal></etal>
</person-group>
. (
<year>2003</year>
).
<article-title>Conscious and subliminal conflicts in normal subjects and patients with schizophrenia: the role of the anterior cingulate</article-title>
.
<source>Proc. Natl. Acad. Sci. U.S.A</source>
.
<volume>100</volume>
,
<fpage>13722</fpage>
<lpage>13727</lpage>
.
<pub-id pub-id-type="doi">10.1073/pnas.2235214100</pub-id>
<pub-id pub-id-type="pmid">14597698</pub-id>
</mixed-citation>
</ref>
<ref id="B12">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dehaene</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Changeux</surname>
<given-names>J. P.</given-names>
</name>
<name>
<surname>Naccache</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Sackur</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Sergent</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Conscious, preconscious, and subliminal processing: a testable taxonomy</article-title>
.
<source>Trends Cogn. Sci</source>
.
<volume>10</volume>
,
<fpage>204</fpage>
<lpage>211</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.tics.2006.03.007</pub-id>
<pub-id pub-id-type="pmid">16603406</pub-id>
</mixed-citation>
</ref>
<ref id="B13">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Delorme</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Makeig</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis</article-title>
.
<source>J. Neurosci. Methods</source>
<volume>134</volume>
,
<fpage>9</fpage>
<lpage>21</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.jneumeth.2003.10.009</pub-id>
<pub-id pub-id-type="pmid">15102499</pub-id>
</mixed-citation>
</ref>
<ref id="B14">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Folstein</surname>
<given-names>J. R.</given-names>
</name>
<name>
<surname>Van Petten</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Influence of cognitive control and mismatch on the N2 component of the ERP: a review</article-title>
.
<source>Psychophysiology</source>
<volume>45</volume>
,
<fpage>152</fpage>
<lpage>170</lpage>
.
<pub-id pub-id-type="doi">10.1111/j.1469-8986.2007.00602.x</pub-id>
<pub-id pub-id-type="pmid">17850238</pub-id>
</mixed-citation>
</ref>
<ref id="B15">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fox</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Lester</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Russo</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Bowles</surname>
<given-names>R. J.</given-names>
</name>
<name>
<surname>Pichler</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Dutton</surname>
<given-names>K.</given-names>
</name>
</person-group>
(
<year>2000</year>
).
<article-title>Facial expressions of emotion: are angry faces detected more efficiently?</article-title>
<source>Cogn. Emot</source>
.
<volume>14</volume>
,
<fpage>61</fpage>
<lpage>92</lpage>
.
<pub-id pub-id-type="doi">10.1080/026999300378996</pub-id>
<pub-id pub-id-type="pmid">17401453</pub-id>
</mixed-citation>
</ref>
<ref id="B16">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Giard</surname>
<given-names>M. H.</given-names>
</name>
<name>
<surname>Perrin</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Echallier</surname>
<given-names>J. F.</given-names>
</name>
<name>
<surname>Thevenet</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Froment</surname>
<given-names>J. C.</given-names>
</name>
<name>
<surname>Pernier</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>1994</year>
).
<article-title>Dissociation of temporal and frontal components in the human auditory N1 wave: a scalp current density and dipole model analysis</article-title>
.
<source>Electroencephalogr. Clin. Neurophysiol</source>
.
<volume>92</volume>
,
<fpage>238</fpage>
<lpage>252</lpage>
.
<pub-id pub-id-type="doi">10.1016/0168-5597(94)90067-1</pub-id>
<pub-id pub-id-type="pmid">7514993</pub-id>
</mixed-citation>
</ref>
<ref id="B17">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hansen</surname>
<given-names>C. H.</given-names>
</name>
<name>
<surname>Hansen</surname>
<given-names>R. D.</given-names>
</name>
</person-group>
(
<year>1988</year>
).
<article-title>Finding the face in the crowd: an anger superiority effect</article-title>
.
<source>J. Pers. Soc. Psychol</source>
.
<volume>54</volume>
,
<fpage>917</fpage>
<lpage>924</lpage>
.
<pub-id pub-id-type="doi">10.1037/0022-3514.54.6.917</pub-id>
<pub-id pub-id-type="pmid">3397866</pub-id>
</mixed-citation>
</ref>
<ref id="B18">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hutchins</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Gosselin</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Peretz</surname>
<given-names>I.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Identification of changes along a continuum of speech intonation is impaired in congenital amusia</article-title>
.
<source>Front. Psychol</source>
.
<volume>1</volume>
:
<issue>236</issue>
.
<pub-id pub-id-type="doi">10.3389/fpsyg.2010.00236</pub-id>
<pub-id pub-id-type="pmid">21833290</pub-id>
</mixed-citation>
</ref>
<ref id="B19">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jiang</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Hamm</surname>
<given-names>J. P.</given-names>
</name>
<name>
<surname>Lim</surname>
<given-names>V. K.</given-names>
</name>
<name>
<surname>Kirk</surname>
<given-names>I. J.</given-names>
</name>
<name>
<surname>Yang</surname>
<given-names>Y.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Processing melodic contour and speech intonation in congenital amusics with Mandarin Chinese</article-title>
.
<source>Neuropsychologia</source>
<volume>48</volume>
,
<fpage>2630</fpage>
<lpage>2639</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2010.05.009</pub-id>
<pub-id pub-id-type="pmid">20471406</pub-id>
</mixed-citation>
</ref>
<ref id="B20">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jiang</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Hamm</surname>
<given-names>J. P.</given-names>
</name>
<name>
<surname>Lim</surname>
<given-names>V. K.</given-names>
</name>
<name>
<surname>Kirk</surname>
<given-names>I. J.</given-names>
</name>
<name>
<surname>Yang</surname>
<given-names>Y.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Impaired categorical perception of lexical tones in Mandarin-speaking congenital amusics</article-title>
.
<source>Mem. Cognit</source>
.
<volume>40</volume>
,
<fpage>1109</fpage>
<lpage>1121</lpage>
.
<pub-id pub-id-type="doi">10.3758/s13421-012-0208-2</pub-id>
<pub-id pub-id-type="pmid">22549878</pub-id>
</mixed-citation>
</ref>
<ref id="B21">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kanske</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Kotz</surname>
<given-names>S. A.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Modulation of early conflict processing: N200 responses to emotional words in a flanker task</article-title>
.
<source>Neuropsychologia</source>
<volume>48</volume>
,
<fpage>3661</fpage>
<lpage>3664</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2010.07.021</pub-id>
<pub-id pub-id-type="pmid">20654636</pub-id>
</mixed-citation>
</ref>
<ref id="B22">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kanske</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Kotz</surname>
<given-names>S. A.</given-names>
</name>
</person-group>
(
<year>2011a</year>
).
<article-title>Positive emotion speeds up conflict processing: ERP responses in an auditory Simon task</article-title>
.
<source>Biol. Psychol</source>
.
<volume>87</volume>
,
<fpage>122</fpage>
<lpage>127</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.biopsycho.2011.02.018</pub-id>
<pub-id pub-id-type="pmid">21382438</pub-id>
</mixed-citation>
</ref>
<ref id="B23">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kanske</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Kotz</surname>
<given-names>S. A.</given-names>
</name>
</person-group>
(
<year>2011b</year>
).
<article-title>Conflict processing is modulated by positive emotion: ERP data from a flanker task</article-title>
.
<source>Behav. Brain. Res</source>
.
<volume>219</volume>
,
<fpage>382</fpage>
<lpage>386</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.bbr.2011.01.043</pub-id>
<pub-id pub-id-type="pmid">21295076</pub-id>
</mixed-citation>
</ref>
<ref id="B24">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kanske</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Kotz</surname>
<given-names>S. A.</given-names>
</name>
</person-group>
(
<year>2011c</year>
).
<article-title>Emotion speeds up conflict resolution: a new role for the ventral anterior cingulate cortex?</article-title>
<source>Cereb. Cortex</source>
<volume>21</volume>
,
<fpage>911</fpage>
<lpage>919</lpage>
.
<pub-id pub-id-type="doi">10.1093/cercor/bhq157</pub-id>
<pub-id pub-id-type="pmid">20732901</pub-id>
</mixed-citation>
</ref>
<ref id="B25">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kerns</surname>
<given-names>J. G.</given-names>
</name>
<name>
<surname>Cohen</surname>
<given-names>J. D.</given-names>
</name>
<name>
<surname>MacDonald</surname>
<given-names>A. W.</given-names>
</name>
<name>
<surname>Cho</surname>
<given-names>R. Y.</given-names>
</name>
<name>
<surname>Stenger</surname>
<given-names>V. A.</given-names>
</name>
<name>
<surname>Carter</surname>
<given-names>C. S.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Anterior cingulate conflict monitoring and adjustments in control</article-title>
.
<source>Science</source>
<volume>303</volume>
,
<fpage>1023</fpage>
<lpage>1026</lpage>
.
<pub-id pub-id-type="doi">10.1126/science.1089910</pub-id>
<pub-id pub-id-type="pmid">14963333</pub-id>
</mixed-citation>
</ref>
<ref id="B26">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kerns</surname>
<given-names>J. G.</given-names>
</name>
<name>
<surname>Cohen</surname>
<given-names>J. D.</given-names>
</name>
<name>
<surname>MacDonald</surname>
<given-names>A. W., III</given-names>
</name>
<name>
<surname>Johnson</surname>
<given-names>M. K.</given-names>
</name>
<name>
<surname>Stenger</surname>
<given-names>V. A.</given-names>
</name>
<name>
<surname>Aizenstein</surname>
<given-names>H.</given-names>
</name>
<etal></etal>
</person-group>
. (
<year>2005</year>
).
<article-title>Decreased conflict-and error-related activity in the anterior cingulate cortex in subjects with schizophrenia</article-title>
.
<source>Am. J. Psychiatry</source>
<volume>162</volume>
,
<fpage>1833</fpage>
<lpage>1839</lpage>
.
<pub-id pub-id-type="doi">10.1176/appi.ajp.162.10.1833</pub-id>
<pub-id pub-id-type="pmid">16199829</pub-id>
</mixed-citation>
</ref>
<ref id="B27">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kissler</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Herbert</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<article-title>Emotion, Etmnooi, or Emitoon?–Faster lexical access to emotional than to neutral words during reading</article-title>
.
<source>Biol. Psychol</source>
.
<volume>92</volume>
,
<fpage>464</fpage>
<lpage>479</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.biopsycho.2012.09.004</pub-id>
<pub-id pub-id-type="pmid">23059636</pub-id>
</mixed-citation>
</ref>
<ref id="B29">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kutas</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Federmeier</surname>
<given-names>K. D.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Thirty years and counting: finding meaning in the N400 component of the event-related brain potential (ERP)</article-title>
.
<source>Annu. Rev. Psychol</source>
.
<volume>62</volume>
,
<fpage>621</fpage>
<lpage>647</lpage>
.
<pub-id pub-id-type="doi">10.1146/annurev.psych.093008.131123</pub-id>
<pub-id pub-id-type="pmid">20809790</pub-id>
</mixed-citation>
</ref>
<ref id="B30">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lee</surname>
<given-names>C. Y.</given-names>
</name>
<name>
<surname>Hung</surname>
<given-names>T. H.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Identification of Mandarin tones by English-speaking musicians and nonmusicians</article-title>
.
<source>J. Acoust. Soc. Am</source>
.
<volume>124</volume>
,
<fpage>3235</fpage>
<lpage>3248</lpage>
.
<pub-id pub-id-type="doi">10.1121/1.2990713</pub-id>
<pub-id pub-id-type="pmid">19045807</pub-id>
</mixed-citation>
</ref>
<ref id="B31">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Liu</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Jiang</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Thompson</surname>
<given-names>W. F.</given-names>
</name>
<name>
<surname>Xu</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Yang</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Stewart</surname>
<given-names>L.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>The mechanism of speech processing in congenital amusia: evidence from Mandarin speakers</article-title>
.
<source>PLoS ONE</source>
<volume>7</volume>
:
<fpage>e30374</fpage>
.
<pub-id pub-id-type="doi">10.1371/journal.pone.0030374</pub-id>
<pub-id pub-id-type="pmid">22347374</pub-id>
</mixed-citation>
</ref>
<ref id="B32">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Liu</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Maggu</surname>
<given-names>A. R.</given-names>
</name>
<name>
<surname>Lau</surname>
<given-names>J. C. Y.</given-names>
</name>
<name>
<surname>Wong</surname>
<given-names>P. C. M.</given-names>
</name>
</person-group>
(
<year>2015</year>
).
<article-title>Brainstem encoding of speech and musical stimuli in congenital amusia: evidence from Cantonese speakers</article-title>
.
<source>Front. Hum. Neurosci</source>
.
<volume>8</volume>
:
<issue>1029</issue>
.
<pub-id pub-id-type="doi">10.3389/fnhum.2014.01029</pub-id>
<pub-id pub-id-type="pmid">25646077</pub-id>
</mixed-citation>
</ref>
<ref id="B33">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Liu</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Patel</surname>
<given-names>A. D.</given-names>
</name>
<name>
<surname>Fourcin</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Stewart</surname>
<given-names>L.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Intonation processing in congenital amusia: discrimination, identification and imitation</article-title>
.
<source>Brain</source>
<volume>133</volume>
,
<fpage>1682</fpage>
<lpage>1693</lpage>
.
<pub-id pub-id-type="doi">10.1093/brain/awq089</pub-id>
<pub-id pub-id-type="pmid">20418275</pub-id>
</mixed-citation>
</ref>
<ref id="B34">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Luck</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<source>An Introduction to the Event-Related Potential Technique</source>
.
<publisher-loc>Cambridge, MA</publisher-loc>
:
<publisher-name>MIT Press</publisher-name>
.</mixed-citation>
</ref>
<ref id="B35">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Macmillan</surname>
<given-names>N. A.</given-names>
</name>
<name>
<surname>Creelman</surname>
<given-names>C. D.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<source>Detection Theory: A User's Guide</source>
.
<publisher-loc>Mahwah, NJ</publisher-loc>
:
<publisher-name>Erlbaum</publisher-name>
.</mixed-citation>
</ref>
<ref id="B35a">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Macmillan</surname>
<given-names>N. A.</given-names>
</name>
<name>
<surname>Kaplan</surname>
<given-names>H. L.</given-names>
</name>
</person-group>
(
<year>1985</year>
).
<article-title>Detection theory analysis of group data: estimating sensitivity from average hit and false-alarm rates</article-title>
.
<source>Psychol. Bull</source>
.
<volume>98</volume>
,
<fpage>185</fpage>
<lpage>199</lpage>
.
<pub-id pub-id-type="doi">10.1037/0033-2909.98.1.185</pub-id>
<pub-id pub-id-type="pmid">4034817</pub-id>
</mixed-citation>
</ref>
<ref id="B36">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mognon</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Jovicich</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Bruzzone</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Buiatti</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>ADJUST: an automatic EEG artifact detector based on the joint use of spatial and temporal features</article-title>
.
<source>Psychophysiology</source>
<volume>48</volume>
,
<fpage>229</fpage>
<lpage>240</lpage>
.
<pub-id pub-id-type="doi">10.1111/j.1469-8986.2010.01061.x</pub-id>
<pub-id pub-id-type="pmid">20636297</pub-id>
</mixed-citation>
</ref>
<ref id="B37">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Moreau</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Jolicœur</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Peretz</surname>
<given-names>I.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Automatic brain responses to pitch changes in congenital amusia</article-title>
.
<source>Ann. N. Y. Acad. Sci</source>
.
<volume>1169</volume>
,
<fpage>191</fpage>
<lpage>194</lpage>
.
<pub-id pub-id-type="doi">10.1111/j.1749-6632.2009.04775.x</pub-id>
<pub-id pub-id-type="pmid">19673779</pub-id>
</mixed-citation>
</ref>
<ref id="B38">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Morris</surname>
<given-names>J. S.</given-names>
</name>
<name>
<surname>Öhman</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Dolan</surname>
<given-names>R. J.</given-names>
</name>
</person-group>
(
<year>1999</year>
).
<article-title>A subcortical pathway to the right amygdala mediating “unseen” fear</article-title>
.
<source>Proc. Natl. Acad. Sci. U.S.A</source>
.
<volume>96</volume>
,
<fpage>1680</fpage>
<lpage>1685</lpage>
.
<pub-id pub-id-type="doi">10.1073/pnas.96.4.1680</pub-id>
<pub-id pub-id-type="pmid">9990084</pub-id>
</mixed-citation>
</ref>
<ref id="B39">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Musacchia</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Sams</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Skoe</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Kraus</surname>
<given-names>N.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Musicians have enhanced subcortical auditory and audiovisual processing of speech and music</article-title>
.
<source>Proc. Natl. Acad. Sci. U.S.A</source>
.
<volume>104</volume>
,
<fpage>15894</fpage>
<lpage>15898</lpage>
.
<pub-id pub-id-type="doi">10.1073/pnas.0701498104</pub-id>
<pub-id pub-id-type="pmid">17898180</pub-id>
</mixed-citation>
</ref>
<ref id="B40">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Näätänen</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Picton</surname>
<given-names>T.</given-names>
</name>
</person-group>
(
<year>1987</year>
).
<article-title>The N1 wave of the human electric and magnetic response to sound: a review and an analysis of the component structure</article-title>
.
<source>Psychophysiology</source>
<volume>24</volume>
,
<fpage>375</fpage>
<lpage>425</lpage>
.
<pub-id pub-id-type="doi">10.1111/j.1469-8986.1987.tb00311.x</pub-id>
<pub-id pub-id-type="pmid">3615753</pub-id>
</mixed-citation>
</ref>
<ref id="B41">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nan</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Sun</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Peretz</surname>
<given-names>I.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Congenital amusia in speakers of a tone language: association with lexical tone agnosia</article-title>
.
<source>Brain</source>
<volume>133</volume>
,
<fpage>2635</fpage>
<lpage>2642</lpage>
.
<pub-id pub-id-type="doi">10.1093/brain/awq178</pub-id>
<pub-id pub-id-type="pmid">20685803</pub-id>
</mixed-citation>
</ref>
<ref id="B42">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nguyen</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Tillmann</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Gosselin</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Peretz</surname>
<given-names>I.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Tonal language processing in congenital amusia</article-title>
.
<source>Ann. N. Y. Acad. Sci</source>
.
<volume>1169</volume>
,
<fpage>490</fpage>
<lpage>493</lpage>
.
<pub-id pub-id-type="doi">10.1111/j.1749-6632.2009.04855.x</pub-id>
<pub-id pub-id-type="pmid">19673828</pub-id>
</mixed-citation>
</ref>
<ref id="B43">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nieuwenhuis</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Yeung</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Van Den Wildenberg</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Ridderinkhof</surname>
<given-names>K. R.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Electrophysiological correlates of anterior cingulate function in a go/no-go task: effects of response conflict and trial type frequency</article-title>
.
<source>Cogn. Affect. Behav. Neurosci</source>
.
<volume>3</volume>
,
<fpage>17</fpage>
<lpage>26</lpage>
.
<pub-id pub-id-type="doi">10.3758/CABN.3.1.17</pub-id>
<pub-id pub-id-type="pmid">12822595</pub-id>
</mixed-citation>
</ref>
<ref id="B44">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Öhman</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Lundqvist</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Esteves</surname>
<given-names>F.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>The face in the crowd revisited: a threat advantage with schematic stimuli</article-title>
.
<source>J. Pers. Soc. Psychol</source>
.
<volume>80</volume>
,
<fpage>381</fpage>
<lpage>396</lpage>
.
<pub-id pub-id-type="doi">10.1037/0022-3514.80.3.381</pub-id>
<pub-id pub-id-type="pmid">11300573</pub-id>
</mixed-citation>
</ref>
<ref id="B45">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ortigue</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Michel</surname>
<given-names>C. M.</given-names>
</name>
<name>
<surname>Murray</surname>
<given-names>M. M.</given-names>
</name>
<name>
<surname>Mohr</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Carbonnel</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Landis</surname>
<given-names>T.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Electrical neuroimaging reveals early generator modulation to emotional words</article-title>
.
<source>NeuroImage</source>
<volume>21</volume>
,
<fpage>1242</fpage>
<lpage>1251</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.neuroimage.2003.11.007</pub-id>
<pub-id pub-id-type="pmid">15050552</pub-id>
</mixed-citation>
</ref>
<ref id="B46">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Palazova</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Mantwill</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Sommer</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Schacht</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Are effects of emotion in single words non-lexical? Evidence from event-related brain potentials</article-title>
.
<source>Neuropsychologia</source>
<volume>49</volume>
,
<fpage>2766</fpage>
<lpage>2775</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2011.06.005</pub-id>
<pub-id pub-id-type="pmid">21684295</pub-id>
</mixed-citation>
</ref>
<ref id="B47">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Patel</surname>
<given-names>A. D.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Language, music, syntax and the brain</article-title>
.
<source>Nat. Neurosci</source>
.
<volume>6</volume>
,
<fpage>674</fpage>
<lpage>681</lpage>
.
<pub-id pub-id-type="doi">10.1038/nn1082</pub-id>
<pub-id pub-id-type="pmid">12830158</pub-id>
</mixed-citation>
</ref>
<ref id="B48">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Patel</surname>
<given-names>A. D.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<source>Music, Language and the Brain</source>
.
<publisher-loc>New York, NY</publisher-loc>
:
<publisher-name>Oxford University Press</publisher-name>
.</mixed-citation>
</ref>
<ref id="B49">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Patel</surname>
<given-names>A. D.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Why would musical training benefit the neural encoding of speech? The OPERA hypothesis</article-title>
.
<source>Front. Psychol</source>
.
<volume>2</volume>
:
<issue>142</issue>
.
<pub-id pub-id-type="doi">10.3389/fpsyg.2011.00142</pub-id>
<pub-id pub-id-type="pmid">21747773</pub-id>
</mixed-citation>
</ref>
<ref id="B50">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Patel</surname>
<given-names>A. D.</given-names>
</name>
<name>
<surname>Foxton</surname>
<given-names>J. M.</given-names>
</name>
<name>
<surname>Griffiths</surname>
<given-names>T. D.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Musically tone-deaf individuals have difficulty discriminating intonation contours extracted from speech</article-title>
.
<source>Brain Cogn</source>
.
<volume>59</volume>
,
<fpage>310</fpage>
<lpage>313</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.bandc.2004.10.003</pub-id>
<pub-id pub-id-type="pmid">16337871</pub-id>
</mixed-citation>
</ref>
<ref id="B51">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Patel</surname>
<given-names>A. D.</given-names>
</name>
<name>
<surname>Peretz</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Tramo</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Labrecque</surname>
<given-names>R.</given-names>
</name>
</person-group>
(
<year>1998</year>
).
<article-title>Processing prosodic and musical patterns: a neuropsychological investigation</article-title>
.
<source>Brain Lang</source>
.
<volume>61</volume>
,
<fpage>123</fpage>
<lpage>144</lpage>
.
<pub-id pub-id-type="doi">10.1006/brln.1997.1862</pub-id>
<pub-id pub-id-type="pmid">9448936</pub-id>
</mixed-citation>
</ref>
<ref id="B52">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Patel</surname>
<given-names>A. D.</given-names>
</name>
<name>
<surname>Wong</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Foxton</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Lochy</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Peretz</surname>
<given-names>I.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Speech intonation perception deficits in musical tone deafness (congenital amusia)</article-title>
.
<source>Music Percept</source>
.
<volume>25</volume>
,
<fpage>357</fpage>
<lpage>368</lpage>
<pub-id pub-id-type="doi">10.1525/mp.2008.25.4.357</pub-id>
</mixed-citation>
</ref>
<ref id="B53">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Peretz</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Ayotte</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Zatorre</surname>
<given-names>R. J.</given-names>
</name>
<name>
<surname>Mehler</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Ahad</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Penhune</surname>
<given-names>V. B.</given-names>
</name>
<etal></etal>
</person-group>
. (
<year>2002</year>
).
<article-title>Congenital amusia: a disorder of fine-grained pitch discrimination</article-title>
.
<source>Neuron</source>
<volume>33</volume>
,
<fpage>185</fpage>
<lpage>191</lpage>
.
<pub-id pub-id-type="doi">10.1016/S0896-6273(01)00580-3</pub-id>
<pub-id pub-id-type="pmid">11804567</pub-id>
</mixed-citation>
</ref>
<ref id="B54">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Peretz</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Brattico</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Jarvenpaa</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Tervaniemi</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>The amusic brain: in tune, out of key, and unaware</article-title>
.
<source>Brain</source>
<volume>132</volume>
,
<fpage>1277</fpage>
<lpage>1286</lpage>
.
<pub-id pub-id-type="doi">10.1093/brain/awp055</pub-id>
<pub-id pub-id-type="pmid">19336462</pub-id>
</mixed-citation>
</ref>
<ref id="B55">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Peretz</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Brattico</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Tervaniemi</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Abnormal electrical brain responses to pitch in congenital amusia</article-title>
.
<source>Ann. Neurol</source>
.
<volume>58</volume>
,
<fpage>478</fpage>
<lpage>482</lpage>
.
<pub-id pub-id-type="doi">10.1002/ana.20606</pub-id>
<pub-id pub-id-type="pmid">16130110</pub-id>
</mixed-citation>
</ref>
<ref id="B56">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Peretz</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Champod</surname>
<given-names>A. S.</given-names>
</name>
<name>
<surname>Hyde</surname>
<given-names>K.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Varieties of musical disorders: the Montreal Battery of Evaluation of Amusia</article-title>
.
<source>Ann. N.Y. Acad. Sci</source>
.
<volume>999</volume>
,
<fpage>58</fpage>
<lpage>75</lpage>
.
<pub-id pub-id-type="doi">10.1196/annals.1284.006</pub-id>
<pub-id pub-id-type="pmid">14681118</pub-id>
</mixed-citation>
</ref>
<ref id="B58">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pérez</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Meyer</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Harrison</surname>
<given-names>N.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Neural correlates of attending speech and non-speech: ERPs associated with duplex perception</article-title>
.
<source>J. Neurolinguist</source>
.
<volume>21</volume>
,
<fpage>452</fpage>
<lpage>471</lpage>
<pub-id pub-id-type="doi">10.1016/j.jneuroling.2007.12.001</pub-id>
</mixed-citation>
</ref>
<ref id="B59">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pylkkänen</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Marantz</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Tracking the time course of word recognition with MEG</article-title>
.
<source>Trends Cogn. Sci</source>
.
<volume>7</volume>
,
<fpage>187</fpage>
<lpage>189</lpage>
.
<pub-id pub-id-type="doi">10.1016/S1364-6613(03)00092-5</pub-id>
<pub-id pub-id-type="pmid">12757816</pub-id>
</mixed-citation>
</ref>
<ref id="B60">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rozin</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Royzman</surname>
<given-names>E. B.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>Negativity bias, negativity dominance, and contagion</article-title>
.
<source>Pers. Soc. Psychol. Rev</source>
.
<volume>5</volume>
,
<fpage>296</fpage>
<lpage>320</lpage>
<pub-id pub-id-type="doi">10.1207/S15327957PSPR0504_2</pub-id>
</mixed-citation>
</ref>
<ref id="B61">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Schirmer</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Kotz</surname>
<given-names>S. A.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Beyond the right hemisphere: brain mechanisms mediating vocal emotional processing</article-title>
.
<source>Trends Cogn. Sci</source>
.
<volume>10</volume>
,
<fpage>24</fpage>
<lpage>30</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.tics.2005.11.009</pub-id>
<pub-id pub-id-type="pmid">16321562</pub-id>
</mixed-citation>
</ref>
<ref id="B62">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Scott</surname>
<given-names>G. G.</given-names>
</name>
<name>
<surname>O'Donnell</surname>
<given-names>P. J.</given-names>
</name>
<name>
<surname>Leuthold</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Sereno</surname>
<given-names>S. C.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Early emotion word processing: evidence from event-related potentials</article-title>
.
<source>Biol. Psychol</source>
.
<volume>80</volume>
,
<fpage>95</fpage>
<lpage>104</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.biopsycho.2008.03.010</pub-id>
<pub-id pub-id-type="pmid">18440691</pub-id>
</mixed-citation>
</ref>
<ref id="B63">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Selkirk</surname>
<given-names>E.</given-names>
</name>
</person-group>
(
<year>1995</year>
).
<article-title>Sentence prosody: intonation, stress and phrasing</article-title>
, in
<source>The Handbook of Phonological Theory</source>
, ed
<person-group person-group-type="editor">
<name>
<surname>Goldsmith</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<publisher-loc>Boston</publisher-loc>
:
<publisher-name>Blackwell</publisher-name>
),
<fpage>551</fpage>
<lpage>567</lpage>
.</mixed-citation>
</ref>
<ref id="B64">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Stewart</surname>
<given-names>L.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Congenital amusia</article-title>
.
<source>Curr. Biol</source>
.
<volume>16</volume>
,
<fpage>R904</fpage>
<lpage>R906</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.cub.2006.09.054</pub-id>
<pub-id pub-id-type="pmid">17084682</pub-id>
</mixed-citation>
</ref>
<ref id="B65">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Stewart</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Walsh</surname>
<given-names>V.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>Congenital amusia: all the songs sound the same</article-title>
.
<source>Curr. Biol</source>
.
<volume>12</volume>
,
<fpage>R420</fpage>
<lpage>R421</lpage>
.
<pub-id pub-id-type="doi">10.1016/S0960-9822(02)00913-2</pub-id>
<pub-id pub-id-type="pmid">12123591</pub-id>
</mixed-citation>
</ref>
<ref id="B66">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Thompson</surname>
<given-names>W. F.</given-names>
</name>
<name>
<surname>Marin</surname>
<given-names>M. M.</given-names>
</name>
<name>
<surname>Stewart</surname>
<given-names>L.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Reduced sensitivity to emotional prosody in congenital amusia rekindles the musical protolanguage hypothesis</article-title>
.
<source>Proc. Natl. Acad. Sci. U.S.A</source>
.
<volume>109</volume>
,
<fpage>19027</fpage>
<lpage>19032</lpage>
.
<pub-id pub-id-type="doi">10.1073/pnas.1210344109</pub-id>
<pub-id pub-id-type="pmid">23112175</pub-id>
</mixed-citation>
</ref>
<ref id="B67">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Thompson</surname>
<given-names>W. F.</given-names>
</name>
<name>
<surname>Schellenberg</surname>
<given-names>E. G.</given-names>
</name>
<name>
<surname>Husain</surname>
<given-names>G.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Decoding speech prosody: do music lessons help?</article-title>
<source>Emotion</source>
<volume>4</volume>
,
<fpage>46</fpage>
<lpage>64</lpage>
.
<pub-id pub-id-type="doi">10.1037/1528-3542.4.1.46</pub-id>
<pub-id pub-id-type="pmid">15053726</pub-id>
</mixed-citation>
</ref>
<ref id="B68">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tillmann</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Burnham</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Nguyen</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Grimault</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Gosselin</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Peretz</surname>
<given-names>I.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Congenital amusia (or tone-deafness) interferes with pitch processing in tone languages</article-title>
.
<source>Front. Psychol</source>
.
<volume>2</volume>
:
<issue>120</issue>
.
<pub-id pub-id-type="doi">10.3389/fpsyg.2011.00120</pub-id>
<pub-id pub-id-type="pmid">21734894</pub-id>
</mixed-citation>
</ref>
<ref id="B69">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Van Veen</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Carter</surname>
<given-names>C. S.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>The anterior cingulate as a conflict monitor: fMRI and ERP studies</article-title>
.
<source>Physiol. Behav</source>
.
<volume>77</volume>
,
<fpage>477</fpage>
<lpage>482</lpage>
.
<pub-id pub-id-type="doi">10.1016/S0031-9384(02)00930-7</pub-id>
<pub-id pub-id-type="pmid">12526986</pub-id>
</mixed-citation>
</ref>
<ref id="B70">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Vuilleumier</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>How brains beware: neural mechanisms of emotional attention</article-title>
.
<source>Trends Cogn. Sci</source>
.
<volume>9</volume>
,
<fpage>585</fpage>
<lpage>594</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.tics.2005.10.011</pub-id>
<pub-id pub-id-type="pmid">16289871</pub-id>
</mixed-citation>
</ref>
<ref id="B71">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Widmann</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Schröger</surname>
<given-names>E.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Filter effects and filter artifacts in the analysis of electrophysiological data</article-title>
.
<source>Front. Psychol</source>
.
<volume>3</volume>
:
<issue>233</issue>
.
<pub-id pub-id-type="doi">10.3389/fpsyg.2012.00233</pub-id>
<pub-id pub-id-type="pmid">22787453</pub-id>
</mixed-citation>
</ref>
<ref id="B72">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Williams</surname>
<given-names>J. M. G.</given-names>
</name>
<name>
<surname>Mathews</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>MacLeod</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>1996</year>
).
<article-title>The emotional Stroop task and psychopathology</article-title>
.
<source>Psychol. Bull</source>
.
<volume>120</volume>
,
<fpage>3</fpage>
<lpage>24</lpage>
.
<pub-id pub-id-type="doi">10.1037/0033-2909.120.1.3</pub-id>
<pub-id pub-id-type="pmid">8711015</pub-id>
</mixed-citation>
</ref>
<ref id="B73">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Woldorff</surname>
<given-names>M. G.</given-names>
</name>
<name>
<surname>Gallen</surname>
<given-names>C. C.</given-names>
</name>
<name>
<surname>Hampson</surname>
<given-names>S. A.</given-names>
</name>
<name>
<surname>Hillyard</surname>
<given-names>S. A.</given-names>
</name>
<name>
<surname>Pantev</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Sobel</surname>
<given-names>D.</given-names>
</name>
<etal></etal>
</person-group>
. (
<year>1993</year>
).
<article-title>Modulation of early sensory processing in human auditory cortex during auditory selective attention</article-title>
.
<source>Proc. Natl. Acad. Sci. U.S.A</source>
.
<volume>90</volume>
,
<fpage>8722</fpage>
<lpage>8726</lpage>
.
<pub-id pub-id-type="doi">10.1073/pnas.90.18.8722</pub-id>
<pub-id pub-id-type="pmid">8378354</pub-id>
</mixed-citation>
</ref>
<ref id="B74">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wong</surname>
<given-names>P. C.</given-names>
</name>
<name>
<surname>Skoe</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Russo</surname>
<given-names>N. M.</given-names>
</name>
<name>
<surname>Dees</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Kraus</surname>
<given-names>N.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Musical experience shapes human brainstem encoding of linguistic pitch patterns</article-title>
.
<source>Nat. Neurosci</source>
.
<volume>10</volume>
,
<fpage>420</fpage>
<lpage>422</lpage>
.
<pub-id pub-id-type="doi">10.1038/nn1872</pub-id>
<pub-id pub-id-type="pmid">17351633</pub-id>
</mixed-citation>
</ref>
<ref id="B75">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Woods</surname>
<given-names>D. L.</given-names>
</name>
</person-group>
(
<year>1995</year>
).
<article-title>The component structure of the N1 wave of the human auditory evoked potential</article-title>
.
<source>Electroencephalogr. Clin. Neurophysiol. Suppl</source>
.
<volume>44</volume>
,
<fpage>102</fpage>
<lpage>109</lpage>
.
<pub-id pub-id-type="pmid">7649012</pub-id>
</mixed-citation>
</ref>
<ref id="B76">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Xu</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Yin</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Wu</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Initial establishment of the Chinese Affective Words Categorize System used in research of emotional disorder</article-title>
.
<source>Chin. Mental Health J</source>
.
<volume>22</volume>
,
<fpage>770</fpage>
<lpage>774</lpage>
. [In Chinese].</mixed-citation>
</ref>
<ref id="B77">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Yeung</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Botvinick</surname>
<given-names>M. M.</given-names>
</name>
<name>
<surname>Cohen</surname>
<given-names>J. D.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>The neural basis of error detection: conflict monitoring and the error-related negativity</article-title>
.
<source>Psychol. Rev</source>
.
<volume>111</volume>
,
<fpage>931</fpage>
<lpage>959</lpage>
.
<pub-id pub-id-type="doi">10.1037/0033-295X.111.4.931</pub-id>
<pub-id pub-id-type="pmid">15482068</pub-id>
</mixed-citation>
</ref>
</ref-list>
</back>
</pmc>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Wicri/Musique/explor/OperaV1/Data/Pmc/Curation
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000E34 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd -nk 000E34 | SxmlIndent | more
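
For quick inspection outside Dilib, the same record can also be queried with standard XML tooling. This is a minimal sketch, not part of the Dilib workflow: it assumes xmllint (libxml2) is installed and that the HfdSelect output for this record is well-formed XML; the file name record.xml is only an illustrative placeholder.

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd -nk 000E34 > record.xml   # save the raw record shown above
xmllint --xpath 'string(//*[local-name()="title"][1])' record.xml                # print the first <title> element, ignoring namespaces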

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Wicri/Musique
   |area=    OperaV1
   |flux=    Pmc
   |étape=   Curation
   |type=    RBID
   |clé=     PMC:4391227
   |texte=   Intonation processing deficits of emotional words among Mandarin Chinese speakers with congenital amusia: an ERP study
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Curation/RBID.i   -Sk "pubmed:25914659" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd   \
       | NlmPubMed2Wicri -a OperaV1 

Wicri

This area was generated with Dilib version V0.6.21.
Data generation: Thu Apr 14 14:59:05 2016. Site generation: Thu Jan 4 23:09:23 2024