Exploration server on music in Saarland

Warning: this site is under development!
Warning: this site was generated automatically from raw corpora.
The information has therefore not been validated.
***** Access problem to record *****

Internal identifier: 000146 (Pmc/Corpus); previous: 000145; next: 000147 ***** probable XML problem with record *****



The document in XML format
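The record below is raw TEI/PMC XML in which the nlm:, wicri: and xlink: prefixes are used without namespace declarations, so a strict XML parser will reject it as-is. As a minimal sketch of how the key metadata could still be extracted, the following Python snippet wraps the record in a root element that binds those prefixes to placeholder URIs; the file name record.xml and the placeholder URIs are illustrative assumptions, not part of the Wicri format.

import xml.etree.ElementTree as ET
from pathlib import Path

# The <record> element as displayed below, saved to a (hypothetical) file.
raw = Path("record.xml").read_text(encoding="utf-8")

# Bind the undeclared nlm:, wicri: and xlink: prefixes to placeholder
# namespace URIs so that the document becomes well-formed XML.
wrapped = (
    '<wrap xmlns:nlm="urn:x-nlm" xmlns:wicri="urn:x-wicri" '
    'xmlns:xlink="urn:x-xlink">' + raw + "</wrap>"
)
root = ET.fromstring(wrapped)

# Article title from the TEI header.
title = root.findtext(".//titleStmt/title")

# Authors, as normalized in the sortKey attributes of the <name> elements.
authors = [n.get("sortKey") for n in root.findall(".//titleStmt/author/name")]

# Stable identifiers (PMID, PMC, DOI, ...) carried by <idno type="..."> elements.
ids = {i.get("type"): i.text for i in root.findall(".//publicationStmt/idno")}

print(title)                            # Music and speech prosody: a common rhythm
print(authors)                          # ['Hausen, Maija', 'Torppa, Ritva', ...]
print(ids.get("pmid"), ids.get("doi"))  # 24032022 10.3389/fpsyg.2013.00566

Under these assumptions the same tree can be queried for either of the record's two parallel metadata blocks (the TEI header and the PMC front matter), which carry the same bibliographic content in two schemas.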

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Music and speech prosody: a common rhythm</title>
<author>
<name sortKey="Hausen, Maija" sort="Hausen, Maija" uniqKey="Hausen M" first="Maija" last="Hausen">Maija Hausen</name>
<affiliation>
<nlm:aff id="aff1">
<institution>Cognitive Brain Research Unit, Institute of Behavioural Sciences, University of Helsinki</institution>
<country>Helsinki, Finland</country>
</nlm:aff>
</affiliation>
<affiliation>
<nlm:aff id="aff2">
<institution>Finnish Center of Excellence in Interdisciplinary Music Research, University of Jyväskylä</institution>
<country>Jyväskylä, Finland</country>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Torppa, Ritva" sort="Torppa, Ritva" uniqKey="Torppa R" first="Ritva" last="Torppa">Ritva Torppa</name>
<affiliation>
<nlm:aff id="aff1">
<institution>Cognitive Brain Research Unit, Institute of Behavioural Sciences, University of Helsinki</institution>
<country>Helsinki, Finland</country>
</nlm:aff>
</affiliation>
<affiliation>
<nlm:aff id="aff2">
<institution>Finnish Center of Excellence in Interdisciplinary Music Research, University of Jyväskylä</institution>
<country>Jyväskylä, Finland</country>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Salmela, Viljami R" sort="Salmela, Viljami R" uniqKey="Salmela V" first="Viljami R." last="Salmela">Viljami R. Salmela</name>
<affiliation>
<nlm:aff id="aff3">
<institution>Institute of Behavioural Sciences, University of Helsinki</institution>
<country>Helsinki, Finland</country>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Vainio, Martti" sort="Vainio, Martti" uniqKey="Vainio M" first="Martti" last="Vainio">Martti Vainio</name>
<affiliation>
<nlm:aff id="aff4">
<institution>Department of Speech Sciences, Institute of Behavioural Sciences, University of Helsinki</institution>
<country>Helsinki, Finland</country>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="S Rk Mo, Teppo" sort="S Rk Mo, Teppo" uniqKey="S Rk Mo T" first="Teppo" last="S Rk Mö">Teppo S Rk Mö</name>
<affiliation>
<nlm:aff id="aff1">
<institution>Cognitive Brain Research Unit, Institute of Behavioural Sciences, University of Helsinki</institution>
<country>Helsinki, Finland</country>
</nlm:aff>
</affiliation>
<affiliation>
<nlm:aff id="aff2">
<institution>Finnish Center of Excellence in Interdisciplinary Music Research, University of Jyväskylä</institution>
<country>Jyväskylä, Finland</country>
</nlm:aff>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">24032022</idno>
<idno type="pmc">3759063</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3759063</idno>
<idno type="RBID">PMC:3759063</idno>
<idno type="doi">10.3389/fpsyg.2013.00566</idno>
<date when="2013">2013</date>
<idno type="wicri:Area/Pmc/Corpus">000146</idno>
<idno type="wicri:explorRef" wicri:stream="Pmc" wicri:step="Corpus" wicri:corpus="PMC">000146</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Music and speech prosody: a common rhythm</title>
<author>
<name sortKey="Hausen, Maija" sort="Hausen, Maija" uniqKey="Hausen M" first="Maija" last="Hausen">Maija Hausen</name>
<affiliation>
<nlm:aff id="aff1">
<institution>Cognitive Brain Research Unit, Institute of Behavioural Sciences, University of Helsinki</institution>
<country>Helsinki, Finland</country>
</nlm:aff>
</affiliation>
<affiliation>
<nlm:aff id="aff2">
<institution>Finnish Center of Excellence in Interdisciplinary Music Research, University of Jyväskylä</institution>
<country>Jyväskylä, Finland</country>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Torppa, Ritva" sort="Torppa, Ritva" uniqKey="Torppa R" first="Ritva" last="Torppa">Ritva Torppa</name>
<affiliation>
<nlm:aff id="aff1">
<institution>Cognitive Brain Research Unit, Institute of Behavioural Sciences, University of Helsinki</institution>
<country>Helsinki, Finland</country>
</nlm:aff>
</affiliation>
<affiliation>
<nlm:aff id="aff2">
<institution>Finnish Center of Excellence in Interdisciplinary Music Research, University of Jyväskylä</institution>
<country>Jyväskylä, Finland</country>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Salmela, Viljami R" sort="Salmela, Viljami R" uniqKey="Salmela V" first="Viljami R." last="Salmela">Viljami R. Salmela</name>
<affiliation>
<nlm:aff id="aff3">
<institution>Institute of Behavioural Sciences, University of Helsinki</institution>
<country>Helsinki, Finland</country>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Vainio, Martti" sort="Vainio, Martti" uniqKey="Vainio M" first="Martti" last="Vainio">Martti Vainio</name>
<affiliation>
<nlm:aff id="aff4">
<institution>Department of Speech Sciences, Institute of Behavioural Sciences, University of Helsinki</institution>
<country>Helsinki, Finland</country>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="S Rk Mo, Teppo" sort="S Rk Mo, Teppo" uniqKey="S Rk Mo T" first="Teppo" last="S Rk Mö">Teppo S Rk Mö</name>
<affiliation>
<nlm:aff id="aff1">
<institution>Cognitive Brain Research Unit, Institute of Behavioural Sciences, University of Helsinki</institution>
<country>Helsinki, Finland</country>
</nlm:aff>
</affiliation>
<affiliation>
<nlm:aff id="aff2">
<institution>Finnish Center of Excellence in Interdisciplinary Music Research, University of Jyväskylä</institution>
<country>Jyväskylä, Finland</country>
</nlm:aff>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Frontiers in Psychology</title>
<idno type="eISSN">1664-1078</idno>
<imprint>
<date when="2013">2013</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>Disorders of music and speech perception, known as amusia and aphasia, have traditionally been regarded as dissociated deficits based on studies of brain-damaged patients. This has been taken as evidence that music and speech are perceived by largely separate and independent networks in the brain. However, recent studies of congenital amusia have broadened this view by showing that the deficit is associated with problems in perceiving speech prosody, especially intonation and emotional prosody. In the present study, the association between the perception of music and speech prosody was investigated with healthy Finnish adults (
<italic>n</italic>
= 61) using an on-line music perception test including the Scale subtest of the Montreal Battery of Evaluation of Amusia (MBEA) and Off-Beat and Out-of-key tasks, as well as a prosodic verbal task that measures the perception of word stress. Regression analyses showed a clear association between prosody perception and music perception, especially in the domain of rhythm perception. This association was evident after controlling for music education, age, pitch perception, visuospatial perception, and working memory. Pitch perception was significantly associated with music perception but not with prosody perception. The association between music perception and visuospatial perception (measured using analogous tasks) was less clear. Overall, the pattern of results indicates that there is a robust link between music and speech perception and that this link can be mediated by rhythmic cues (time and stress).</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Abrams, D A" uniqKey="Abrams D">D. A. Abrams</name>
</author>
<author>
<name sortKey="Bhatara, A" uniqKey="Bhatara A">A. Bhatara</name>
</author>
<author>
<name sortKey="Ryali, S" uniqKey="Ryali S">S. Ryali</name>
</author>
<author>
<name sortKey="Balaban, E" uniqKey="Balaban E">E. Balaban</name>
</author>
<author>
<name sortKey="Levitin, D J" uniqKey="Levitin D">D. J. Levitin</name>
</author>
<author>
<name sortKey="Menon, V" uniqKey="Menon V">V. Menon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Alcock, J A" uniqKey="Alcock J">J. A. Alcock</name>
</author>
<author>
<name sortKey="Passingham, R E" uniqKey="Passingham R">R. E. Passingham</name>
</author>
<author>
<name sortKey="Watkins, K" uniqKey="Watkins K">K. Watkins</name>
</author>
<author>
<name sortKey="Vargha Khadem, F" uniqKey="Vargha Khadem F">F. Vargha-Khadem</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Basso, A" uniqKey="Basso A">A. Basso</name>
</author>
<author>
<name sortKey="Capitani, E" uniqKey="Capitani E">E. Capitani</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Besson, M" uniqKey="Besson M">M. Besson</name>
</author>
<author>
<name sortKey="Chobert, J" uniqKey="Chobert J">J. Chobert</name>
</author>
<author>
<name sortKey="Marie, C" uniqKey="Marie C">C. Marie</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Besson, M" uniqKey="Besson M">M. Besson</name>
</author>
<author>
<name sortKey="Schon, D" uniqKey="Schon D">D. Schön</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bidelman, G M" uniqKey="Bidelman G">G. M. Bidelman</name>
</author>
<author>
<name sortKey="Gandour, J T" uniqKey="Gandour J">J. T. Gandour</name>
</author>
<author>
<name sortKey="Krishnan, A" uniqKey="Krishnan A">A. Krishnan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Boersma, P" uniqKey="Boersma P">P. Boersma</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brainard, D H" uniqKey="Brainard D">D. H. Brainard</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brandt, A" uniqKey="Brandt A">A. Brandt</name>
</author>
<author>
<name sortKey="Gerbian, M" uniqKey="Gerbian M">M. Gerbian</name>
</author>
<author>
<name sortKey="Slevc, L R" uniqKey="Slevc L">L. R. Slevc</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brochard, R" uniqKey="Brochard R">R. Brochard</name>
</author>
<author>
<name sortKey="Dufour, A" uniqKey="Dufour A">A. Dufour</name>
</author>
<author>
<name sortKey="Despres, O" uniqKey="Despres O">O. Després</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cason, N" uniqKey="Cason N">N. Cason</name>
</author>
<author>
<name sortKey="Schon, D" uniqKey="Schon D">D. Schön</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chartrand, J P" uniqKey="Chartrand J">J.-P. Chartrand</name>
</author>
<author>
<name sortKey="Belin, P" uniqKey="Belin P">P. Belin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chen, J L" uniqKey="Chen J">J. L. Chen</name>
</author>
<author>
<name sortKey="Penhune, V B" uniqKey="Penhune V">V. B. Penhune</name>
</author>
<author>
<name sortKey="Zatorre, R J" uniqKey="Zatorre R">R. J. Zatorre</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chobert, J" uniqKey="Chobert J">J. Chobert</name>
</author>
<author>
<name sortKey="Francois, C" uniqKey="Francois C">C. François</name>
</author>
<author>
<name sortKey="Velay, J L" uniqKey="Velay J">J.-L. Velay</name>
</author>
<author>
<name sortKey="Besson, M" uniqKey="Besson M">M. Besson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Corriveau, K" uniqKey="Corriveau K">K. Corriveau</name>
</author>
<author>
<name sortKey="Pasquini, E" uniqKey="Pasquini E">E. Pasquini</name>
</author>
<author>
<name sortKey="Goswami, U" uniqKey="Goswami U">U. Goswami</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Corriveau, K H" uniqKey="Corriveau K">K. H. Corriveau</name>
</author>
<author>
<name sortKey="Goswami, U" uniqKey="Goswami U">U. Goswami</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dalla Bella, S" uniqKey="Dalla Bella S">S. Dalla Bella</name>
</author>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I. Peretz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dege, F" uniqKey="Dege F">F. Dege</name>
</author>
<author>
<name sortKey="Schwarzer, G" uniqKey="Schwarzer G">G. Schwarzer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Deutsch, D" uniqKey="Deutsch D">D. Deutsch</name>
</author>
<author>
<name sortKey="Henthorn, T" uniqKey="Henthorn T">T. Henthorn</name>
</author>
<author>
<name sortKey="Marvin, E" uniqKey="Marvin E">E. Marvin</name>
</author>
<author>
<name sortKey="Xu, H" uniqKey="Xu H">H. Xu</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Di Pietro, M" uniqKey="Di Pietro M">M. Di Pietro</name>
</author>
<author>
<name sortKey="Laganaro, M" uniqKey="Laganaro M">M. Laganaro</name>
</author>
<author>
<name sortKey="Leemann, B" uniqKey="Leemann B">B. Leemann</name>
</author>
<author>
<name sortKey="Schnider, A" uniqKey="Schnider A">A. Schnider</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Douglas, K M" uniqKey="Douglas K">K. M. Douglas</name>
</author>
<author>
<name sortKey="Bilkey, D K" uniqKey="Bilkey D">D. K. Bilkey</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Field, D J" uniqKey="Field D">D. J. Field</name>
</author>
<author>
<name sortKey="Hayes, A" uniqKey="Hayes A">A. Hayes</name>
</author>
<author>
<name sortKey="Hess, R F" uniqKey="Hess R">R. F. Hess</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Francois, C" uniqKey="Francois C">C. François</name>
</author>
<author>
<name sortKey="Chobert, J" uniqKey="Chobert J">J. Chobert</name>
</author>
<author>
<name sortKey="Besson, M" uniqKey="Besson M">M. Besson</name>
</author>
<author>
<name sortKey="Schon, D" uniqKey="Schon D">D. Schön</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gordon, R L" uniqKey="Gordon R">R. L. Gordon</name>
</author>
<author>
<name sortKey="Magne, C L" uniqKey="Magne C">C. L. Magne</name>
</author>
<author>
<name sortKey="Large, E W" uniqKey="Large E">E. W. Large</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Goswami, U" uniqKey="Goswami U">U. Goswami</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Goswami, U" uniqKey="Goswami U">U. Goswami</name>
</author>
<author>
<name sortKey="Gerson, D" uniqKey="Gerson D">D. Gerson</name>
</author>
<author>
<name sortKey="Astruc, L" uniqKey="Astruc L">L. Astruc</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Grahn, J A" uniqKey="Grahn J">J. A. Grahn</name>
</author>
<author>
<name sortKey="Brett, M" uniqKey="Brett M">M. Brett</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Griffiths, T D" uniqKey="Griffiths T">T. D. Griffiths</name>
</author>
<author>
<name sortKey="Rees, A" uniqKey="Rees A">A. Rees</name>
</author>
<author>
<name sortKey="Witton, C" uniqKey="Witton C">C. Witton</name>
</author>
<author>
<name sortKey="Cross, P M" uniqKey="Cross P">P. M. Cross</name>
</author>
<author>
<name sortKey="Shakir, R A" uniqKey="Shakir R">R. A. Shakir</name>
</author>
<author>
<name sortKey="Green, G G" uniqKey="Green G">G. G. Green</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Houston, D" uniqKey="Houston D">D. Houston</name>
</author>
<author>
<name sortKey="Santelmann, L" uniqKey="Santelmann L">L. Santelmann</name>
</author>
<author>
<name sortKey="Jusczyk, P" uniqKey="Jusczyk P">P. Jusczyk</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Huss, M" uniqKey="Huss M">M. Huss</name>
</author>
<author>
<name sortKey="Verney, J P" uniqKey="Verney J">J. P. Verney</name>
</author>
<author>
<name sortKey="Fosker, T" uniqKey="Fosker T">T. Fosker</name>
</author>
<author>
<name sortKey="Mead, N" uniqKey="Mead N">N. Mead</name>
</author>
<author>
<name sortKey="Goswami, U" uniqKey="Goswami U">U. Goswami</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hyde, K" uniqKey="Hyde K">K. Hyde</name>
</author>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I. Peretz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jiang, C" uniqKey="Jiang C">C. Jiang</name>
</author>
<author>
<name sortKey="Hamm, J P" uniqKey="Hamm J">J. P. Hamm</name>
</author>
<author>
<name sortKey="Lim, V K" uniqKey="Lim V">V. K. Lim</name>
</author>
<author>
<name sortKey="Kirk, I J" uniqKey="Kirk I">I. J. Kirk</name>
</author>
<author>
<name sortKey="Yang, Y" uniqKey="Yang Y">Y. Yang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jones, L J" uniqKey="Jones L">L. J. Jones</name>
</author>
<author>
<name sortKey="Lucker, J" uniqKey="Lucker J">J. Lucker</name>
</author>
<author>
<name sortKey="Zalewski, C" uniqKey="Zalewski C">C. Zalewski</name>
</author>
<author>
<name sortKey="Brewer, C" uniqKey="Brewer C">C. Brewer</name>
</author>
<author>
<name sortKey="Drayna, D" uniqKey="Drayna D">D. Drayna</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jones, M R" uniqKey="Jones M">M. R. Jones</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jusczyk, P W" uniqKey="Jusczyk P">P. W. Jusczyk</name>
</author>
<author>
<name sortKey="Houston, D M" uniqKey="Houston D">D. M. Houston</name>
</author>
<author>
<name sortKey="Newsome, M" uniqKey="Newsome M">M. Newsome</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Juslin, P N" uniqKey="Juslin P">P. N. Juslin</name>
</author>
<author>
<name sortKey="Laukka, P" uniqKey="Laukka P">P. Laukka</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Knosche, T R" uniqKey="Knosche T">T. R. Knösche</name>
</author>
<author>
<name sortKey="Neuhaus, C" uniqKey="Neuhaus C">C. Neuhaus</name>
</author>
<author>
<name sortKey="Haueisen, J" uniqKey="Haueisen J">J. Haueisen</name>
</author>
<author>
<name sortKey="Alter, K" uniqKey="Alter K">K. Alter</name>
</author>
<author>
<name sortKey="Maess, B" uniqKey="Maess B">B. Maess</name>
</author>
<author>
<name sortKey="Witte, O W" uniqKey="Witte O">O. W. Witte</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kochanski, G" uniqKey="Kochanski G">G. Kochanski</name>
</author>
<author>
<name sortKey="Grabe, E" uniqKey="Grabe E">E. Grabe</name>
</author>
<author>
<name sortKey="Coleman, J" uniqKey="Coleman J">J. Coleman</name>
</author>
<author>
<name sortKey="Rosner, B" uniqKey="Rosner B">B. Rosner</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Koelsch, S" uniqKey="Koelsch S">S. Koelsch</name>
</author>
<author>
<name sortKey="Gunter, T C" uniqKey="Gunter T">T. C. Gunter</name>
</author>
<author>
<name sortKey="Van Cramon, D Y" uniqKey="Van Cramon D">D. Y. van Cramon</name>
</author>
<author>
<name sortKey="Zysset, S" uniqKey="Zysset S">S. Zysset</name>
</author>
<author>
<name sortKey="Lohmann, G" uniqKey="Lohmann G">G. Lohmann</name>
</author>
<author>
<name sortKey="Friederici, A D" uniqKey="Friederici A">A. D. Friederici</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Koelsch, S" uniqKey="Koelsch S">S. Koelsch</name>
</author>
<author>
<name sortKey="Kasper, E" uniqKey="Kasper E">E. Kasper</name>
</author>
<author>
<name sortKey="Sammler, D" uniqKey="Sammler D">D. Sammler</name>
</author>
<author>
<name sortKey="Schulze, K" uniqKey="Schulze K">K. Schulze</name>
</author>
<author>
<name sortKey="Gunter, T" uniqKey="Gunter T">T. Gunter</name>
</author>
<author>
<name sortKey="Friederici, A D" uniqKey="Friederici A">A. D. Friederici</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kotilahti, K" uniqKey="Kotilahti K">K. Kotilahti</name>
</author>
<author>
<name sortKey="Nissil, I" uniqKey="Nissil I">I. Nissilä</name>
</author>
<author>
<name sortKey="N Si, T" uniqKey="N Si T">T. Näsi</name>
</author>
<author>
<name sortKey="Lipi Inen, L" uniqKey="Lipi Inen L">L. Lipiäinen</name>
</author>
<author>
<name sortKey="Noponen, T" uniqKey="Noponen T">T. Noponen</name>
</author>
<author>
<name sortKey="Meril Inen, P" uniqKey="Meril Inen P">P. Meriläinen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kotz, S A" uniqKey="Kotz S">S. A. Kotz</name>
</author>
<author>
<name sortKey="Schwartze, M" uniqKey="Schwartze M">M. Schwartze</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kraus, N" uniqKey="Kraus N">N. Kraus</name>
</author>
<author>
<name sortKey="Chandrasekaran, B" uniqKey="Chandrasekaran B">B. Chandrasekaran</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kubanek, J" uniqKey="Kubanek J">J. Kubanek</name>
</author>
<author>
<name sortKey="Brunner, P" uniqKey="Brunner P">P. Brunner</name>
</author>
<author>
<name sortKey="Gunduz, A" uniqKey="Gunduz A">A. Gunduz</name>
</author>
<author>
<name sortKey="Poeppel, D" uniqKey="Poeppel D">D. Poeppel</name>
</author>
<author>
<name sortKey="Schalk, G" uniqKey="Schalk G">G. Schalk</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lai, C S" uniqKey="Lai C">C. S. Lai</name>
</author>
<author>
<name sortKey="Fisher, S E" uniqKey="Fisher S">S. E. Fisher</name>
</author>
<author>
<name sortKey="Hurst, J A" uniqKey="Hurst J">J. A. Hurst</name>
</author>
<author>
<name sortKey="Vargha Khadem, F" uniqKey="Vargha Khadem F">F. Vargha-Khadem</name>
</author>
<author>
<name sortKey="Monaco, A P" uniqKey="Monaco A">A. P. Monaco</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Large, E W" uniqKey="Large E">E. W. Large</name>
</author>
<author>
<name sortKey="Jones, M R" uniqKey="Jones M">M. R. Jones</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Liberman, A M" uniqKey="Liberman A">A. M. Liberman</name>
</author>
<author>
<name sortKey="Mattingly, I G" uniqKey="Mattingly I">I. G. Mattingly</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lieberman, P" uniqKey="Lieberman P">P. Lieberman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lima, C F" uniqKey="Lima C">C. F. Lima</name>
</author>
<author>
<name sortKey="Castro, S L" uniqKey="Castro S">S. L. Castro</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Liu, F" uniqKey="Liu F">F. Liu</name>
</author>
<author>
<name sortKey="Patel, A D" uniqKey="Patel A">A. D. Patel</name>
</author>
<author>
<name sortKey="Fourcin, A" uniqKey="Fourcin A">A. Fourcin</name>
</author>
<author>
<name sortKey="Stewart, L" uniqKey="Stewart L">L. Stewart</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Luo, H" uniqKey="Luo H">H. Luo</name>
</author>
<author>
<name sortKey="Poeppel, D" uniqKey="Poeppel D">D. Poeppel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Magne, C" uniqKey="Magne C">C. Magne</name>
</author>
<author>
<name sortKey="Schon, D" uniqKey="Schon D">D. Schön</name>
</author>
<author>
<name sortKey="Besson, M" uniqKey="Besson M">M. Besson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Marie, C" uniqKey="Marie C">C. Marie</name>
</author>
<author>
<name sortKey="Delogu, F" uniqKey="Delogu F">F. Delogu</name>
</author>
<author>
<name sortKey="Lampis, G" uniqKey="Lampis G">G. Lampis</name>
</author>
<author>
<name sortKey="Belardinelli, M O" uniqKey="Belardinelli M">M. O. Belardinelli</name>
</author>
<author>
<name sortKey="Besson, M" uniqKey="Besson M">M. Besson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Marie, C" uniqKey="Marie C">C. Marie</name>
</author>
<author>
<name sortKey="Magne, C" uniqKey="Magne C">C. Magne</name>
</author>
<author>
<name sortKey="Besson, M" uniqKey="Besson M">M. Besson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Marie, C" uniqKey="Marie C">C. Marie</name>
</author>
<author>
<name sortKey="Kujala, T" uniqKey="Kujala T">T. Kujala</name>
</author>
<author>
<name sortKey="Besson, M" uniqKey="Besson M">M. Besson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Marques, C" uniqKey="Marques C">C. Marques</name>
</author>
<author>
<name sortKey="Moreno, S" uniqKey="Moreno S">S. Moreno</name>
</author>
<author>
<name sortKey="Castro, S L" uniqKey="Castro S">S. L. Castro</name>
</author>
<author>
<name sortKey="Besson, M" uniqKey="Besson M">M. Besson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mendez, M" uniqKey="Mendez M">M. Mendez</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Milovanov, R" uniqKey="Milovanov R">R. Milovanov</name>
</author>
<author>
<name sortKey="Huotilainen, M" uniqKey="Huotilainen M">M. Huotilainen</name>
</author>
<author>
<name sortKey="V Lim Ki, V" uniqKey="V Lim Ki V">V. Välimäki</name>
</author>
<author>
<name sortKey="Esquef, P A" uniqKey="Esquef P">P. A. Esquef</name>
</author>
<author>
<name sortKey="Tervaniemi, M" uniqKey="Tervaniemi M">M. Tervaniemi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mithen, S" uniqKey="Mithen S">S. Mithen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Moreno, S" uniqKey="Moreno S">S. Moreno</name>
</author>
<author>
<name sortKey="Marques, C" uniqKey="Marques C">C. Marques</name>
</author>
<author>
<name sortKey="Santos, A" uniqKey="Santos A">A. Santos</name>
</author>
<author>
<name sortKey="Santos, M" uniqKey="Santos M">M. Santos</name>
</author>
<author>
<name sortKey="Castro, S L" uniqKey="Castro S">S. L. Castro</name>
</author>
<author>
<name sortKey="Besson, M" uniqKey="Besson M">M. Besson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Morton, J" uniqKey="Morton J">J. Morton</name>
</author>
<author>
<name sortKey="Jassem, W" uniqKey="Jassem W">W. Jassem</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Musacchia, G" uniqKey="Musacchia G">G. Musacchia</name>
</author>
<author>
<name sortKey="Sams, M" uniqKey="Sams M">M. Sams</name>
</author>
<author>
<name sortKey="Skoe, E" uniqKey="Skoe E">E. Skoe</name>
</author>
<author>
<name sortKey="Kraus, N" uniqKey="Kraus N">N. Kraus</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Musacchia, G" uniqKey="Musacchia G">G. Musacchia</name>
</author>
<author>
<name sortKey="Strait, D" uniqKey="Strait D">D. Strait</name>
</author>
<author>
<name sortKey="Kraus, N" uniqKey="Kraus N">N. Kraus</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nan, Y" uniqKey="Nan Y">Y. Nan</name>
</author>
<author>
<name sortKey="Sun, Y" uniqKey="Sun Y">Y. Sun</name>
</author>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I. Peretz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nooteboom, S" uniqKey="Nooteboom S">S. Nooteboom</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="O Halpin, R" uniqKey="O Halpin R">R. O'Halpin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Patel, A D" uniqKey="Patel A">A. D. Patel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Patel, A D" uniqKey="Patel A">A. D. Patel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Patel, A D" uniqKey="Patel A">A. D. Patel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Patel, A D" uniqKey="Patel A">A. D. Patel</name>
</author>
<author>
<name sortKey="Daniele, J R" uniqKey="Daniele J">J. R. Daniele</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Patel, A D" uniqKey="Patel A">A. D. Patel</name>
</author>
<author>
<name sortKey="Gibson, E" uniqKey="Gibson E">E. Gibson</name>
</author>
<author>
<name sortKey="Ratner, J" uniqKey="Ratner J">J. Ratner</name>
</author>
<author>
<name sortKey="Besson, M" uniqKey="Besson M">M. Besson</name>
</author>
<author>
<name sortKey="Holcomb, P J" uniqKey="Holcomb P">P. J. Holcomb</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Patel, A D" uniqKey="Patel A">A. D. Patel</name>
</author>
<author>
<name sortKey="Iversen, J R" uniqKey="Iversen J">J. R. Iversen</name>
</author>
<author>
<name sortKey="Wassenaar, M" uniqKey="Wassenaar M">M. Wassenaar</name>
</author>
<author>
<name sortKey="Hagoort, P" uniqKey="Hagoort P">P. Hagoort</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Patel, A D" uniqKey="Patel A">A. D. Patel</name>
</author>
<author>
<name sortKey="Wong, M" uniqKey="Wong M">M. Wong</name>
</author>
<author>
<name sortKey="Foxton, J" uniqKey="Foxton J">J. Foxton</name>
</author>
<author>
<name sortKey="Lochy, A" uniqKey="Lochy A">A. Lochy</name>
</author>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I. Peretz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Patel, A D" uniqKey="Patel A">A. D. Patel</name>
</author>
<author>
<name sortKey="Foxton, J M" uniqKey="Foxton J">J. M. Foxton</name>
</author>
<author>
<name sortKey="Griffiths, T D" uniqKey="Griffiths T">T. D. Griffiths</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Patston, L L M" uniqKey="Patston L">L. L. M. Patston</name>
</author>
<author>
<name sortKey="Corballis, M C" uniqKey="Corballis M">M. C. Corballis</name>
</author>
<author>
<name sortKey="Hogg, S L" uniqKey="Hogg S">S. L. Hogg</name>
</author>
<author>
<name sortKey="Tippett, L J" uniqKey="Tippett L">L. J. Tippett</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I. Peretz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I. Peretz</name>
</author>
<author>
<name sortKey="Champod, S" uniqKey="Champod S">S. Champod</name>
</author>
<author>
<name sortKey="Hyde, K" uniqKey="Hyde K">K. Hyde</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I. Peretz</name>
</author>
<author>
<name sortKey="Coltheart, M" uniqKey="Coltheart M">M. Coltheart</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I. Peretz</name>
</author>
<author>
<name sortKey="Gosselin, N" uniqKey="Gosselin N">N. Gosselin</name>
</author>
<author>
<name sortKey="Tillmann, B" uniqKey="Tillmann B">B. Tillmann</name>
</author>
<author>
<name sortKey="Cuddy, L L" uniqKey="Cuddy L">L. L. Cuddy</name>
</author>
<author>
<name sortKey="Gagnon, B" uniqKey="Gagnon B">B. Gagnon</name>
</author>
<author>
<name sortKey="Trimmer, C G" uniqKey="Trimmer C">C. G. Trimmer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I. Peretz</name>
</author>
<author>
<name sortKey="Kolinsky, R" uniqKey="Kolinsky R">R. Kolinsky</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I. Peretz</name>
</author>
<author>
<name sortKey="Zatorre, R J" uniqKey="Zatorre R">R. J. Zatorre</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pfordresher, P Q" uniqKey="Pfordresher P">P. Q. Pfordresher</name>
</author>
<author>
<name sortKey="Brown, S" uniqKey="Brown S">S. Brown</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Phillips Silver, J" uniqKey="Phillips Silver J">J. Phillips-Silver</name>
</author>
<author>
<name sortKey="Toiviainen, P" uniqKey="Toiviainen P">P. Toiviainen</name>
</author>
<author>
<name sortKey="Gosselin, N" uniqKey="Gosselin N">N. Gosselin</name>
</author>
<author>
<name sortKey="Piche, O" uniqKey="Piche O">O. Piché</name>
</author>
<author>
<name sortKey="Nozaradana, S" uniqKey="Nozaradana S">S. Nozaradana</name>
</author>
<author>
<name sortKey="Palmera, C" uniqKey="Palmera C">C. Palmera</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pinker, S" uniqKey="Pinker S">S. Pinker</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Racette, A" uniqKey="Racette A">A. Racette</name>
</author>
<author>
<name sortKey="Bard, C" uniqKey="Bard C">C. Bard</name>
</author>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I. Peretz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rauschecker, J P" uniqKey="Rauschecker J">J. P. Rauschecker</name>
</author>
<author>
<name sortKey="Scott, S" uniqKey="Scott S">S. Scott</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rogalsky, C" uniqKey="Rogalsky C">C. Rogalsky</name>
</author>
<author>
<name sortKey="Rong, F" uniqKey="Rong F">F. Rong</name>
</author>
<author>
<name sortKey="Saberi, K" uniqKey="Saberi K">K. Saberi</name>
</author>
<author>
<name sortKey="Hickok, G" uniqKey="Hickok G">G. Hickok</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rusconi, E" uniqKey="Rusconi E">E. Rusconi</name>
</author>
<author>
<name sortKey="Kwan, B" uniqKey="Kwan B">B. Kwan</name>
</author>
<author>
<name sortKey="Giordano, B L" uniqKey="Giordano B">B. L. Giordano</name>
</author>
<author>
<name sortKey="Umilta, C" uniqKey="Umilta C">C. Umilta</name>
</author>
<author>
<name sortKey="Butterworth, B" uniqKey="Butterworth B">B. Butterworth</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Saarikallio, S" uniqKey="Saarikallio S">S. Saarikallio</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schellenberg, E G" uniqKey="Schellenberg E">E. G. Schellenberg</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schlaug, G" uniqKey="Schlaug G">G. Schlaug</name>
</author>
<author>
<name sortKey="Norton, A" uniqKey="Norton A">A. Norton</name>
</author>
<author>
<name sortKey="Marchina, S" uniqKey="Marchina S">S. Marchina</name>
</author>
<author>
<name sortKey="Zipse, L" uniqKey="Zipse L">L. Zipse</name>
</author>
<author>
<name sortKey="Wan, C Y" uniqKey="Wan C">C. Y. Wan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schon, D" uniqKey="Schon D">D. Schön</name>
</author>
<author>
<name sortKey="Gordon, R" uniqKey="Gordon R">R. Gordon</name>
</author>
<author>
<name sortKey="Campagne, A" uniqKey="Campagne A">A. Campagne</name>
</author>
<author>
<name sortKey="Magne, C" uniqKey="Magne C">C. Magne</name>
</author>
<author>
<name sortKey="Astesano, C" uniqKey="Astesano C">C. Astésano</name>
</author>
<author>
<name sortKey="Anton, J L" uniqKey="Anton J">J. L. Anton</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schon, D" uniqKey="Schon D">D. Schön</name>
</author>
<author>
<name sortKey="Magne, C" uniqKey="Magne C">C. Magne</name>
</author>
<author>
<name sortKey="Besson, M" uniqKey="Besson M">M. Besson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Shahin, A J" uniqKey="Shahin A">A. J. Shahin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Slevc, L R" uniqKey="Slevc L">L. R. Slevc</name>
</author>
<author>
<name sortKey="Miyake, A" uniqKey="Miyake A">A. Miyake</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Spinelli, E" uniqKey="Spinelli E">E. Spinelli</name>
</author>
<author>
<name sortKey="Grimault, N" uniqKey="Grimault N">N. Grimault</name>
</author>
<author>
<name sortKey="Meunier, F" uniqKey="Meunier F">F. Meunier</name>
</author>
<author>
<name sortKey="Welby, P" uniqKey="Welby P">P. Welby</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stahl, B" uniqKey="Stahl B">B. Stahl</name>
</author>
<author>
<name sortKey="Kotz, S A" uniqKey="Kotz S">S. A. Kotz</name>
</author>
<author>
<name sortKey="Henseler, I" uniqKey="Henseler I">I. Henseler</name>
</author>
<author>
<name sortKey="Turner, R" uniqKey="Turner R">R. Turner</name>
</author>
<author>
<name sortKey="Geyer, S" uniqKey="Geyer S">S. Geyer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Steinhauer, K" uniqKey="Steinhauer K">K. Steinhauer</name>
</author>
<author>
<name sortKey="Alter, K" uniqKey="Alter K">K. Alter</name>
</author>
<author>
<name sortKey="Friederici, A D" uniqKey="Friederici A">A. D. Friederici</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stewart, L" uniqKey="Stewart L">L. Stewart</name>
</author>
<author>
<name sortKey="Von Kriegstein, K" uniqKey="Von Kriegstein K">K. von Kriegstein</name>
</author>
<author>
<name sortKey="Warren, J D" uniqKey="Warren J">J. D. Warren</name>
</author>
<author>
<name sortKey="Griffiths, T D" uniqKey="Griffiths T">T. D. Griffiths</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Suomi, K" uniqKey="Suomi K">K. Suomi</name>
</author>
<author>
<name sortKey="Toivanen, J" uniqKey="Toivanen J">J. Toivanen</name>
</author>
<author>
<name sortKey="Ylitalo, R" uniqKey="Ylitalo R">R. Ylitalo</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tervaniemi, M" uniqKey="Tervaniemi M">M. Tervaniemi</name>
</author>
<author>
<name sortKey="Hugdahl, K" uniqKey="Hugdahl K">K. Hugdahl</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Thompson, W F" uniqKey="Thompson W">W. F. Thompson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Thompson, W F" uniqKey="Thompson W">W. F. Thompson</name>
</author>
<author>
<name sortKey="Marin, M M" uniqKey="Marin M">M. M. Marin</name>
</author>
<author>
<name sortKey="Stewart, L" uniqKey="Stewart L">L. Stewart</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Thompson, W F" uniqKey="Thompson W">W. F. Thompson</name>
</author>
<author>
<name sortKey="Schellenberg, E G" uniqKey="Schellenberg E">E. G. Schellenberg</name>
</author>
<author>
<name sortKey="Husain, G" uniqKey="Husain G">G. Husain</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tillmann, B" uniqKey="Tillmann B">B. Tillmann</name>
</author>
<author>
<name sortKey="Burnham, D" uniqKey="Burnham D">D. Burnham</name>
</author>
<author>
<name sortKey="Nguyen, S" uniqKey="Nguyen S">S. Nguyen</name>
</author>
<author>
<name sortKey="Grimault, N" uniqKey="Grimault N">N. Grimault</name>
</author>
<author>
<name sortKey="Gosselin, N" uniqKey="Gosselin N">N. Gosselin</name>
</author>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I. Peretz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tillmann, B" uniqKey="Tillmann B">B. Tillmann</name>
</author>
<author>
<name sortKey="Rusconi, E" uniqKey="Rusconi E">E. Rusconi</name>
</author>
<author>
<name sortKey="Traube, C" uniqKey="Traube C">C. Traube</name>
</author>
<author>
<name sortKey="Butterworth, B" uniqKey="Butterworth B">B. Butterworth</name>
</author>
<author>
<name sortKey="Umilta, C" uniqKey="Umilta C">C. Umiltà</name>
</author>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I. Peretz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tillmann, B" uniqKey="Tillmann B">B. Tillmann</name>
</author>
<author>
<name sortKey="Janata, P" uniqKey="Janata P">P. Janata</name>
</author>
<author>
<name sortKey="Bharucha, J J" uniqKey="Bharucha J">J. J. Bharucha</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tillmann, B" uniqKey="Tillmann B">B. Tillmann</name>
</author>
<author>
<name sortKey="Jolicoeur, P" uniqKey="Jolicoeur P">P. Jolicoeur</name>
</author>
<author>
<name sortKey="Ishihara, M" uniqKey="Ishihara M">M. Ishihara</name>
</author>
<author>
<name sortKey="Gosselin, N" uniqKey="Gosselin N">N. Gosselin</name>
</author>
<author>
<name sortKey="Bertrand, O" uniqKey="Bertrand O">O. Bertrand</name>
</author>
<author>
<name sortKey="Rossetti, Y" uniqKey="Rossetti Y">Y. Rossetti</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Torppa, R" uniqKey="Torppa R">R. Torppa</name>
</author>
<author>
<name sortKey="Faulkner, A" uniqKey="Faulkner A">A. Faulkner</name>
</author>
<author>
<name sortKey="Vainio, M" uniqKey="Vainio M">M. Vainio</name>
</author>
<author>
<name sortKey="J Rvikivi, J" uniqKey="J Rvikivi J">J. Järvikivi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Trehub, S E" uniqKey="Trehub S">S. E. Trehub</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vainio, M" uniqKey="Vainio M">M. Vainio</name>
</author>
<author>
<name sortKey="J Rvikivi, J" uniqKey="J Rvikivi J">J. Järvikivi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vogel, I" uniqKey="Vogel I">I. Vogel</name>
</author>
<author>
<name sortKey="Raimy, E" uniqKey="Raimy E">E. Raimy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vroomen, J" uniqKey="Vroomen J">J. Vroomen</name>
</author>
<author>
<name sortKey="Tuomainen, J" uniqKey="Tuomainen J">J. Tuomainen</name>
</author>
<author>
<name sortKey="De Gelder, B" uniqKey="De Gelder B">B. de Gelder</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wechsler, D" uniqKey="Wechsler D">D. Wechsler</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wechsler, D" uniqKey="Wechsler D">D. Wechsler</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Williamson, V J" uniqKey="Williamson V">V. J. Williamson</name>
</author>
<author>
<name sortKey="Cocchini, G" uniqKey="Cocchini G">G. Cocchini</name>
</author>
<author>
<name sortKey="Stewart, L" uniqKey="Stewart L">L. Stewart</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wong, P C" uniqKey="Wong P">P. C. Wong</name>
</author>
<author>
<name sortKey="Skoe, E" uniqKey="Skoe E">E. Skoe</name>
</author>
<author>
<name sortKey="Russo, N M" uniqKey="Russo N">N. M. Russo</name>
</author>
<author>
<name sortKey="Dees, T" uniqKey="Dees T">T. Dees</name>
</author>
<author>
<name sortKey="Kraus, N" uniqKey="Kraus N">N. Kraus</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zatorre, R J" uniqKey="Zatorre R">R. J. Zatorre</name>
</author>
<author>
<name sortKey="Baum, S R" uniqKey="Baum S">S. R. Baum</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zatorre, R J" uniqKey="Zatorre R">R. J. Zatorre</name>
</author>
<author>
<name sortKey="Gandour, J T" uniqKey="Gandour J">J. T. Gandour</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Front Psychol</journal-id>
<journal-id journal-id-type="iso-abbrev">Front Psychol</journal-id>
<journal-id journal-id-type="publisher-id">Front. Psychol.</journal-id>
<journal-title-group>
<journal-title>Frontiers in Psychology</journal-title>
</journal-title-group>
<issn pub-type="epub">1664-1078</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">24032022</article-id>
<article-id pub-id-type="pmc">3759063</article-id>
<article-id pub-id-type="doi">10.3389/fpsyg.2013.00566</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Psychology</subject>
<subj-group>
<subject>Original Research Article</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Music and speech prosody: a common rhythm</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Hausen</surname>
<given-names>Maija</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<xref ref-type="author-notes" rid="fn001">
<sup>*</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Torppa</surname>
<given-names>Ritva</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Salmela</surname>
<given-names>Viljami R.</given-names>
</name>
<xref ref-type="aff" rid="aff3">
<sup>3</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Vainio</surname>
<given-names>Martti</given-names>
</name>
<xref ref-type="aff" rid="aff4">
<sup>4</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Särkämö</surname>
<given-names>Teppo</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
</contrib>
</contrib-group>
<aff id="aff1">
<sup>1</sup>
<institution>Cognitive Brain Research Unit, Institute of Behavioural Sciences, University of Helsinki</institution>
<country>Helsinki, Finland</country>
</aff>
<aff id="aff2">
<sup>2</sup>
<institution>Finnish Center of Excellence in Interdisciplinary Music Research, University of Jyväskylä</institution>
<country>Jyväskylä, Finland</country>
</aff>
<aff id="aff3">
<sup>3</sup>
<institution>Institute of Behavioural Sciences, University of Helsinki</institution>
<country>Helsinki, Finland</country>
</aff>
<aff id="aff4">
<sup>4</sup>
<institution>Department of Speech Sciences, Institute of Behavioural Sciences, University of Helsinki</institution>
<country>Helsinki, Finland</country>
</aff>
<author-notes>
<fn fn-type="edited-by">
<p>Edited by: Josef P. Rauschecker, Georgetown University School of Medicine, USA</p>
</fn>
<fn fn-type="edited-by">
<p>Reviewed by: Mireille Besson, Centre National de la Recherche Scientifique, France; Barbara Tillmann, Centre National de la Recherche Scientifique, France; Aniruddh Patel, Tufts University, USA</p>
</fn>
<corresp id="fn001">*Correspondence: Maija Hausen, Cognitive Brain Research Unit, Institute of Behavioral Sciences, University of Helsinki, PO Box 9 (Siltavuorenpenger 1 B), FIN-00014 Helsinki, Finland e-mail:
<email xlink:type="simple">maija.s.hausen@gmail.com</email>
</corresp>
<fn fn-type="other" id="fn002">
<p>This article was submitted to Auditory Cognitive Neuroscience, a section of the journal Frontiers in Psychology.</p>
</fn>
</author-notes>
<pub-date pub-type="epub">
<day>02</day>
<month>9</month>
<year>2013</year>
</pub-date>
<pub-date pub-type="collection">
<year>2013</year>
</pub-date>
<volume>4</volume>
<elocation-id>566</elocation-id>
<history>
<date date-type="received">
<day>19</day>
<month>2</month>
<year>2013</year>
</date>
<date date-type="accepted">
<day>09</day>
<month>8</month>
<year>2013</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright © 2013 Hausen, Torppa, Salmela, Vainio and Särkämö.</copyright-statement>
<copyright-year>2013</copyright-year>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/3.0/">
<license-p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</license-p>
</license>
</permissions>
<abstract>
<p>Disorders of music and speech perception, known as amusia and aphasia, have traditionally been regarded as dissociated deficits based on studies of brain-damaged patients. This has been taken as evidence that music and speech are perceived by largely separate and independent networks in the brain. However, recent studies of congenital amusia have broadened this view by showing that the deficit is associated with problems in perceiving speech prosody, especially intonation and emotional prosody. In the present study, the association between the perception of music and speech prosody was investigated with healthy Finnish adults (
<italic>n</italic>
= 61) using an on-line music perception test including the Scale subtest of the Montreal Battery of Evaluation of Amusia (MBEA) and Off-Beat and Out-of-key tasks, as well as a prosodic verbal task that measures the perception of word stress. Regression analyses showed a clear association between prosody perception and music perception, especially in the domain of rhythm perception. This association was evident after controlling for music education, age, pitch perception, visuospatial perception, and working memory. Pitch perception was significantly associated with music perception but not with prosody perception. The association between music perception and visuospatial perception (measured using analogous tasks) was less clear. Overall, the pattern of results indicates that there is a robust link between music and speech perception and that this link can be mediated by rhythmic cues (time and stress).</p>
</abstract>
<kwd-group>
<kwd>music perception</kwd>
<kwd>MBEA</kwd>
<kwd>speech prosody perception</kwd>
<kwd>word stress</kwd>
<kwd>visuospatial perception</kwd>
</kwd-group>
<counts>
<fig-count count="6"></fig-count>
<table-count count="8"></table-count>
<equation-count count="0"></equation-count>
<ref-count count="119"></ref-count>
<page-count count="16"></page-count>
<word-count count="12997"></word-count>
</counts>
</article-meta>
</front>
<body>
<sec id="s1">
<title>Introduction</title>
<p>Music and speech have been considered two aspects of highly developed human cognition. But how much do they have in common? Evolutionary theories suggest that music and speech may have had a common origin in the form of an early communication system based on holistic vocalizations and body gestures (Mithen,
<xref ref-type="bibr" rid="B59">2005</xref>
) and that music may have played a crucial role in social interaction and communication, especially between the mother and the infant (Trehub,
<xref ref-type="bibr" rid="B111">2003</xref>
). Another view holds that the development of music can be understood more as a by-product of other adaptive functions related to, for example, language and emotion (Pinker,
<xref ref-type="bibr" rid="B85">1997</xref>
). Whether their origins are linked or not, both music and speech are auditory communication systems that utilize similar acoustic cues for many purposes, for example for expressing emotions (Juslin and Laukka,
<xref ref-type="bibr" rid="B36">2003</xref>
). Especially in infant-directed speech, the musical aspects of language (rhythm, timbral contrast, melodic contour) are the central means of communication, and there is recent evidence that newborns show largely overlapping neural responses to infant-directed speech and to instrumental music (Kotilahti et al.,
<xref ref-type="bibr" rid="B41">2010</xref>
). It has been suggested that the musical aspects of language might also be used as scaffolding for the later development of semantic and syntactic aspects of language (Brandt et al.,
<xref ref-type="bibr" rid="B9">2012</xref>
).</p>
<p>In addition to the links found in early development, music and speech appear to be behaviorally and neurally interrelated later in life as well. Evidence from functional magnetic resonance imaging (fMRI) studies of healthy adults suggests that perceiving music and speech engages at least partly overlapping neural regions, especially in superior, anterior, and posterior temporal areas, temporoparietal areas, and inferior frontal areas (Koelsch et al.,
<xref ref-type="bibr" rid="B39">2002</xref>
; Tillmann et al.,
<xref ref-type="bibr" rid="B108">2003</xref>
; Rauschecker and Scott,
<xref ref-type="bibr" rid="B87">2009</xref>
; Schön et al.,
<xref ref-type="bibr" rid="B93">2010</xref>
; Abrams et al.,
<xref ref-type="bibr" rid="B1">2011</xref>
; Rogalsky et al.,
<xref ref-type="bibr" rid="B88">2011</xref>
), including Broca's and Wernicke's areas in the left hemisphere, which were previously thought to be language-specific. Similarly, studies using electroencephalography (EEG) and magnetoencephalography (MEG) have shown that in both speech and music the perception of phrase boundaries elicits similar closure positive shift (CPS) responses (Steinhauer et al.,
<xref ref-type="bibr" rid="B99">1999</xref>
; Knösche et al.,
<xref ref-type="bibr" rid="B37">2005</xref>
) and that syntactic violations in both speech and music elicit similar P600 responses in the brain (Patel et al.,
<xref ref-type="bibr" rid="B71">1998</xref>
). An EEG study of healthy non-musicians also showed that music may induce semantic priming effects similar to those of words when semantically related or unrelated words are presented visually after hearing music excerpts or spoken sentences (Koelsch et al.,
<xref ref-type="bibr" rid="B40">2004</xref>
).</p>
<p>A clear link between speech and music has also been shown in behavioral and neuroimaging studies of musical training (for a recent review, see Kraus and Chandrasekaran,
<xref ref-type="bibr" rid="B43">2010</xref>
; Shahin,
<xref ref-type="bibr" rid="B95">2011</xref>
). Compared to non-musicians, superior speech processing skills have been found in adult musicians (Schön et al.,
<xref ref-type="bibr" rid="B94">2004</xref>
; Chartrand and Belin,
<xref ref-type="bibr" rid="B12">2006</xref>
; Marques et al.,
<xref ref-type="bibr" rid="B56">2007</xref>
; Lima and Castro,
<xref ref-type="bibr" rid="B49">2011</xref>
; Marie et al.,
<xref ref-type="bibr" rid="B53">2011a</xref>
,
<xref ref-type="bibr" rid="B54">b</xref>
) and musician children (Magne et al.,
<xref ref-type="bibr" rid="B52">2006</xref>
). Musical training has also been shown to enhance speech-related skills in longitudinal studies in which non-musician participants were randomly assigned to either a music training group or a control group (Thompson et al.,
<xref ref-type="bibr" rid="B105">2004</xref>
; Moreno et al.,
<xref ref-type="bibr" rid="B60">2009</xref>
; Dege and Schwarzer,
<xref ref-type="bibr" rid="B18">2011</xref>
; Chobert et al.,
<xref ref-type="bibr" rid="B14">2012</xref>
; François et al.,
<xref ref-type="bibr" rid="B23">2012</xref>
). The superior speech-related skills of musicians and of participants in music training groups include the perception of basic acoustic cues in speech, such as pitch (Schön et al.,
<xref ref-type="bibr" rid="B94">2004</xref>
; Magne et al.,
<xref ref-type="bibr" rid="B52">2006</xref>
; Marques et al.,
<xref ref-type="bibr" rid="B56">2007</xref>
; Moreno et al.,
<xref ref-type="bibr" rid="B60">2009</xref>
), timbre (Chartrand and Belin,
<xref ref-type="bibr" rid="B12">2006</xref>
), and vowel duration (Chobert et al.,
<xref ref-type="bibr" rid="B14">2012</xref>
). These results support the hypothesis that music and speech are at least partly based on shared neural resources (Patel,
<xref ref-type="bibr" rid="B68">2008</xref>
,
<xref ref-type="bibr" rid="B69">2012</xref>
). The improved processing of these basic acoustic parameters can also lead to enhanced processing of more complex attributes of speech, which can be taken as evidence of transfer of training effects (Besson et al.,
<xref ref-type="bibr" rid="B4">2011</xref>
). The enhanced higher level processing of speech includes speech segmentation (Dege and Schwarzer,
<xref ref-type="bibr" rid="B18">2011</xref>
; François et al.,
<xref ref-type="bibr" rid="B23">2012</xref>
) and the perception of phonemic structure (Dege and Schwarzer,
<xref ref-type="bibr" rid="B18">2011</xref>
), metric structure (Marie et al.,
<xref ref-type="bibr" rid="B54">2011b</xref>
), segmental and tone variations in a foreign tone-language (Marie et al.,
<xref ref-type="bibr" rid="B53">2011a</xref>
), phonological variations (Slevc and Miyake,
<xref ref-type="bibr" rid="B96">2006</xref>
) and emotional prosody (Thompson et al.,
<xref ref-type="bibr" rid="B105">2004</xref>
; Lima and Castro,
<xref ref-type="bibr" rid="B49">2011</xref>
). Musical ability is also related to enhanced expressive language skills, such as productive phonological ability (Slevc and Miyake,
<xref ref-type="bibr" rid="B96">2006</xref>
) and pronunciation (Milovanov et al.,
<xref ref-type="bibr" rid="B58">2008</xref>
) in a foreign language, as well as reading phonologically complex words in one's native language (Moreno et al.,
<xref ref-type="bibr" rid="B60">2009</xref>
).</p>
<p>The enhanced processing of linguistic sounds is coupled with electrophysiologically measured changes across different auditory processing stages, starting from the brainstem (Musacchia et al.,
<xref ref-type="bibr" rid="B62">2007</xref>
; Wong et al.,
<xref ref-type="bibr" rid="B118">2007</xref>
) and extending to the auditory cortex and other auditory temporal lobe areas (Magne et al.,
<xref ref-type="bibr" rid="B52">2006</xref>
; Musacchia et al.,
<xref ref-type="bibr" rid="B63">2008</xref>
; Moreno et al.,
<xref ref-type="bibr" rid="B60">2009</xref>
; Marie et al.,
<xref ref-type="bibr" rid="B53">2011a</xref>
,
<xref ref-type="bibr" rid="B54">b</xref>
). Years of musical training have been found to correlate with stronger neural activity induced by linguistic sounds at both subcortical and cortical levels (Musacchia et al.,
<xref ref-type="bibr" rid="B63">2008</xref>
). This result and the results of the longitudinal studies (Thompson et al.,
<xref ref-type="bibr" rid="B105">2004</xref>
; Moreno et al.,
<xref ref-type="bibr" rid="B60">2009</xref>
; Dege and Schwarzer,
<xref ref-type="bibr" rid="B18">2011</xref>
; Chobert et al.,
<xref ref-type="bibr" rid="B14">2012</xref>
; François et al.,
<xref ref-type="bibr" rid="B23">2012</xref>
) suggest that the possible transfer effects are more likely the result of training than of genetic predispositions. When studying possible transfer effects of musical expertise on speech processing, it is important to consider general cognitive abilities as possible mediators: ERP studies indicate that attention does not explain the effects, whereas the results regarding memory are more mixed (Besson et al.,
<xref ref-type="bibr" rid="B4">2011</xref>
). A clear correlation between music lessons and general intelligence has been found (Schellenberg,
<xref ref-type="bibr" rid="B91">2006</xref>
), indicating that, unless general intelligence is controlled for, transfer effects between music and language may partly reflect enhanced general cognitive abilities.</p>
<p>Conversely, language experience may also affect the development of music perception. For example, speakers of a tone language (e.g., Chinese) are better at imitating and discriminating musical pitch (Pfordresher and Brown,
<xref ref-type="bibr" rid="B83">2009</xref>
; Bidelman et al.,
<xref ref-type="bibr" rid="B6">2011</xref>
) and they acquire absolute pitch more often than Western speakers (Deutsch et al.,
<xref ref-type="bibr" rid="B19">2006</xref>
). Also, speakers of a quantity language (Finnish) have been found to show enhanced processing of the duration of non-speech sounds similar to that of French musicians, when both are compared to French non-musicians (Marie et al.,
<xref ref-type="bibr" rid="B55">2012</xref>
).</p>
<p>The processing of speech and music thus appears to be linked in the healthy brain, but does the same hold true in the damaged brain? Disorders of music and speech perception/expression, known as amusia and aphasia, have traditionally been regarded as independent, separable deficits, based on double dissociations observed in studies of brain-damaged patients (amusia without aphasia: Peretz,
<xref ref-type="bibr" rid="B76">1990</xref>
; Peretz and Kolinsky,
<xref ref-type="bibr" rid="B81">1993</xref>
; Griffiths et al.,
<xref ref-type="bibr" rid="B28">1997</xref>
; Dalla Bella and Peretz,
<xref ref-type="bibr" rid="B17">1999</xref>
; aphasia without amusia: Basso and Capitani,
<xref ref-type="bibr" rid="B3">1985</xref>
; Mendez,
<xref ref-type="bibr" rid="B57">2001</xref>
; for a review, see Peretz and Coltheart,
<xref ref-type="bibr" rid="B78">2003</xref>
). However, recent studies suggest that this double dissociation may not be absolute. In Broca's aphasia, problems in the syntactic (structural) processing of language have been shown to be associated with problems in processing structural relations in music (Patel,
<xref ref-type="bibr" rid="B67">2005</xref>
; Patel et al.,
<xref ref-type="bibr" rid="B72">2008a</xref>
). Musical activities are also useful in the rehabilitation of the language abilities of patients with non-fluent aphasia (Racette et al.,
<xref ref-type="bibr" rid="B86">2006</xref>
; Schlaug et al.,
<xref ref-type="bibr" rid="B92">2010</xref>
; Stahl et al.,
<xref ref-type="bibr" rid="B98">2011</xref>
), suggesting a further link between the processing of speech and music in the damaged brain. Moreover, persons with congenital amusia have been found to have lower than average abilities in phonemic and phonological awareness (Jones et al.,
<xref ref-type="bibr" rid="B33">2009</xref>
), in the perception of emotional prosody (Thompson,
<xref ref-type="bibr" rid="B103">2007</xref>
; Thompson et al.,
<xref ref-type="bibr" rid="B104">2012</xref>
), speech intonation (Patel et al.,
<xref ref-type="bibr" rid="B74">2005</xref>
,
<xref ref-type="bibr" rid="B73">2008b</xref>
; Jiang et al.,
<xref ref-type="bibr" rid="B32">2010</xref>
; Liu et al.,
<xref ref-type="bibr" rid="B50">2010</xref>
) and subtle pitch variation in speech signals (Tillmann et al.,
<xref ref-type="bibr" rid="B107">2011b</xref>
), and in the discrimination of lexical tones (Nan et al.,
<xref ref-type="bibr" rid="B64">2010</xref>
; Tillmann et al.,
<xref ref-type="bibr" rid="B106">2011a</xref>
). Collectively, these results suggest that amusia may be associated with fine-grained deficits in the processing of speech.</p>
<p>As in music, the central elements of speech prosody are melody (intonation) and rhythm (stress and timing) (Nooteboom,
<xref ref-type="bibr" rid="B65">1997</xref>
). Studies of acquired amusia show that the melodic and rhythmic processing of music can be dissociated (Peretz,
<xref ref-type="bibr" rid="B76">1990</xref>
; Peretz and Kolinsky,
<xref ref-type="bibr" rid="B81">1993</xref>
; Di Pietro et al.,
<xref ref-type="bibr" rid="B20">2004</xref>
), suggesting that they may be partly separate functions. Previously, the association between music and speech processing has mainly been found to exist between the perception of the melodic aspect of music and speech (Schön et al.,
<xref ref-type="bibr" rid="B94">2004</xref>
; Patel et al.,
<xref ref-type="bibr" rid="B74">2005</xref>
,
<xref ref-type="bibr" rid="B73">2008b</xref>
; Magne et al.,
<xref ref-type="bibr" rid="B52">2006</xref>
; Marques et al.,
<xref ref-type="bibr" rid="B56">2007</xref>
; Moreno et al.,
<xref ref-type="bibr" rid="B60">2009</xref>
; Jiang et al.,
<xref ref-type="bibr" rid="B32">2010</xref>
; Liu et al.,
<xref ref-type="bibr" rid="B50">2010</xref>
; Nan et al.,
<xref ref-type="bibr" rid="B64">2010</xref>
). However, rhythm also has important functions in both music and speech. Speech is perceived as a sequence of events in time, and the term speech rhythm refers to the way these events are distributed in time. Patterns of stressed (strong) and unstressed (weak) tones or syllables build up the meter of both music and speech (Jusczyk et al.,
<xref ref-type="bibr" rid="B35">1999</xref>
; for a review, see Cason and Schön,
<xref ref-type="bibr" rid="B11">2012</xref>
). Speech rhythm can be used in segmenting words from fluent speech: the word stress patterns that are typical in one's native language help to detect word boundaries (Vroomen et al.,
<xref ref-type="bibr" rid="B114">1998</xref>
; Houston et al.,
<xref ref-type="bibr" rid="B29">2004</xref>
). Depending on the language, word stress is expressed with changes in fundamental frequency, intensity, and/or duration (Morton and Jassem,
<xref ref-type="bibr" rid="B61">1965</xref>
). Fundamental frequency (f0) is often thought to be a dominant prosodic cue for word stress (Lieberman,
<xref ref-type="bibr" rid="B48">1960</xref>
; Morton and Jassem,
<xref ref-type="bibr" rid="B61">1965</xref>
) and word segmentation (Spinelli et al.,
<xref ref-type="bibr" rid="B97">2010</xref>
)—however, changes in syllable duration and sound intensity are also associated with the prosodic patterns that signal stress (Lieberman,
<xref ref-type="bibr" rid="B48">1960</xref>
; Morton and Jassem,
<xref ref-type="bibr" rid="B61">1965</xref>
). For example, the results from Kochanski et al. (
<xref ref-type="bibr" rid="B38">2005</xref>
) suggest that in English, intensity and duration may play an even more important role than f0 in the detection of syllabic stress. In Finnish, word or lexical stress alone is signaled with durational cues (Suomi et al.,
<xref ref-type="bibr" rid="B101">2003</xref>
), as well as intensity, whereas sentence stress is additionally signaled with fundamental frequency (Vainio and Järvikivi,
<xref ref-type="bibr" rid="B112">2007</xref>
).</p>
<p>Although relatively few studies have examined whether rhythm or meter links the perception of music and speech, some recent findings support this association. For example, Marie et al. (
<xref ref-type="bibr" rid="B54">2011b</xref>
) found that musicians perceive the metric structure of words more accurately than non-musicians: incongruous syllable lengthenings elicited stronger ERP responses in musicians, both automatically and when the lengthening was task-relevant. Also, priming with rhythmic tones can enhance the phonological processing of speech (Cason and Schön,
<xref ref-type="bibr" rid="B11">2012</xref>
) and the synchronizing of musical meter and linguistic stress in songs can enhance the processing of both lyrics and musical meter (Gordon et al.,
<xref ref-type="bibr" rid="B24">2011</xref>
).</p>
<p>Another cognitive domain that has recently been linked to music perception is visuospatial processing. A stimulus-response compatibility effect has been found between the pitch (high/low) of auditory stimuli and the location (up/down) of the answer button (Rusconi et al.,
<xref ref-type="bibr" rid="B89">2006</xref>
). There is also evidence that musicians' abilities in visuospatial perception are superior to average (Brochard et al.,
<xref ref-type="bibr" rid="B10">2004</xref>
; Patston et al.,
<xref ref-type="bibr" rid="B75">2006</xref>
). Moreover, congenital amusics have been found to have below average performance in a mental rotation task (Douglas and Bilkey,
<xref ref-type="bibr" rid="B21">2007</xref>
), although this finding has not been replicated (Tillmann et al.,
<xref ref-type="bibr" rid="B109">2010</xref>
). Williamson et al. (
<xref ref-type="bibr" rid="B117">2011</xref>
) found that a subgroup of amusics were slower but as accurate as the control group in the mental rotation task, but did not find any group differences in a range of other visuospatial tasks. Douglas and Bilkey (
<xref ref-type="bibr" rid="B21">2007</xref>
) also found that the stimulus-response compatibility effect was not as strong in amusics as in the control group. In another study, the amusic group reported more problems in visuospatial perception than the control group, but this was not confirmed by any objective measure (Peretz et al.,
<xref ref-type="bibr" rid="B79">2008</xref>
). Taken together, there is some preliminary evidence that visuospatial and musical processing might be linked, but more research is still clearly needed.</p>
<p>The main aim of the present study was to systematically determine the association between music perception (as indicated by a computerized music perception test including the Scale subtest of the Montreal Battery of Evaluation of Amusia, as well as Off-beat and Out-of-key tasks) and the perception of speech prosody, using a large sample of healthy adult subjects (
<italic>N</italic>
= 61). To measure the perception of speech prosody, we used a novel experiment that does not focus only on pitch contour (as do the statement-question sentence tests used in many previous studies) but measures the perception of word stress utilizing a natural combination of fundamental frequency, timing, and intensity variations. The experiment is thus suitable for assessing how the perception of both rhythm and pitch in music relates to prosodic perception. We concentrate on the role of the acoustic differences in the perception of word stress, not on the linguistic aspects of this prosodic phenomenon (see, for example, Vogel and Raimy,
<xref ref-type="bibr" rid="B113">2002</xref>
). Second, the study investigated the possible association between visuospatial perception and music perception. Possible confounding variables, including auditory working memory and pitch perception threshold, were controlled for.</p>
</sec>
<sec sec-type="materials|methods" id="s2">
<title>Materials and methods</title>
<sec>
<title>Participants</title>
<p>Sixty-four healthy Finnish adults were recruited into the study between June and August 2011. The ethical committee of the Faculty of Behavioural Sciences of the University of Helsinki approved the study, and the participants gave their written informed consent. Inclusion criteria were age between 19 and 60 years, self-reported normal hearing, and speaking Finnish as a first language or at a comparable level (by self-report). Exclusion criteria were being a professional musician and/or having obtained music education at a professional level. Of the 64 tested participants, 13 reported having visited an audiologist; one of them was excluded from the analysis because of a deaf ear. The other participants who had visited an audiologist had suspected hearing problems that had proved to be either non-existent, transient, or very mild (reported by the participants and controlled by statistical analyses, see section Associations Between the Music Perception Test and Demographical and Musical Background Variables). None of the participants had had a cerebrovascular accident or brain trauma. Another participant was excluded because of Finnish skills below first-language level. One participant was found to perform significantly (>3 SD) below the average total score in the music perception test. In the questionnaires, this participant also reported “lacking sense of music” and “being unable to discriminate out-of-key tones,” further suggesting that the participant might have congenital amusia. In order to limit this study to healthy participants with musical abilities in the “normal” range (without musical deficits or professional expertise in music), the data from this participant were excluded from further analysis. Thus, data from 61 participants were used in the statistical analysis. Fifty-eight (95.1%) of the analyzed participants spoke Finnish as their first language, and three participants (4.9%) spoke Finnish at a level comparable to a first language. Other characteristics of the analyzed participants are shown in Table
<xref ref-type="table" rid="T1">1</xref>
.</p>
<table-wrap id="T1" position="float">
<label>Table 1</label>
<caption>
<p>
<bold>Characteristics of the participants</bold>
.</p>
</caption>
<table frame="hsides" rules="groups">
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">Male/female</td>
<td align="left" rowspan="1" colspan="1">21/40 (34/66%)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Mean age (range)</td>
<td align="left" rowspan="1" colspan="1">39.0 (19–59)</td>
</tr>
<tr>
<td align="left" colspan="2" rowspan="1">Education Level</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">    Primary level</td>
<td align="left" rowspan="1" colspan="1">0 (0%)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">    Secondary level</td>
<td align="left" rowspan="1" colspan="1">23 (38%)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">    Lowest level tertiary</td>
<td align="left" rowspan="1" colspan="1">6 (10%)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">    Bachelor level</td>
<td align="left" rowspan="1" colspan="1">17 (28%)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">    Master level or higher</td>
<td align="left" rowspan="1" colspan="1">15 (25%)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Mean education in years (range)</td>
<td align="left" rowspan="1" colspan="1">17.1 (10–32)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Musical education: no/yes</td>
<td align="left" rowspan="1" colspan="1">19/42 (31/69%)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">    Musical playschool</td>
<td align="left" rowspan="1" colspan="1">4 (7%)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">    Special music class in school</td>
<td align="left" rowspan="1" colspan="1">6 (10%)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">    Private lessons or with parents</td>
<td align="left" rowspan="1" colspan="1">23 (37%)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">    Music institute or conservatory</td>
<td align="left" rowspan="1" colspan="1">13 (21%)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">    Independent music learning</td>
<td align="left" rowspan="1" colspan="1">26 (43%)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Mean musical training in years (range)</td>
<td align="left" rowspan="1" colspan="1">3.7 (0–19)</td>
</tr>
<tr>
<td align="left" colspan="2" rowspan="1">Self-reported cognitive problems</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">    Reading problems</td>
<td align="left" rowspan="1" colspan="1">5 (8%)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">    Speech problems</td>
<td align="left" rowspan="1" colspan="1">3 (5%)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">    Spatial orientation problems</td>
<td align="left" rowspan="1" colspan="1">5 (8%)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">    Problems in maths</td>
<td align="left" rowspan="1" colspan="1">12 (20%)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">    Attentional problems</td>
<td align="left" rowspan="1" colspan="1">5 (8%)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">    Memory problems</td>
<td align="left" rowspan="1" colspan="1">6 (10%)</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec>
<title>Assessment methods</title>
<p>Music, speech prosody, pitch, and visuospatial perception abilities were assessed with computerized tests, and working memory was evaluated using a traditional paper-and-pencil test. The computer was a laptop with a 12” display, used with headphones. In addition, the participants filled out a paper questionnaire. The place of the testing was arranged individually for each participant: most assessments were done in a quiet work space at a public library. The researcher gave verbal instructions for all tests except the on-line music perception test, for which the participant read the instructions from the laptop screen. The duration of the assessment session was ca. 1.5 h on average, ranging from 1 to 2 h.</p>
<sec>
<title>Music perception</title>
<p>Music perception was measured with an on-line computer-based music perception test including the Scale subtest of the original Montreal Battery of Evaluation of Amusia (MBEA; Peretz et al.,
<xref ref-type="bibr" rid="B77">2003</xref>
) as well as the Off-beat and Out-of-key tasks from the on-line version of the test (Peretz et al.,
<xref ref-type="bibr" rid="B79">2008</xref>
). The on-line version is constructed to measure the same underlying constructs as the MBEA, and it correlates highly with the original MBEA administered in a laboratory setting (Peretz et al.,
<xref ref-type="bibr" rid="B79">2008</xref>
). The instructions were translated into Finnish and Swedish for the present study. The test used in the present study comprised the Scale subtest (Peretz et al.,
<xref ref-type="bibr" rid="B77">2003</xref>
), the Off-beat subtest (Peretz et al.,
<xref ref-type="bibr" rid="B79">2008</xref>
), and the Out-of-key subtest (Peretz et al.,
<xref ref-type="bibr" rid="B79">2008</xref>
) (see
<ext-link ext-link-type="uri" xlink:href="http://www.brams.umontreal.ca/amusia-demo/">http://www.brams.umontreal.ca/amusia-demo/</ext-link>
for a demo in English or French). The test included 30 melodies composed for the MBEA (Peretz et al.,
<xref ref-type="bibr" rid="B77">2003</xref>
) following Western tonal-harmonic conventions. The Scale subtest comprised piano tones, while the Off-beat and Out-of-key subtests used 10 different timbres (e.g., piano, saxophone, and clarinet). In the Scale subtest, the participants were presented with 31 trials, including one “catch” trial that was not included in the statistical analysis. Each trial was a pair of melodies, and the task was to judge whether the melodies were similar or different. In half (15) of the trials the melodies were the same, and in the other half (15) the second melody had an out-of-scale tone (on average, 4.3 semitones apart from the original pitch). In the Off-beat and Out-of-key subtests, the subjects were presented with 24 trials, of which 12 were normal melodies and 12 were made incongruous by a time delay (Off-beat) or an out-of-scale tone (Out-of-key) on the first downbeat of the third bar of the four-bar melody. In the Off-beat subtest the task was to judge whether the melody contained an unusual delay; the 12 incongruous trials had a silence of 5/7 of the beat duration (i.e., 357 ms) prior to a critical tone, disrupting the local meter. In the Out-of-key subtest the task was to judge whether the melody contained an out-of-tune tone; in the 12 incongruous trials the melody had a 500 ms long tone that was outside the key of the melody, sounding like a “wrong note.” The subtests were always presented in the same order (Scale, Off-beat, Out-of-key), and each subtest began with 2–4 examples of congruous and incongruous trials. The volume level was adjusted individually to a level that was clearly audible to the subject. At the end, the participants filled out an on-line questionnaire about their history and musical background (see Appendix for the questionnaire in English; the participants filled it out in Finnish or Swedish). The whole test was completed in 20–30 min.</p>
</sec>
<sec>
<title>Speech prosody (word stress) perception</title>
<p>Speech prosody perception was assessed with a listening experiment that measures the identification of word stress as it is produced to distinguish a compound word from a phrase of the same two separate words. The task examines the perception of word and syllabic stress as it is used to signal either word-level stress or moderate sentence stress, and it is designed so that all prosodic cues, namely f0, intensity, and duration, play a role (O'Halpin,
<xref ref-type="bibr" rid="B66">2010</xref>
). The word stress examined in this study differs from so-called lexical stress, where the stress pattern differentiates the meanings of two phonetically identical words, as well as from sentence-level stress, where a word is accented or emphasized to contrast it with other words in the utterance. The task is designed to measure the perception of syllabic stress at the level that aids in separating words from the surrounding syllables.</p>
<p>The test is originally based on work by Vogel and Raimy (
<xref ref-type="bibr" rid="B113">2002</xref>
) and O'Halpin (
<xref ref-type="bibr" rid="B66">2010</xref>
) and it has been adapted into Finnish by Torppa et al. (
<xref ref-type="bibr" rid="B110">2010</xref>
). Finnish has a fixed stress on the first syllable of a word; thus, a compound word has only one stressed syllable that is accented in an utterance context, as opposed to the two accents in a similar two-word phrase. Typically, the first syllable of the second word of a compound has a secondary stress that differentiates it from a totally unstressed syllable. The materials in the test were spoken with a so-called broad focus, where (in the case of a phrase) neither of the two words stood out as more emphatic (as would be the case in a so-called narrow or contrastive focus). The stimuli were analyzed acoustically using Praat (Boersma,
<xref ref-type="bibr" rid="B7">2001</xref>
) with respect to the (potentially) stressed syllables. We measured the raw f0 maxima and intensity maxima as well as the syllable durations, and calculated the differences between the values of the two syllables in each utterance; the results are summarized in Table
<xref ref-type="table" rid="T2">2</xref>
. Table
<xref ref-type="table" rid="T2">2</xref>
shows the differences in f0, intensity, and duration between the first syllables of the first and second words of the compound/phrase utterances, together with the results of paired
<italic>t</italic>
-tests on the significance of the differences. As shown in Table
<xref ref-type="table" rid="T2">2</xref>
, the duration differences did not reach statistical significance; however, the difference between the compound and phrase utterances in the duration of the vowel (nucleus) in the second syllable of the second word was significant,
<italic>t</italic>
<sub>(28)</sub>
= −2.45,
<italic>p</italic>
= 0.02. Thus, the compound words were found to differ from the phrases with respect to all prosodic parameters (f0, duration, and intensity), showing that the difference was not produced by any single prosodic parameter alone. An example of an utterance pair (produced by a 10-year-old female child) is shown in Figure
<xref ref-type="fig" rid="F1">1</xref>
. Each panel shows the spectrogram, f0 track, and intensity contour of the utterance. The extent of the words in question and the orthographic text are also shown.</p>
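<p>For illustration, the cue differences reported in Table 2 follow from simple arithmetic on the extracted peak values, with the f0 difference expressed in semitones as 12 × log2 of the frequency ratio. The following is a minimal sketch in Python (the actual measurements were made in Praat, and the example peak values are hypothetical):</p>
<preformat>
# Cue differences between the first and the second stressed syllable of an
# utterance (first minus second, as in Table 2). The peak values below are
# hypothetical; the actual measurements were extracted with Praat.
import math

def stress_cue_differences(f0_1_hz, f0_2_hz, int_1_db, int_2_db,
                           dur_1_ms, dur_2_ms):
    f0_diff_semitones = 12.0 * math.log2(f0_1_hz / f0_2_hz)  # pitch ratio in semitones
    intensity_diff_db = int_1_db - int_2_db                  # peak intensity difference
    duration_diff_ms = dur_1_ms - dur_2_ms                   # vowel duration difference
    return f0_diff_semitones, intensity_diff_db, duration_diff_ms

# e.g., a compound-like utterance with a clearly dominant first syllable
print(stress_cue_differences(280.0, 165.0, 78.0, 69.0, 95.0, 93.0))
# -> approx. (9.2 semitones, 9.0 dB, 2.0 ms), within the range of Table 2
</preformat>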
<table-wrap id="T2" position="float">
<label>Table 2</label>
<caption>
<p>
<bold>The differences between the cues for word stress in first and second stressed syllables in compound/phrase utterances</bold>
.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" rowspan="1" colspan="1">
<bold>Stimulus</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>
<italic>N</italic>
</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>Mean duration difference in ms (sd)</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>Mean f0 difference in semitones (sd)</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>Mean intensity difference in dB (sd)</bold>
</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">Compound</td>
<td align="left" rowspan="1" colspan="1">14</td>
<td align="left" rowspan="1" colspan="1">2.0 (69.2)</td>
<td align="left" rowspan="1" colspan="1">9.2 (2.4)</td>
<td align="left" rowspan="1" colspan="1">8.6 (5.7)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Phrase</td>
<td align="left" rowspan="1" colspan="1">16</td>
<td align="left" rowspan="1" colspan="1">−33.3 (98.6)</td>
<td align="left" rowspan="1" colspan="1">4.8 (2.7)</td>
<td align="left" rowspan="1" colspan="1">1.1 (2.6)</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">
<bold>Duration</bold>
</td>
<td align="left" rowspan="1" colspan="1">
<bold>f0</bold>
</td>
<td align="left" rowspan="1" colspan="1">
<bold>Intensity</bold>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">T-test between compounds vs. phrases</td>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">
<italic>t</italic>
<sub>(28)</sub>
= 1.11,
<italic>p</italic>
= 0.27</td>
<td align="left" rowspan="1" colspan="1">
<italic>t</italic>
<sub>(27)</sub>
= 4.61,
<italic>p</italic>
< 0.001</td>
<td align="left" rowspan="1" colspan="1">
<italic>t</italic>
<sub>(28)</sub>
= 2.93,
<italic>p</italic>
= 0.007</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p>The mean differences were calculated as follows. (a) Duration: the duration of the first-syllable vowel (nucleus) of the first part of the compound minus the duration of the first-syllable vowel (nucleus) in the second part of the compound or phrase, i.e., “kIssankEllo” or “kIssan kEllo”, respectively. (b) f0 and intensity: the peak value of the f0/intensity in the first part of the compound minus the peak value of the f0/intensity in the second part of the compound/phrase. The f0 differences were calculated in semitones. One f0 value was missing due to creaky voice.</p>
</table-wrap-foot>
</table-wrap>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption>
<p>
<bold>Example of the spectrum of a compound word (above; audio file
<xref ref-type="supplementary-material" rid="SM3">1</xref>
) and a two-word phrase (audio file
<xref ref-type="supplementary-material" rid="SM4">2</xref>
) with f0 (black line) and intensity (red line) contours.</bold>
The scale is 9–400 Hz for f0 and 0–100 dB for intensity.</p>
</caption>
<graphic xlink:href="fpsyg-04-00566-g0001"></graphic>
</fig>
<p>In each trial, the participants heard an utterance produced with a stress pattern that denoted it either as a compound (e.g., “näytä KISsankello” [′kis:an
<sub></sub>
kel:o] meaning “show the harebell flower” or literally “cat's-bell” in English) or as a phrase composed of the same two words (e.g., “näytä KISsan KELlo” [′kis:an ′kel:o], meaning “show the cat's bell” in English). A similar pair of utterances in English would be, for example, “BLUEbell” and “BLUE BELL”; [′blu
<sub></sub>
bεl] and [′blu ′bεl], respectively. As the participants heard the utterance (supplementary audio files
<xref ref-type="supplementary-material" rid="SM3">1</xref>
and
<xref ref-type="supplementary-material" rid="SM4">2</xref>
), they were presented with two pictures on the screen (see Figure
<xref ref-type="fig" rid="F2">2</xref>
) and the task was to choose which picture matched with the utterance they heard by pressing a button. There were six different pairs of utterances (a compound word and a phrase). The utterances were spoken by four different people: an adult male, an adult female, a female child of 10 years and a female child of 7 years. The original Finnish test version used by Torppa et al. (
<xref ref-type="bibr" rid="B110">2010</xref>
) had 48 trials. For the present study, a shorter 30-trial version was made by excluding 18 trials, of which 2 had been found too difficult and 16 too easy for the nine healthy adult participants in a pilot study. The duration of the test was ca. 4–5 min. The test was carried out using Presentation® software (
<ext-link ext-link-type="uri" xlink:href="http://www.neurobs.com">www.neurobs.com</ext-link>
).</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption>
<p>
<bold>Example of the word stress task.</bold>
The left picture represents a compound word “kissankello” and the right picture a phrase “kissan kello.”</p>
</caption>
<graphic xlink:href="fpsyg-04-00566-g0002"></graphic>
</fig>
</sec>
<sec>
<title>Visuospatial perception</title>
<p>Visuospatial perception was assessed by a test that was developed for this study as a visuospatial analogy for the MBEA Scale subtest. The stimuli were created and the test was conducted using Matlab and Psychophysics Toolbox extension (Brainard,
<xref ref-type="bibr" rid="B8">1997</xref>
). In each trial the participants were presented with two series of Gabor patches (contrast 75%; spatial frequency ca. 0.8 c/°; size approximately 2°) proceeding from left to right. There was a 500 ms pause between the two series. A single Gabor was presented at a time (there was a 50 ms pause between successive Gabors, and the duration of each Gabor varied), and the Gabors formed a continuous path. The path was formed by simultaneously changing the position and the orientation of each Gabor relative to the preceding Gabor. The orientation of the Gabor followed the direction of the path. On half of the trials the two Gabor series were identical; on the other half the second path was changed (Figure
<xref ref-type="fig" rid="F3">3</xref>
, Supplementary movie files
<xref ref-type="supplementary-material" rid="SM1">1</xref>
and
<xref ref-type="supplementary-material" rid="SM2">2</xref>
). In change trials the second series had one Gabor that deviated from the expected path (Figure
<xref ref-type="fig" rid="F3">3B</xref>
, supplementary movie file
<xref ref-type="supplementary-material" rid="SM2">2</xref>
). The participants' task was to judge whether the two paths were similar or different. The paths were constructed as analogous to the melodies in the MBEA Scale subtest: each Gabor was analogous to a tone in the melody, and each deviating Gabor was analogous to an out-of-scale tone. Every semitone difference in the melody was equivalent to a 12° difference in Gabor orientation and a corresponding change in Gabor location, except for the deviant Gabor, which had a 22° location change per semitone. The orientation change, 12°, was within the association field of contour integration (Field et al.,
<xref ref-type="bibr" rid="B22">1993</xref>
). Like the MBEA Scale test, the test began with two example trials: one with two similar series and one with a difference in the second series. The experiment had 30 trials, of which 15 contained two similar series and 15 contained a deviant figure in the second series. In a pilot study with 11 participants, the type (location, orientation, or both) and the size (4–22°) of the deviant Gabor change were varied. From these, the deviant change (location, 22°) was chosen to match the level of difficulty of the MBEA Scale test (Peretz et al.,
<xref ref-type="bibr" rid="B77">2003</xref>
; norms updated in 2008). The duration of the test was ca. 10 min.</p>
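<p>To make the melody-to-path mapping concrete, the following sketch (in Python, for illustration only; the actual stimuli were generated in Matlab with the Psychophysics Toolbox, and the exact position-update geometry is not specified in the text, so the fixed-step update below is an assumption) derives Gabor orientations and locations from a sequence of melodic intervals under the 12°-per-semitone rule, with a 22°-per-semitone location change for the deviant element:</p>
<preformat>
# Illustrative sketch of the melody-to-path mapping: each tone becomes one
# Gabor, and each semitone of melodic interval becomes a 12 deg change in
# path direction (with a 22 deg per semitone location change for the chosen
# deviant). The fixed step length is an assumed simplification.
import math

DEG_PER_SEMITONE = 12.0
DEVIANT_DEG_PER_SEMITONE = 22.0
STEP = 1.0  # spacing between successive Gabors, arbitrary units

def melody_to_path(intervals_semitones, deviant_index=None):
    """Return a list of (x, y, orientation_deg), one entry per Gabor."""
    x, y, ori = 0.0, 0.0, 0.0
    path = [(x, y, ori)]
    for i, interval in enumerate(intervals_semitones, start=1):
        ori += DEG_PER_SEMITONE * interval  # orientation follows the path
        step_angle = ori
        if i == deviant_index:
            # the deviant used in the test was a location-only change
            step_angle += (DEVIANT_DEG_PER_SEMITONE - DEG_PER_SEMITONE) * interval
        x += STEP * math.cos(math.radians(step_angle))
        y += STEP * math.sin(math.radians(step_angle))
        path.append((x, y, ori))
    return path

# The same melodic contour with and without a deviant third interval
standard = melody_to_path([2, -1, 3, -2])
changed = melody_to_path([2, -1, 3, -2], deviant_index=3)
</preformat>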
<fig id="F3" position="float">
<label>Figure 3</label>
<caption>
<p>
<bold>Example of the visuospatial task with the original sequence of Gabor figures (A) and a sequence with a change in the location and orientation of one of the Gabor figures (B).</bold>
Note that in the actual test, only a single Gabor was presented at a time.</p>
</caption>
<graphic xlink:href="fpsyg-04-00566-g0003"></graphic>
</fig>
</sec>
<sec>
<title>Pitch perception</title>
<p>The pitch perception test was a shortened adaptation of the test used by Hyde and Peretz (
<xref ref-type="bibr" rid="B31">2004</xref>
) and it was carried out using Presentation® software (
<ext-link ext-link-type="uri" xlink:href="http://www.neurobs.com">www.neurobs.com</ext-link>
). In every trial the subjects heard a sequence of five successive tones, and their task was to judge whether all five tones were similar or whether there was a change in pitch. The duration of a tone was always 100 ms and the intertone interval (ITI; onset to onset) was 350 ms. In the standard sequence, all tones were played at the pitch level of C6 (1047 Hz); in the sequences that contained a change, the fourth tone was altered. The altered tones were 1/16, 1/8, 1/4, 1/2, or 1 semitone (3, 7, 15, 30, or 62 Hz) upward or downward from C6. The different change sizes, as well as upward and downward changes, were presented equally often. The order of the trials was randomized. The test contained 80 trials: 40 standard sequences and 40 sequences with the fourth tone altered in pitch. Three example trials are presented in the Supplementary files: a standard trial with no change (supplementary audio file
<xref ref-type="supplementary-material" rid="SM5">3</xref>
) and two change trials (1 semitone upwards; audio file
<xref ref-type="supplementary-material" rid="SM6">4</xref>
and downwards; audio file
<xref ref-type="supplementary-material" rid="SM7">5</xref>
). The test was substantially shorter than the test by Hyde and Peretz (
<xref ref-type="bibr" rid="B31">2004</xref>
). It also contained smaller pitch changes, because the difficulty level was set for participants who, unlike in the original study, had not been recruited on the basis of music perception problems. The duration of the test was ca. 3–4 min.</p>
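<p>The reported frequency offsets follow from the equal-tempered semitone, which corresponds to a frequency ratio of 2<sup>1/12</sup>. A short sketch of the underlying arithmetic (the rounding to whole hertz is ours):</p>
<preformat>
# Frequency offsets from C6 (1047 Hz) for fractional semitone changes;
# one equal-tempered semitone is a frequency ratio of 2 ** (1 / 12).
BASE_HZ = 1047.0

for label, semitones in [("1/16", 1/16), ("1/8", 1/8), ("1/4", 1/4),
                         ("1/2", 1/2), ("1", 1.0)]:
    delta_hz = BASE_HZ * (2 ** (semitones / 12) - 1)
    print(f"{label} semitone: +{delta_hz:.1f} Hz")
# Prints 3.8, 7.6, 15.2, 30.7, and 62.3 Hz, i.e., the 3, 7, 15, 30, and
# 62 Hz steps reported for the test (applied upward or downward from C6).
</preformat>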
</sec>
<sec>
<title>Auditory working memory</title>
<p>Auditory working memory and attention span were measured with the Digit Span subtest of the Wechsler Adult Intelligence Scale III (WAIS-III; Wechsler,
<xref ref-type="bibr" rid="B115">1997</xref>
). In the first part of the test, the participants hear a sequence of numbers read by the researcher, and their task is to repeat the numbers in the same order. In the second part, the task is to repeat the number sequence in reverse order. The test proceeds from the shortest sequences (two numbers) to the longer ones (max. nine numbers in the first and eight numbers in the second part of the test). Every sequence that the participant repeats correctly is scored as one point, and the maximum total score is 30. The duration of the test was ca. 5 min.</p>
</sec>
<sec>
<title>Questionnaires</title>
<p>The subjects filled out two questionnaires: a computerized questionnaire after the music perception test (same as in Peretz et al.,
<xref ref-type="bibr" rid="B79">2008</xref>
) as well as a paper questionnaire at the end of the assessment session. In the questionnaires, the participants were asked about their musical and general educational background; cognitive problems; and musical abilities, hobbies, and preferences (see Appendix: Data Sheet 1). The last part of the paper questionnaire was the Brief Music in Mood Regulation scale (Saarikallio,
<xref ref-type="bibr" rid="B90">2012</xref>
). The links between music perception, different kinds of musical hobbies, and music in mood regulation will be presented in more detail elsewhere; in the present study, only questions regarding first language, cognitive problems, years of musical and general education, and education level were analyzed.</p>
</sec>
</sec>
<sec>
<title>Statistical analysis</title>
<p>The associations between the MBEA scores and background variables were first examined using
<italic>t</italic>
-tests, ANOVAs, and Pearson correlation coefficients, depending on the variable type. The variables that had significant associations with the music perception scores were then included in further analysis. Pitch perception and auditory working memory were also regarded as possible confounding variables and were controlled for when examining the associations of word stress and visuospatial perception with music perception. Linear step-wise regression analyses were performed to see how much of the variation in the music perception total score and subtest scores the different variables could explain. All statistical analyses were performed using PASW Statistics 18.</p>
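<p>For illustration, the step-wise entry of predictor blocks can be sketched as follows (a minimal sketch in Python with the statsmodels package, not the PASW analysis actually used; all variable names are hypothetical):</p>
<preformat>
# Sketch of the hierarchical regression strategy: predictor blocks are
# entered cumulatively and the R-squared of each model is inspected.
# `df` is assumed to be a pandas DataFrame with one row per participant;
# all column names here are hypothetical.
import statsmodels.api as sm

BLOCKS = [
    ["age_over_50", "music_education"],              # background variables
    ["pitch_threshold", "auditory_working_memory"],  # control tests
    ["visuospatial_score"],                          # visuospatial perception
    ["word_stress_score"],                           # speech prosody perception
]

def hierarchical_r2(df, outcome, blocks):
    predictors, models = [], []
    for block in blocks:
        predictors = predictors + block
        fit = sm.OLS(df[outcome], sm.add_constant(df[predictors])).fit()
        models.append((list(predictors), fit.rsquared))
    return models

# Example: hierarchical_r2(df, "music_total", BLOCKS) returns the four
# cumulative models with their coefficients of determination.
</preformat>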
</sec>
</sec>
<sec sec-type="results" id="s3">
<title>Results</title>
<sec>
<title>Descriptive statistics of the MBEA and other tests</title>
<p>Table
<xref ref-type="table" rid="T3">3</xref>
presents the ranges, means, and standard deviations of the music perception scores. Total music perception scores were calculated as the mean score across the three subtests. Discrimination (d′) and response bias [ln(β)] indexes for the subtests were also calculated. The analysis of d′ yielded associations with the other variables highly similar to those of the proportion of correct answers (hits + correct rejections), and hence only the latter is reported. There was no significant correlation between response bias and the proportion of correct answers in the music perception total score [
<italic>r</italic>
<sub>(59)</sub>
= 0.18,
<italic>p</italic>
= 0.17]. There was a small response bias toward “congruous” responses in the Off-beat [
<italic>t</italic>
<sub>(60)</sub>
= −15.23,
<italic>p</italic>
< 0.001] and Out-of-key subtests [
<italic>t</italic>
<sub>(60)</sub>
= −5.07,
<italic>p</italic>
< 0.001], and in the total score [
<italic>t</italic>
<sub>(60)</sub>
= −4.68,
<italic>p</italic>
< 0.001], but not in Scale subtest [
<italic>t</italic>
<sub>(60)</sub>
= 1.66,
<italic>p</italic>
= 0.10]. Based on visual examination, the subtest scores and the total music perception scores were approximately normally distributed (Figure
<xref ref-type="fig" rid="F4">4</xref>
). Figure
<xref ref-type="fig" rid="F5">5</xref>
shows the associations between the three music perception subtests. The Scale and the Out-of-key subtests were significantly correlated [
<italic>r</italic>
<sub>(59)</sub>
= 0.51,
<italic>p</italic>
< 0.001], whereas the Off-beat subtest did not correlate significantly with the other subtests [correlation with Scale
<italic>r</italic>
<sub>(59)</sub>
= 0.13,
<italic>p</italic>
= 0.33 and Out-of-key
<italic>r</italic>
<sub>(59)</sub>
= 0.18,
<italic>p</italic>
= 0.16].</p>
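<p>For reference, the discrimination and bias indexes can be obtained from the hit and false-alarm counts with the standard signal detection formulas d′ = z(H) − z(F) and ln(β) = [z(F)<sup>2</sup> − z(H)<sup>2</sup>]/2. A minimal sketch (the trial counts in the example are hypothetical, and the correction keeping the rates away from 0 and 1 is one common choice, not necessarily the one used here):</p>
<preformat>
# Signal detection indexes for a same/different subtest:
#   d' = z(hit rate) - z(false alarm rate)
#   ln(beta) = (z(false alarm rate)**2 - z(hit rate)**2) / 2
from scipy.stats import norm

def sdt_indexes(hits, misses, false_alarms, correct_rejections):
    # Keep the rates away from 0 and 1 so that z() stays finite
    # (one common correction: add 0.5 to counts, 1 to totals).
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z_h, z_f = norm.ppf(hit_rate), norm.ppf(fa_rate)
    return z_h - z_f, (z_f ** 2 - z_h ** 2) / 2

# Hypothetical tallies for one Scale subtest run (15 incongruous and
# 15 congruous trials)
d_prime, ln_beta = sdt_indexes(hits=13, misses=2,
                               false_alarms=3, correct_rejections=12)
print(round(d_prime, 2), round(ln_beta, 2))
</preformat>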
<table-wrap id="T3" position="float">
<label>Table 3</label>
<caption>
<p>
<bold>Basic descriptive statistics of the music perception test</bold>
.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th rowspan="1" colspan="1"></th>
<th align="left" rowspan="1" colspan="1">
<bold>Range</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>Mean</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>Standard deviation</bold>
</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">Scale</td>
<td align="left" rowspan="1" colspan="1">19–30 (63.3–100%)</td>
<td align="left" rowspan="1" colspan="1">25.0 (83.4%)</td>
<td align="left" rowspan="1" colspan="1">3.2 (10.5%)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Off-beat</td>
<td align="left" rowspan="1" colspan="1">16–23 (66.7–95.8%)</td>
<td align="left" rowspan="1" colspan="1">19.8 (82.4%)</td>
<td align="left" rowspan="1" colspan="1">2.4 (10.1%)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Out-of-key</td>
<td align="left" rowspan="1" colspan="1">15–24 (63.0–100%)</td>
<td align="left" rowspan="1" colspan="1">20.3 (84.6%)</td>
<td align="left" rowspan="1" colspan="1">3.3 (13.7%)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Total</td>
<td align="left" rowspan="1" colspan="1">55–74 (70.5–94.9%)</td>
<td align="left" rowspan="1" colspan="1">65.1 (83.5%)</td>
<td align="left" rowspan="1" colspan="1">6.5 (8.3%)</td>
</tr>
</tbody>
</table>
</table-wrap>
<fig id="F4" position="float">
<label>Figure 4</label>
<caption>
<p>
<bold>Distributions of the music perception subtest and total scores</bold>
.</p>
</caption>
<graphic xlink:href="fpsyg-04-00566-g0004"></graphic>
</fig>
<fig id="F5" position="float">
<label>Figure 5</label>
<caption>
<p>
<bold>Scatter plots indicating the relationships between the three music perception subtests</bold>
.</p>
</caption>
<graphic xlink:href="fpsyg-04-00566-g0005"></graphic>
</fig>
<p>Table
<xref ref-type="table" rid="T4">4</xref>
shows the ranges, means, and standard deviations of the other tests. Based on visual examination, the scores were approximately normally distributed in all tests. The average performance levels in the word stress (83%) and visuospatial perception (79%) tasks were almost identical to the average performance level in the music perception test (84%). The performance in the auditory working memory task was close to the average level in the Finnish population (Wechsler,
<xref ref-type="bibr" rid="B116">2005</xref>
). In the pitch perception task, the largest changes (62 Hz; one semitone) were noticed by all of the participants with 100% accuracy, while the smallest changes (3 and 7 Hz) were not noticed at all by some of the participants. The pitch discrimination threshold was calculated as the size of the pitch change that the participant detected with 75% probability.</p>
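<p>The exact threshold estimation procedure is not detailed in the text; as one illustrative possibility, the 75% point can be obtained by linear interpolation between the tested change sizes (a minimal sketch in Python; the example rates are the group means from Table 4, and the threshold of the group-average curve need not equal the 9.9 Hz mean of the individual thresholds):</p>
<preformat>
# Estimate the pitch change detected with 75% probability by linear
# interpolation between tested change sizes (illustrative only; the study
# does not specify its exact estimation procedure).
def threshold_75(change_sizes_hz, hit_rates, target=0.75):
    points = list(zip(change_sizes_hz, hit_rates))
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if y0 < target <= y1:  # first crossing of the target rate
            return x0 + (target - y0) * (x1 - x0) / (y1 - y0)
    return None  # target rate never reached within the tested range

# Group-mean detection rates for the 3, 7, 15, 30, and 62 Hz changes (Table 4)
print(round(threshold_75([3, 7, 15, 30, 62],
                         [0.34, 0.57, 0.89, 0.98, 1.00]), 1))
# -> 11.5 (Hz) for the group-average rates
</preformat>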
<table-wrap id="T4" position="float">
<label>Table 4</label>
<caption>
<p>
<bold>Other tests of perception and memory: basic descriptive statistics</bold>
.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th rowspan="1" colspan="1"></th>
<th align="left" rowspan="1" colspan="1">
<bold>Range</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>Mean</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>Standard deviation</bold>
</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">Speech prosody perception</td>
<td align="left" rowspan="1" colspan="1">19–30 (63–100%)</td>
<td align="left" rowspan="1" colspan="1">25.0 (83%)</td>
<td align="left" rowspan="1" colspan="1">2.7 (9%)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Visuospatial perception</td>
<td align="left" rowspan="1" colspan="1">17–30 (57–100%)</td>
<td align="left" rowspan="1" colspan="1">23.8 (79%)</td>
<td align="left" rowspan="1" colspan="1">2.9 (10%)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Auditory working memory</td>
<td align="left" rowspan="1" colspan="1">10–22 (33–73%)</td>
<td align="left" rowspan="1" colspan="1">15.8 (53%)</td>
<td align="left" rowspan="1" colspan="1">3.0 (10%)</td>
</tr>
<tr>
<td align="left" colspan="4" rowspan="1">Pitch perception</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">    No change trials</td>
<td align="left" rowspan="1" colspan="1">13–40 (33–100%)</td>
<td align="left" rowspan="1" colspan="1">32.4 (81%)</td>
<td align="left" rowspan="1" colspan="1">6.4 (16%)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">    Change trials</td>
<td align="left" rowspan="1" colspan="1">22–38 (55–95%)</td>
<td align="left" rowspan="1" colspan="1">30.3 (76%)</td>
<td align="left" rowspan="1" colspan="1">4.4 (11%)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">      3 Hz change (1/16 semitone)</td>
<td align="left" rowspan="1" colspan="1">0–7 (0–88%)</td>
<td align="left" rowspan="1" colspan="1">2.7 (34%)</td>
<td align="left" rowspan="1" colspan="1">2.1 (26%)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">      7 Hz change (1/8 semitone)</td>
<td align="left" rowspan="1" colspan="1">0–8 (0–100%)</td>
<td align="left" rowspan="1" colspan="1">4.5 (57%)</td>
<td align="left" rowspan="1" colspan="1">2.0 (25%)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">      15 Hz change (1/4 semitone)</td>
<td align="left" rowspan="1" colspan="1">4–8 (50–100%)</td>
<td align="left" rowspan="1" colspan="1">7.1 (89%)</td>
<td align="left" rowspan="1" colspan="1">0.9 (12%)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">      30 Hz change (1/2 semitone)</td>
<td align="left" rowspan="1" colspan="1">6–8 (75–100%)</td>
<td align="left" rowspan="1" colspan="1">7.8 (98%)</td>
<td align="left" rowspan="1" colspan="1">0.4 (5%)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">      62 Hz change (1 semitone)</td>
<td align="left" rowspan="1" colspan="1">8 (100%)</td>
<td align="left" rowspan="1" colspan="1">8 (100%)</td>
<td align="left" rowspan="1" colspan="1">0 (0%)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">    Pitch discrimination threshold (Hz)</td>
<td align="left" rowspan="1" colspan="1">3.0–26.1</td>
<td align="left" rowspan="1" colspan="1">9.9</td>
<td align="left" rowspan="1" colspan="1">4.7</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec>
<title>Associations between the music perception test and demographical and musical background variables</title>
<p>Gender, first language, self-reported cognitive problems, and self-reported suspected or mild hearing problems were not significantly associated with the music perception total score or any of the subtests (
<italic>p</italic>
> 0.05 in all
<italic>t</italic>
-tests). First language was also not significantly associated with the word stress perception,
<italic>t</italic>
<sub>(59)</sub>
= −1.08,
<italic>p</italic>
= 0.29. Suspected or mild hearing problems were not significantly associated with either the pitch discrimination threshold [
<italic>t</italic>
<sub>(59)</sub>
= 0.52,
<italic>p</italic>
= 0.61] or word stress perception [
<italic>t</italic>
<sub>(59)</sub>
= 0.55,
<italic>p</italic>
= 0.59]. The associations with the music perception total score are shown in Table
<xref ref-type="table" rid="T5">5</xref>
. However, owing to the relatively small number of self-reported cognitive problems, possible associations cannot be reliably ruled out for most problems.</p>
<table-wrap id="T5" position="float">
<label>Table 5</label>
<caption>
<p>
<bold>Background variables' associations with the music perception total score</bold>
.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" rowspan="1" colspan="1">
<bold>Background variable</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>
<italic>N</italic>
</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>Mean music perception scores (%)</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>Significance of the difference</bold>
</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">Gender: female/male</td>
<td align="left" rowspan="1" colspan="1">40/21</td>
<td align="left" rowspan="1" colspan="1">84/82</td>
<td align="left" rowspan="1" colspan="1">
<italic>t</italic>
<sub>(59)</sub>
= 0.96,
<italic>p</italic>
= 0.34</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">First language: Finnish/Swedish</td>
<td align="left" rowspan="1" colspan="1">58/3</td>
<td align="left" rowspan="1" colspan="1">84/81</td>
<td align="left" rowspan="1" colspan="1">
<italic>t</italic>
<sub>(59)</sub>
= 0.73,
<italic>p</italic>
= 0.47</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Self-reported cognitive problems</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">    Problems in reading: yes/no</td>
<td align="left" rowspan="1" colspan="1">5/53</td>
<td align="left" rowspan="1" colspan="1">83/84</td>
<td align="left" rowspan="1" colspan="1">
<italic>t</italic>
<sub>(56)</sub>
= −0.16,
<italic>p</italic>
= 0.87</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">    Attention problems: yes/no</td>
<td align="left" rowspan="1" colspan="1">5/53</td>
<td align="left" rowspan="1" colspan="1">83/84</td>
<td align="left" rowspan="1" colspan="1">
<italic>t</italic>
<sub>(56)</sub>
= −0.30,
<italic>p</italic>
= 0.76</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">    Problems in speech: yes/no</td>
<td align="left" rowspan="1" colspan="1">3/55</td>
<td align="left" rowspan="1" colspan="1">79/84</td>
<td align="left" rowspan="1" colspan="1">
<italic>t</italic>
<sub>(57)</sub>
= −1.40,
<italic>p</italic>
= 0.17</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">    Problems in mathematics: yes/no</td>
<td align="left" rowspan="1" colspan="1">12/45</td>
<td align="left" rowspan="1" colspan="1">83/84</td>
<td align="left" rowspan="1" colspan="1">
<italic>t</italic>
<sub>(55)</sub>
= −0.69,
<italic>p</italic>
= 0.49</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">    Memory problems: yes/no</td>
<td align="left" rowspan="1" colspan="1">6/51</td>
<td align="left" rowspan="1" colspan="1">85/84</td>
<td align="left" rowspan="1" colspan="1">
<italic>t</italic>
<sub>(55)</sub>
= 0.43,
<italic>p</italic>
= 0.67</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">    Problems in visuospatial orientation: yes/no</td>
<td align="left" rowspan="1" colspan="1">5/52</td>
<td align="left" rowspan="1" colspan="1">82/84</td>
<td align="left" rowspan="1" colspan="1">
<italic>t</italic>
<sub>(55)</sub>
= −0.79,
<italic>p</italic>
= 0.43</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Suspected or mild hearing problems: yes/no</td>
<td align="left" rowspan="1" colspan="1">12/49</td>
<td align="left" rowspan="1" colspan="1">81/84</td>
<td align="left" rowspan="1" colspan="1">
<italic>t</italic>
<sub>(60)</sub>
= −1.49,
<italic>p</italic>
= 0.14</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>Age was not linearly correlated with the music perception total score [
<italic>r</italic>
<sub>(59)</sub>
= 0.03,
<italic>p</italic>
= 0.79], but when the age groups were compared to each other using ANOVA, a significant association was found [
<italic>F</italic>
<sub>(3, 57)</sub>
= 6.21,
<italic>p</italic>
= 0.001]. The music perception score appeared to rise across age groups up to 40–49 years, whereas the 50–59 years group had the lowest scores.
<italic>Post hoc</italic>
test (Tukey HSD) showed that the age group 40–49 years had significantly higher music perception scores than the groups 19–29 years (
<italic>p</italic>
= 0.004) and 50–59 years (
<italic>p</italic>
= 0.002). The average music perception scores of the age groups are shown in Table
<xref ref-type="table" rid="T6">6</xref>
.</p>
<table-wrap id="T6" position="float">
<label>Table 6</label>
<caption>
<p>
<bold>Average music perception scores of the age groups</bold>
.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" rowspan="1" colspan="1">
<bold>Age group (years)</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>
<italic>N</italic>
</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>Music perception score mean (sd) (%)</bold>
</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">19–29</td>
<td align="left" rowspan="1" colspan="1">17</td>
<td align="left" rowspan="1" colspan="1">81.2 (6.2)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">30–39</td>
<td align="left" rowspan="1" colspan="1">14</td>
<td align="left" rowspan="1" colspan="1">85.0 (6.8)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">40–49</td>
<td align="left" rowspan="1" colspan="1">12</td>
<td align="left" rowspan="1" colspan="1">89.0 (3.2)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">50–59</td>
<td align="left" rowspan="1" colspan="1">18</td>
<td align="left" rowspan="1" colspan="1">80.7 (6.5)</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>Level of education did not differentiate the participants regarding their music perception scores [
<italic>F</italic>
<sub>(3, 57)</sub>
= 1.81,
<italic>p</italic>
= 0.16], nor were years of education significantly correlated with music perception [
<italic>r</italic>
<sub>(56)</sub>
= 0.10,
<italic>p</italic>
= 0.46]. Participants who had received some form of music education in addition to the compulsory music lessons at school (
<italic>N</italic>
= 42) had higher music perception scores than those who had received only the compulsory lessons (
<italic>N</italic>
= 19) [
<italic>t</italic>
<sub>(59)</sub>
= 2.75,
<italic>p</italic>
= 0.008]. The difference was 4.7% on average. The correlation between years of music education (0–19) and the total music perception score was significant [
<italic>r</italic>
<sub>(51)</sub>
= 0.32,
<italic>p</italic>
= 0.019].</p>
</sec>
<sec>
<title>Associations between music perception, word stress perception and visuospatial perception</title>
<p>Table
<xref ref-type="table" rid="T7">7</xref>
shows the correlations between the possible confounding variables (pitch perception, auditory working memory, music education, and general education) and word stress perception, visuospatial perception, and music perception.</p>
<table-wrap id="T7" position="float">
<label>Table 7</label>
<caption>
<p>
<bold>Correlations between speech prosody and visuospatial perception, music perception and possible confounding variables</bold>
.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th rowspan="1" colspan="1"></th>
<th align="left" rowspan="1" colspan="1">
<bold>Word stress</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>Visuospatial</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>Music perception (total)</bold>
</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">Pitch perception: change trials (
<italic>df</italic>
= 59)</td>
<td align="left" rowspan="1" colspan="1">0.01</td>
<td align="left" rowspan="1" colspan="1">0.05</td>
<td align="left" rowspan="1" colspan="1">0.31
<xref ref-type="table-fn" rid="TN2">
<sup>*</sup>
</xref>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">    No change trials (
<italic>df</italic>
= 59)</td>
<td align="left" rowspan="1" colspan="1">−0.06</td>
<td align="left" rowspan="1" colspan="1">0.07</td>
<td align="left" rowspan="1" colspan="1">−0.15</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">    All trials (
<italic>df</italic>
= 59)</td>
<td align="left" rowspan="1" colspan="1">−0.08</td>
<td align="left" rowspan="1" colspan="1">0.14</td>
<td align="left" rowspan="1" colspan="1">0.09</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Pitch discrimination threshold (
<italic>df</italic>
= 59)</td>
<td align="left" rowspan="1" colspan="1">−0.13</td>
<td align="left" rowspan="1" colspan="1">−0.03</td>
<td align="left" rowspan="1" colspan="1">−0.32
<xref ref-type="table-fn" rid="TN1">
<sup>**</sup>
</xref>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Auditory working memory (
<italic>df</italic>
= 59)</td>
<td align="left" rowspan="1" colspan="1">0.26
<xref ref-type="table-fn" rid="TN2">
<sup>*</sup>
</xref>
</td>
<td align="left" rowspan="1" colspan="1">0.10</td>
<td align="left" rowspan="1" colspan="1">0.10</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">    Digit span forwards (
<italic>df</italic>
= 59)</td>
<td align="left" rowspan="1" colspan="1">0.26
<xref ref-type="table-fn" rid="TN2">
<sup>*</sup>
</xref>
</td>
<td align="left" rowspan="1" colspan="1">0.07</td>
<td align="left" rowspan="1" colspan="1">0.07</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">    Digin span backwards (
<italic>df</italic>
= 59)</td>
<td align="left" rowspan="1" colspan="1">0.13</td>
<td align="left" rowspan="1" colspan="1">0.11</td>
<td align="left" rowspan="1" colspan="1">0.06</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Music education (years) (
<italic>df</italic>
= 51)</td>
<td align="left" rowspan="1" colspan="1">0.12</td>
<td align="left" rowspan="1" colspan="1">0.02</td>
<td align="left" rowspan="1" colspan="1">0.32
<xref ref-type="table-fn" rid="TN2">
<sup>*</sup>
</xref>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">General education (years) (
<italic>df</italic>
= 56)</td>
<td align="left" rowspan="1" colspan="1">0.08</td>
<td align="left" rowspan="1" colspan="1">−0.11</td>
<td align="left" rowspan="1" colspan="1">0.10</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn id="TN1">
<label>**</label>
<p>p < 0.01;</p>
</fn>
<fn id="TN2">
<label>*</label>
<p>p < 0.05.</p>
</fn>
</table-wrap-foot>
</table-wrap>
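<p>As a purely illustrative aside, a correlation table of this kind, including the significance stars used in Table 7, could be assembled as in the following Python sketch. The data frame and its column names are hypothetical placeholders standing in for the actual test scores.</p>
<preformat>
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(2)
# Placeholder scores for 61 participants (df = N - 2 = 59).
df = pd.DataFrame(rng.normal(size=(61, 5)),
                  columns=["pitch_threshold", "working_memory",
                           "word_stress", "visuospatial", "music_total"])

def starred(r, p):
    # Footnote convention of Table 7: ** p < 0.01, * p < 0.05.
    return f"{r:.2f}" + ("**" if p < 0.01 else "*" if p < 0.05 else "")

outcomes = ["word_stress", "visuospatial", "music_total"]
rows = {pred: [starred(*stats.pearsonr(df[pred], df[out]))
               for out in outcomes]
        for pred in ["pitch_threshold", "working_memory"]}
# Transpose so predictors are rows and outcomes are columns, as in Table 7.
print(pd.DataFrame(rows, index=outcomes).T)
</preformat>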
<p>Step-wise regression analyses were performed to examine how much of the variance in the music perception total score and subtest scores the different variables could explain. Four nested models of predictors were examined: first the potentially confounding background variables, then the potentially confounding variables measured by tests, and finally the test scores that were the main interest of this study. In the first model, age group (under/over 50 years) and music education (no/yes) were used as predictors; these background variables were included because they were significantly associated with the music perception scores. Second, pitch discrimination threshold and auditory working memory score were added to the model. Third, visuospatial perception score was added as a predictor. Finally, the word stress score was added. Table
<xref ref-type="table" rid="T8">8</xref>
shows the regression analyses including the coefficients of determination (
<italic>R</italic>
<sup>2</sup>
) of the different models. As can be seen from the
<italic>R</italic>
<sup>2</sup>
change in the regression analysis for the total music perception score, visuospatial perception and word stress perception each explained about 8% of the variance in the total score after controlling for music education, age, auditory working memory, and pitch discrimination threshold. Music education and pitch discrimination threshold were also significant predictors. An illustrative sketch of this block-wise procedure is given after Table 8.</p>
<table-wrap id="T8" position="float">
<label>Table 8</label>
<caption>
<p>
<bold>Regression analysis</bold>
.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" rowspan="1" colspan="1">
<bold>Model</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>Variable</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>Beta</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>
<italic>T</italic>
</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>
<italic>F</italic>
(
<italic>df</italic>
)</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>
<italic>R</italic>
<sup>2</sup>
</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>
<italic>R</italic>
<sup>2</sup>
change</bold>
</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" colspan="7" rowspan="1">
<bold>MUSIC PERCEPTION TOTAL SCORE</bold>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">1</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">
<italic>F</italic>
<sub>(2, 58)</sub>
= 5.21
<xref ref-type="table-fn" rid="TN4">
<sup>**</sup>
</xref>
</td>
<td align="left" rowspan="1" colspan="1">0.15</td>
<td align="left" rowspan="1" colspan="1">0.15</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Music education</td>
<td align="left" rowspan="1" colspan="1">0.28</td>
<td align="left" rowspan="1" colspan="1">2.27
<xref ref-type="table-fn" rid="TN5">
<sup>*</sup>
</xref>
</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Age group</td>
<td align="left" rowspan="1" colspan="1">−0.20</td>
<td align="left" rowspan="1" colspan="1">−1.62</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">2</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">
<italic>F</italic>
<sub>(4, 56)</sub>
= 3.77
<xref ref-type="table-fn" rid="TN4">
<sup>**</sup>
</xref>
</td>
<td align="left" rowspan="1" colspan="1">0.21</td>
<td align="left" rowspan="1" colspan="1">0.06</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Music education</td>
<td align="left" rowspan="1" colspan="1">0.28</td>
<td align="left" rowspan="1" colspan="1">2.25
<xref ref-type="table-fn" rid="TN5">
<sup>*</sup>
</xref>
</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Age group</td>
<td align="left" rowspan="1" colspan="1">−0.14</td>
<td align="left" rowspan="1" colspan="1">−1.07</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Auditory working memory</td>
<td align="left" rowspan="1" colspan="1">−0.01</td>
<td align="left" rowspan="1" colspan="1">−0.05</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Pitch discrimination threshold</td>
<td align="left" rowspan="1" colspan="1">−0.25</td>
<td align="left" rowspan="1" colspan="1">−2.07
<xref ref-type="table-fn" rid="TN4">
<sup>**</sup>
</xref>
</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">3</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">
<italic>F</italic>
<sub>(5, 55)</sub>
= 4.43
<xref ref-type="table-fn" rid="TN4">
<sup>**</sup>
</xref>
</td>
<td align="left" rowspan="1" colspan="1">0.29</td>
<td align="left" rowspan="1" colspan="1">0.08</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Music education</td>
<td align="left" rowspan="1" colspan="1">0.28</td>
<td align="left" rowspan="1" colspan="1">2.37
<xref ref-type="table-fn" rid="TN5">
<sup>*</sup>
</xref>
</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Age group</td>
<td align="left" rowspan="1" colspan="1">−0.08</td>
<td align="left" rowspan="1" colspan="1">−0.61</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Auditory working memory</td>
<td align="left" rowspan="1" colspan="1">−0.02</td>
<td align="left" rowspan="1" colspan="1">−0.19</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Pitch discrimination threshold</td>
<td align="left" rowspan="1" colspan="1">−0.26</td>
<td align="left" rowspan="1" colspan="1">−2.23
<xref ref-type="table-fn" rid="TN5">
<sup>*</sup>
</xref>
</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Visuospatial perception</td>
<td align="left" rowspan="1" colspan="1">0.28</td>
<td align="left" rowspan="1" colspan="1">2.40
<xref ref-type="table-fn" rid="TN5">
<sup>*</sup>
</xref>
</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">4</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">
<italic>F</italic>
<sub>(6, 54)</sub>
= 5.27
<xref ref-type="table-fn" rid="TN3">
<sup>***</sup>
</xref>
</td>
<td align="left" rowspan="1" colspan="1">0.37</td>
<td align="left" rowspan="1" colspan="1">0.08</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Music education</td>
<td align="left" rowspan="1" colspan="1">0.28</td>
<td align="left" rowspan="1" colspan="1">2.47
<xref ref-type="table-fn" rid="TN5">
<sup>*</sup>
</xref>
</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Age group</td>
<td align="left" rowspan="1" colspan="1">−0.09</td>
<td align="left" rowspan="1" colspan="1">−0.74</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Auditory working memory</td>
<td align="left" rowspan="1" colspan="1">−0.10</td>
<td align="left" rowspan="1" colspan="1">−0.85</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Pitch discrimination threshold</td>
<td align="left" rowspan="1" colspan="1">−0.23</td>
<td align="left" rowspan="1" colspan="1">−2.03
<xref ref-type="table-fn" rid="TN5">
<sup>*</sup>
</xref>
</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Visuospatial perception</td>
<td align="left" rowspan="1" colspan="1">0.27</td>
<td align="left" rowspan="1" colspan="1">2.42
<xref ref-type="table-fn" rid="TN5">
<sup>*</sup>
</xref>
</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Word stress perception</td>
<td align="left" rowspan="1" colspan="1">0.30</td>
<td align="left" rowspan="1" colspan="1">2.65
<xref ref-type="table-fn" rid="TN5">
<sup>*</sup>
</xref>
</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" colspan="7" rowspan="1">
<bold>SCALE SUBTEST</bold>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">1</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">
<italic>F</italic>
<sub>(2, 58)</sub>
= 3.67
<xref ref-type="table-fn" rid="TN5">
<sup>*</sup>
</xref>
</td>
<td align="left" rowspan="1" colspan="1">0.11</td>
<td align="left" rowspan="1" colspan="1">0.11</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Music education</td>
<td align="left" rowspan="1" colspan="1">0.15</td>
<td align="left" rowspan="1" colspan="1">1.20</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Age group</td>
<td align="left" rowspan="1" colspan="1">−0.26</td>
<td align="left" rowspan="1" colspan="1">−2.00
<xref ref-type="table-fn" rid="TN5">
<sup>*</sup>
</xref>
</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">2</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">
<italic>F</italic>
<sub>(4, 56)</sub>
= 1.78</td>
<td align="left" rowspan="1" colspan="1">0.11</td>
<td align="left" rowspan="1" colspan="1">0.00</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Music education</td>
<td align="left" rowspan="1" colspan="1">0.15</td>
<td align="left" rowspan="1" colspan="1">1.15</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Age group</td>
<td align="left" rowspan="1" colspan="1">−0.25</td>
<td align="left" rowspan="1" colspan="1">−1.79
<xref ref-type="table-fn" rid="TN6">
<sup>+</sup>
</xref>
</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Auditory working memory</td>
<td align="left" rowspan="1" colspan="1">−0.01</td>
<td align="left" rowspan="1" colspan="1">0.10</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Pitch discrimination threshold</td>
<td align="left" rowspan="1" colspan="1">−0.04</td>
<td align="left" rowspan="1" colspan="1">−0.30</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">3</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">
<italic>F</italic>
<sub>(5, 55)</sub>
= 2.05
<xref ref-type="table-fn" rid="TN6">
<sup>+</sup>
</xref>
</td>
<td align="left" rowspan="1" colspan="1">0.16</td>
<td align="left" rowspan="1" colspan="1">0.05</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Music education</td>
<td align="left" rowspan="1" colspan="1">0.15</td>
<td align="left" rowspan="1" colspan="1">1.19</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Age group</td>
<td align="left" rowspan="1" colspan="1">−0.20</td>
<td align="left" rowspan="1" colspan="1">−1.44</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Auditory working memory</td>
<td align="left" rowspan="1" colspan="1">0.00</td>
<td align="left" rowspan="1" colspan="1">0.01</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Pitch discrimination threshold</td>
<td align="left" rowspan="1" colspan="1">−0.05</td>
<td align="left" rowspan="1" colspan="1">−0.36</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Visuospatial perception</td>
<td align="left" rowspan="1" colspan="1">0.22</td>
<td align="left" rowspan="1" colspan="1">1.71
<xref ref-type="table-fn" rid="TN6">
<sup>+</sup>
</xref>
</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">4</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">
<italic>F</italic>
<sub>(6, 54)</sub>
= 1.85</td>
<td align="left" rowspan="1" colspan="1">0.17</td>
<td align="left" rowspan="1" colspan="1">0.01</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Music education</td>
<td align="left" rowspan="1" colspan="1">0.15</td>
<td align="left" rowspan="1" colspan="1">1.18</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Age group</td>
<td align="left" rowspan="1" colspan="1">−0.20</td>
<td align="left" rowspan="1" colspan="1">−1.47</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Auditory working memory</td>
<td align="left" rowspan="1" colspan="1">−0.03</td>
<td align="left" rowspan="1" colspan="1">−0.22</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Pitch discrimination threshold</td>
<td align="left" rowspan="1" colspan="1">−0.03</td>
<td align="left" rowspan="1" colspan="1">−0.25</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Visuospatial perception</td>
<td align="left" rowspan="1" colspan="1">0.21</td>
<td align="left" rowspan="1" colspan="1">1.67</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Word stress perception</td>
<td align="left" rowspan="1" colspan="1">0.12</td>
<td align="left" rowspan="1" colspan="1">0.91</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" colspan="7" rowspan="1">
<bold>OUT-OF-KEY SUBTEST</bold>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">1</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">
<italic>F</italic>
<sub>(2, 58)</sub>
= 3.35
<xref ref-type="table-fn" rid="TN5">
<sup>*</sup>
</xref>
</td>
<td align="left" rowspan="1" colspan="1">0.10</td>
<td align="left" rowspan="1" colspan="1">0.10</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Music education</td>
<td align="left" rowspan="1" colspan="1">0.31</td>
<td align="left" rowspan="1" colspan="1">2.40
<xref ref-type="table-fn" rid="TN5">
<sup>*</sup>
</xref>
</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Age group</td>
<td align="left" rowspan="1" colspan="1">−0.04</td>
<td align="left" rowspan="1" colspan="1">−0.31</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">2</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">
<italic>F</italic>
<sub>(4, 56)</sub>
= 2.77
<xref ref-type="table-fn" rid="TN5">
<sup>*</sup>
</xref>
</td>
<td align="left" rowspan="1" colspan="1">0.17</td>
<td align="left" rowspan="1" colspan="1">0.06</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Music education</td>
<td align="left" rowspan="1" colspan="1">0.32</td>
<td align="left" rowspan="1" colspan="1">2.51
<xref ref-type="table-fn" rid="TN5">
<sup>*</sup>
</xref>
</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Age group</td>
<td align="left" rowspan="1" colspan="1">−0.01</td>
<td align="left" rowspan="1" colspan="1">−0.04</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Auditory working memory</td>
<td align="left" rowspan="1" colspan="1">−0.12</td>
<td align="left" rowspan="1" colspan="1">−0.98</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Pitch discrimination threshold</td>
<td align="left" rowspan="1" colspan="1">−0.23</td>
<td align="left" rowspan="1" colspan="1">−1.82
<xref ref-type="table-fn" rid="TN6">
<sup>+</sup>
</xref>
</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">3</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">
<italic>F</italic>
<sub>(5, 55)</sub>
= 2.55
<xref ref-type="table-fn" rid="TN5">
<sup>*</sup>
</xref>
</td>
<td align="left" rowspan="1" colspan="1">0.19</td>
<td align="left" rowspan="1" colspan="1">0.02</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Music education</td>
<td align="left" rowspan="1" colspan="1">0.32</td>
<td align="left" rowspan="1" colspan="1">2.54
<xref ref-type="table-fn" rid="TN5">
<sup>*</sup>
</xref>
</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Age group</td>
<td align="left" rowspan="1" colspan="1">0.03</td>
<td align="left" rowspan="1" colspan="1">0.21</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Auditory working memory</td>
<td align="left" rowspan="1" colspan="1">−0.13</td>
<td align="left" rowspan="1" colspan="1">−1.06</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Pitch discrimination threshold</td>
<td align="left" rowspan="1" colspan="1">−0.24</td>
<td align="left" rowspan="1" colspan="1">−1.87
<xref ref-type="table-fn" rid="TN6">
<sup>+</sup>
</xref>
</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Visuospatial perception</td>
<td align="left" rowspan="1" colspan="1">0.15</td>
<td align="left" rowspan="1" colspan="1">1.23</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">4</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">
<italic>F</italic>
<sub>(6, 54)</sub>
= 2.87
<xref ref-type="table-fn" rid="TN5">
<sup>*</sup>
</xref>
</td>
<td align="left" rowspan="1" colspan="1">0.24</td>
<td align="left" rowspan="1" colspan="1">0.05</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Music education</td>
<td align="left" rowspan="1" colspan="1">0.32</td>
<td align="left" rowspan="1" colspan="1">2.58
<xref ref-type="table-fn" rid="TN5">
<sup>*</sup>
</xref>
</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Age group</td>
<td align="left" rowspan="1" colspan="1">0.00</td>
<td align="left" rowspan="1" colspan="1">0.00</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Auditory working memory</td>
<td align="left" rowspan="1" colspan="1">−0.20</td>
<td align="left" rowspan="1" colspan="1">−1.53</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Pitch discrimination threshold</td>
<td align="left" rowspan="1" colspan="1">−0.21</td>
<td align="left" rowspan="1" colspan="1">−1.68
<xref ref-type="table-fn" rid="TN6">
<sup>+</sup>
</xref>
</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Visuospatial perception</td>
<td align="left" rowspan="1" colspan="1">0.14</td>
<td align="left" rowspan="1" colspan="1">1.18</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Word stress perception</td>
<td align="left" rowspan="1" colspan="1">0.24</td>
<td align="left" rowspan="1" colspan="1">1.96
<xref ref-type="table-fn" rid="TN6">
<sup>+</sup>
</xref>
</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" colspan="7" rowspan="1">
<bold>OFF-BEAT SUBTEST</bold>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">1</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">
<italic>F</italic>
<sub>(2, 58)</sub>
= 1.67</td>
<td align="left" rowspan="1" colspan="1">0.05</td>
<td align="left" rowspan="1" colspan="1">0.05</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Music education</td>
<td align="left" rowspan="1" colspan="1">0.14</td>
<td align="left" rowspan="1" colspan="1">1.07</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Age group</td>
<td align="left" rowspan="1" colspan="1">−0.15</td>
<td align="left" rowspan="1" colspan="1">−1.15</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">2</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">
<italic>F</italic>
<sub>(4, 56)</sub>
= 2.83
<xref ref-type="table-fn" rid="TN5">
<sup>*</sup>
</xref>
</td>
<td align="left" rowspan="1" colspan="1">0.17</td>
<td align="left" rowspan="1" colspan="1">0.11</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Music education</td>
<td align="left" rowspan="1" colspan="1">0.12</td>
<td align="left" rowspan="1" colspan="1">0.90</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Age group</td>
<td align="left" rowspan="1" colspan="1">−0.04</td>
<td align="left" rowspan="1" colspan="1">−0.33</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Auditory working memory</td>
<td align="left" rowspan="1" colspan="1">0.13</td>
<td align="left" rowspan="1" colspan="1">1.06</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Pitch discrimination threshold</td>
<td align="left" rowspan="1" colspan="1">−0.32</td>
<td align="left" rowspan="1" colspan="1">−2.52
<xref ref-type="table-fn" rid="TN5">
<sup>*</sup>
</xref>
</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">3</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">
<italic>F</italic>
<sub>(5, 55)</sub>
= 3.33
<xref ref-type="table-fn" rid="TN5">
<sup>*</sup>
</xref>
</td>
<td align="left" rowspan="1" colspan="1">0.23</td>
<td align="left" rowspan="1" colspan="1">0.06</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Music education</td>
<td align="left" rowspan="1" colspan="1">0.11</td>
<td align="left" rowspan="1" colspan="1">0.96</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Age group</td>
<td align="left" rowspan="1" colspan="1">0.01</td>
<td align="left" rowspan="1" colspan="1">0.10</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Auditory working memory</td>
<td align="left" rowspan="1" colspan="1">0.12</td>
<td align="left" rowspan="1" colspan="1">0.97</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Pitch discrimination threshold</td>
<td align="left" rowspan="1" colspan="1">−0.33</td>
<td align="left" rowspan="1" colspan="1">−2.67
<xref ref-type="table-fn" rid="TN5">
<sup>*</sup>
</xref>
</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Visuospatial perception</td>
<td align="left" rowspan="1" colspan="1">0.26</td>
<td align="left" rowspan="1" colspan="1">2.15
<xref ref-type="table-fn" rid="TN5">
<sup>*</sup>
</xref>
</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">4</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">
<italic>F</italic>
<sub>(6, 54)</sub>
= 4.34
<xref ref-type="table-fn" rid="TN4">
<sup>**</sup>
</xref>
</td>
<td align="left" rowspan="1" colspan="1">0.33</td>
<td align="left" rowspan="1" colspan="1">0.09</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Music education</td>
<td align="left" rowspan="1" colspan="1">0.12</td>
<td align="left" rowspan="1" colspan="1">1.18</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Age group</td>
<td align="left" rowspan="1" colspan="1">0.00</td>
<td align="left" rowspan="1" colspan="1">0.00</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Auditory working memory</td>
<td align="left" rowspan="1" colspan="1">0.04</td>
<td align="left" rowspan="1" colspan="1">0.32</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Pitch discrimination threshold</td>
<td align="left" rowspan="1" colspan="1">−0.29</td>
<td align="left" rowspan="1" colspan="1">−2.48
<xref ref-type="table-fn" rid="TN5">
<sup>*</sup>
</xref>
</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Visuospatial perception</td>
<td align="left" rowspan="1" colspan="1">0.25</td>
<td align="left" rowspan="1" colspan="1">2.15
<xref ref-type="table-fn" rid="TN5">
<sup>*</sup>
</xref>
</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Word stress perception</td>
<td align="left" rowspan="1" colspan="1">0.32</td>
<td align="left" rowspan="1" colspan="1">2.73
<xref ref-type="table-fn" rid="TN4">
<sup>**</sup>
</xref>
</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn id="TN3">
<label>***</label>
<p>p < 0.001;</p>
</fn>
<fn id="TN4">
<label>**</label>
<p>p < 0.01;</p>
</fn>
<fn id="TN5">
<label>*</label>
<p>p < 0.05;</p>
</fn>
<fn id="TN6">
<label>+</label>
<p>p < 0.10.</p>
</fn>
</table-wrap-foot>
</table-wrap>
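<p>The block-wise procedure summarized in Table 8 can be sketched as follows: predictors are entered in four nested blocks, an ordinary least squares model is fitted at each step, and the change in <italic>R</italic><sup>2</sup> relative to the previous step is read off. The following Python sketch is a hypothetical illustration on simulated data, not the study's analysis script.</p>
<preformat>
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 61
data = pd.DataFrame({
    "music_edu": rng.integers(0, 2, n),   # music education: no/yes
    "age_group": rng.integers(0, 2, n),   # under/over 50 years
    "pitch_thr": rng.normal(size=n),      # pitch discrimination threshold
    "aud_wm": rng.normal(size=n),         # auditory working memory
    "visuospatial": rng.normal(size=n),
    "word_stress": rng.normal(size=n),
})
# Simulated outcome loosely echoing the direction of the reported betas.
data["music_total"] = (0.3 * data.music_edu - 0.25 * data.pitch_thr
                       + 0.3 * data.visuospatial + 0.3 * data.word_stress
                       + rng.normal(size=n))

blocks = [["music_edu", "age_group"],   # model 1: background variables
          ["pitch_thr", "aud_wm"],      # model 2: + test-based confounds
          ["visuospatial"],             # model 3: + visuospatial score
          ["word_stress"]]              # model 4: + word stress score

r2_prev, predictors = 0.0, []
for i, block in enumerate(blocks, start=1):
    predictors += block
    fit = sm.OLS(data["music_total"],
                 sm.add_constant(data[predictors])).fit()
    print(f"Model {i}: R2 = {fit.rsquared:.2f}, "
          f"R2 change = {fit.rsquared - r2_prev:.2f}")
    r2_prev = fit.rsquared
</preformat>
<p>Entering the confounds before the variables of interest, as in this sketch, ensures that the reported <italic>R</italic><sup>2</sup> change for visuospatial and word stress perception reflects variance not already accounted for by the background and control measures.</p>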
<p>When the Scale subtest was analyzed separately, age group was a significant predictor in the first model, but the subsequent regression models were not significant. Visuospatial perception had only a marginally significant association with the Scale subtest, to which it was analogous. The final regression model for the Out-of-key subtest was significant and explained 24% of the variance; the most significant predictor was music education. In the regression analysis on the Off-beat subtest, the final model was significant and explained 33% of the variance; the most significant predictor was word stress perception, which alone explained 9% of the variance. Figure
<xref ref-type="fig" rid="F6">6</xref>
shows that word stress perception correlated highly significantly with the music perception total score [
<italic>r</italic>
<sub>(59)</sub>
= 0.34,
<italic>p</italic>
= 0.007], and with the Off-beat score [
<italic>r</italic>
<sub>(59)</sub>
= 0.39,
<italic>p</italic>
= 0.002], but not with the Scale and Out-of-key scores.</p>
<fig id="F6" position="float">
<label>Figure 6</label>
<caption>
<p>
<bold>Scatter plots indicating the relationships between the word stress task and the music perception test</bold>
.</p>
</caption>
<graphic xlink:href="fpsyg-04-00566-g0006"></graphic>
</fig>
</sec>
</sec>
<sec sec-type="discussion" id="s4">
<title>Discussion</title>
<p>The most important finding of this study is the association between the perception of music and the perception of speech prosody, more specifically word stress. Neither auditory working memory, pitch perception abilities, nor background variables such as music education explained this association. This finding supports the hypothesis that the processing of music and speech is to some extent based on shared neural resources. The association was found in a healthy, non-clinical population and thus strengthens the generalizability of the associations previously found in musicians and in people with deficits in music or language perception.</p>
<p>The background variable most strongly related to music perception was music education. Age was also related to music perception, as Peretz et al. (
<xref ref-type="bibr" rid="B79">2008</xref>
) found, but in the present study the association was not linear. Older participants' lower performance on the music perception test might be partly explained by less music education; this does not, however, explain why the youngest age group also performed below average. The relation between age group and music perception was nonetheless weak, as age group was not a significant predictor in the regression analysis once other, more strongly related variables were included. Gender, general education, and self-reported cognitive problems were not associated with the music perception scores. Music education and age group (under/over 50 years) were controlled for in the statistical analyses and did not affect the associations that constitute the main findings of this study. Auditory working memory was significantly associated only with the word stress task and did not explain any of the relations found.</p>
<sec>
<title>Association between music and speech prosody</title>
<p>Patel (
<xref ref-type="bibr" rid="B69">2012</xref>
) argues that the apparent contradiction between the dissociation between speech and music perception found in brain damage studies (Peretz,
<xref ref-type="bibr" rid="B76">1990</xref>
; Peretz and Kolinsky,
<xref ref-type="bibr" rid="B81">1993</xref>
; Griffiths et al.,
<xref ref-type="bibr" rid="B28">1997</xref>
; Dalla Bella and Peretz,
<xref ref-type="bibr" rid="B17">1999</xref>
) and the associations found in brain imaging studies (Patel et al.,
<xref ref-type="bibr" rid="B71">1998</xref>
; Steinhauer et al.,
<xref ref-type="bibr" rid="B99">1999</xref>
; Koelsch et al.,
<xref ref-type="bibr" rid="B39">2002</xref>
; Tillmann et al.,
<xref ref-type="bibr" rid="B108">2003</xref>
; Knösche et al.,
<xref ref-type="bibr" rid="B37">2005</xref>
; Schön et al.,
<xref ref-type="bibr" rid="B93">2010</xref>
; Abrams et al.,
<xref ref-type="bibr" rid="B1">2011</xref>
; Rogalsky et al.,
<xref ref-type="bibr" rid="B88">2011</xref>
) may be explained by a
<italic>resource sharing framework</italic>
. According to this framework, music and speech have separate representations in long-term memory, and damage to these representations may lead to a specific deficit of musical or linguistic cognition. However, in the normal brain, music and language also share neural resources in similar cognitive operations. In the introduction we also pointed out that the enhanced abilities in music and speech may be based on transfer of training (Besson et al.,
<xref ref-type="bibr" rid="B4">2011</xref>
). However, as the association found in this study remained significant after controlling for musical training, our results may be best interpreted as support for the hypothesis of shared neural resources.</p>
<p>The most important difference between the neural bases of speech and music processing is that, at least in most right-handed persons, music is processed predominantly in the right hemisphere while speech is processed predominantly in the left (Tervaniemi and Hugdahl,
<xref ref-type="bibr" rid="B102">2003</xref>
; Zatorre and Gandour,
<xref ref-type="bibr" rid="B120">2008</xref>
). Zatorre and Gandour (
<xref ref-type="bibr" rid="B120">2008</xref>
) suggest that this difference may not reflect an abstract distinction between the domains of speech and music but could instead be explained by acoustic differences: the left hemisphere is specialized in processing the fast temporal acoustic changes that are important for speech perception, while the fine-grained pitch changes that are important for music perception are processed more precisely by the right hemisphere. Although the temporal changes central to musical rhythm are slower than those typical of speech, temporal processing is important for both speech and musical rhythm. It is thus worth investigating whether the processing of musical rhythm is even more closely associated with speech processing than is the melodic aspect of music.</p>
<sec>
<title>Rhythm as a new link</title>
<p>Shared neural mechanisms have thus far been found especially for pitch perception in music and speech: congenital amusia has been found to be associated with problems in perceiving speech intonation (Patel et al.,
<xref ref-type="bibr" rid="B74">2005</xref>
,
<xref ref-type="bibr" rid="B73">2008b</xref>
; Jiang et al.,
<xref ref-type="bibr" rid="B32">2010</xref>
; Liu et al.,
<xref ref-type="bibr" rid="B50">2010</xref>
), emotional prosody (Thompson,
<xref ref-type="bibr" rid="B103">2007</xref>
; Thompson et al.,
<xref ref-type="bibr" rid="B104">2012</xref>
) and the discrimination of lexical tones (Nan et al.,
<xref ref-type="bibr" rid="B64">2010</xref>
). It seems likely that the processing of coarse-grained pitch in music and speech relies on a shared mechanism (Zatorre and Baum,
<xref ref-type="bibr" rid="B119">2012</xref>
). However, the aim of this study was to find out whether music and speech perception are associated not only in the melodic but also in the rhythmic aspect. Indeed, the strongest association between the word stress test and music perception was with the Off-beat subtest, which measures the ability to perceive unusual time delays in music. This supports the hypothesis that the perception of rhythm in music and speech is connected. Word stress perception was not a significant predictor of performance in the Scale subtest, and it was only a marginally significant predictor of performance in the Out-of-key subtest. The marginally significant relation with the Out-of-key subtest, which measures the ability to notice melodically deviant tones, suggests that melodic cues affect word stress perception to some extent; but since the pitch perception threshold was not associated with the word stress scores, it seems likely that the word stress test does not rely on fine-grained pitch perception. This finding suggests that pitch is not the only auditory cue linking the perception of music and speech: the relation can also be found via rhythm perception.</p>
<p>Although previous research has focused more on studying the melodic than the rhythmic aspect of music and speech, there are studies showing that rhythm might actually be a strong link between the two domains. For instance, musicians have been found to perceive the metric structure of words more precisely than non-musicians (Marie et al.,
<xref ref-type="bibr" rid="B54">2011b</xref>
). Also, classical music by French and English composers differs in its use of rhythm: the musical rhythm is associated with the rhythm of speech in the French and English languages, respectively (Patel and Daniele,
<xref ref-type="bibr" rid="B70">2003</xref>
). In a study investigating the effects of melodic and rhythmic cues on non-fluent aphasics' speech production, the results suggested that rhythm may be more crucial than melody (Stahl et al.,
<xref ref-type="bibr" rid="B98">2011</xref>
). Children with reading or language problems have also been found to have auditory difficulties in rhythmic processing: deficits in phonological processing are associated with problems in the auditory processing of amplitude rise time, which is critical for rhythmic timing in both language and music (Corriveau et al.,
<xref ref-type="bibr" rid="B15">2007</xref>
; Corriveau and Goswami,
<xref ref-type="bibr" rid="B16">2009</xref>
; Goswami et al.,
<xref ref-type="bibr" rid="B26">2010</xref>
; Goswami,
<xref ref-type="bibr" rid="B25">2012</xref>
). Musical metrical sensitivity has been found to predict phonological awareness and reading development, and it has been suggested that dyslexia may have its roots in a temporal processing deficit (Huss et al.,
<xref ref-type="bibr" rid="B30">2011</xref>
). Moreover, the members of the KE family, who suffer from a genetic developmental disorder of speech and language, have been found to have problems also in the perception and production of rhythm (Alcock et al.,
<xref ref-type="bibr" rid="B2">2000</xref>
). The affected family members have both expressive and receptive language difficulties, and the disorder has been linked to a mutation in the FOXP2 gene, which has been considered language-specific (Lai et al.,
<xref ref-type="bibr" rid="B45">2001</xref>
). Alcock et al. (
<xref ref-type="bibr" rid="B2">2000</xref>
) studied the musical abilities of the affected members and found that their perception and production of pitch did not differ from those of the control group, while their rhythmic abilities were significantly lower, suggesting that the disorder might be rooted in difficulties of temporal processing. However, as Besson and Schön (
<xref ref-type="bibr" rid="B5">2012</xref>
) point out, genetics can only provide limited information about the modularity of music and language.</p>
<p>Neuroimaging and neuropsychological studies show that musical rhythm is processed more evenly in both hemispheres whereas the melodic aspect is more clearly lateralized to the right hemisphere, at least in most right-handed persons (Peretz and Zatorre,
<xref ref-type="bibr" rid="B82">2005</xref>
). Because language is more left-lateralized, it is reasonable to hypothesize that it may share mechanisms with musical rhythm. Also, the perception and production of speech and music have been found to be neurally connected (Rauschecker and Scott,
<xref ref-type="bibr" rid="B87">2009</xref>
), especially when processing rhythm: motor regions such as the cerebellum, the basal ganglia, the supplementary motor area, and the premotor cortex are central to the processing of both musical rhythm (Grahn and Brett,
<xref ref-type="bibr" rid="B27">2007</xref>
; Chen et al.,
<xref ref-type="bibr" rid="B13">2008</xref>
) and speech rhythm (Kotz and Schwartze,
<xref ref-type="bibr" rid="B42">2010</xref>
). Stewart et al. (
<xref ref-type="bibr" rid="B100">2006</xref>
) proposed that the importance of motor regions gives support to a “motor theory” of rhythm perception, parallel to the motor theory of speech perception (Liberman and Mattingly,
<xref ref-type="bibr" rid="B47">1985</xref>
): the perception of rhythm might be based on the motor mechanisms required for its production. Similar developmental mechanisms for speech and music might explain why training in one modality causes improvement in the other (Patel,
<xref ref-type="bibr" rid="B68">2008</xref>
). The association between the perception of rhythm in speech and music can also be related to dynamic attention theory, which proposes that the allocation of attention depends on synchronization between internal oscillations and external temporal structure (Jones,
<xref ref-type="bibr" rid="B34">1976</xref>
; Large and Jones,
<xref ref-type="bibr" rid="B46">1999</xref>
). Recent neuroimaging studies have found evidence that attending to and predictive coding of specific time scales is indeed important in speech perception (Kotz and Schwartze,
<xref ref-type="bibr" rid="B42">2010</xref>
; Luo and Poeppel,
<xref ref-type="bibr" rid="B51">2012</xref>
). Kubanek et al. (
<xref ref-type="bibr" rid="B44">2013</xref>
) found that the temporal envelope of speech, which is critical for understanding speech, is robustly tracked in belt areas at an early stage of the auditory pathway, and that the same areas are also activated when processing the temporal envelope of non-speech sounds.</p>
<p>Studies on acquired (Peretz,
<xref ref-type="bibr" rid="B76">1990</xref>
; Peretz and Kolinsky,
<xref ref-type="bibr" rid="B81">1993</xref>
; Di Pietro et al.,
<xref ref-type="bibr" rid="B20">2004</xref>
) and congenital amusia (Hyde and Peretz,
<xref ref-type="bibr" rid="B31">2004</xref>
; Thompson,
<xref ref-type="bibr" rid="B103">2007</xref>
; Peretz et al.,
<xref ref-type="bibr" rid="B79">2008</xref>
; Phillips-Silver et al.,
<xref ref-type="bibr" rid="B84">2011</xref>
) have found double dissociations between the deficits of melody and rhythm perception in music. Even though congenital amusia is considered to be mainly a deficit of fine-grained pitch perception (Hyde and Peretz,
<xref ref-type="bibr" rid="B31">2004</xref>
), there are cases of congenital amusia in which the main problem lies in rhythm perception (Peretz et al.,
<xref ref-type="bibr" rid="B77">2003</xref>
; Thompson,
<xref ref-type="bibr" rid="B103">2007</xref>
; Phillips-Silver et al.,
<xref ref-type="bibr" rid="B84">2011</xref>
). It has been proposed that there are different types of congenital amusia and that speech perception deficits associated with congenital amusia might only concern a subgroup of amusics (Patel et al.,
<xref ref-type="bibr" rid="B73">2008b</xref>
) whereas some might actually have a specific deficit in rhythm perception (Thompson,
<xref ref-type="bibr" rid="B103">2007</xref>
; Phillips-Silver et al.,
<xref ref-type="bibr" rid="B84">2011</xref>
). In the present study, the Off-beat subtest, which measures the perception of rhythmic deviations, was not significantly associated with the other music perception subtests, which measure the perception of melodic deviations; this further strengthens the hypothesis that rhythm and melody perception can be independent.</p>
<p>Taken together, we found evidence that the perception of speech prosody may be associated with the perception of music via the perception of rhythm, and that the perception of rhythm and melody are separable. This raises the question of whether the type of acoustic property under focus (melody or rhythm) might sometimes orient the perception process more than the stimulus category (speech or music). This hypothesis is in line with Patel's (
<xref ref-type="bibr" rid="B69">2012</xref>
) resource sharing framework, which suggests that cognitive operations might share brain mechanisms while the domains have separate representations in long-term memory.</p>
</sec>
</sec>
<sec>
<title>Music and visuospatial perception: is there an association?</title>
<p>Because the perception of musical pitch may be spatial in nature (Rusconi et al.,
<xref ref-type="bibr" rid="B89">2006</xref>
), the possibility of an association between music and visuospatial perception has been suggested. Previous results concerning this association are somewhat contradictory: congenital amusics may have lower than average visuospatial processing abilities (Douglas and Bilkey,
<xref ref-type="bibr" rid="B21">2007</xref>
) but this effect has not been replicated (Tillmann et al.,
<xref ref-type="bibr" rid="B109">2010</xref>
; Williamson et al.,
<xref ref-type="bibr" rid="B117">2011</xref>
). In the present study this hypothesis was investigated by creating a visuospatial task analogous to the Scale subtest of the MBEA. In the regression analysis controlling for the possible confounding variables, the visuospatial test was only a marginally significant predictor of the Scale subtest. Self-reported problems in visuospatial perception were also not significantly associated with the music perception scores. However, the visuospatial test was a significant predictor of the music perception total score and of the Off-beat subtest score when pitch perception, short-term memory, age group, and music education were controlled for. These associations may be at least partly explained by some confounding factor (e.g., attention) that we were not able to control for here. Because the expected association between the analogous music and visuospatial tests was not significant, the present results remain inconclusive concerning the link between music perception and visuospatial processing, and more research is needed.</p>
</sec>
</sec>
<sec sec-type="conclusions" id="s5">
<title>Conclusions</title>
<p>The main result of this study is the observed strong association between music perception and word stress perception in healthy subjects. Our findings strengthen the hypothesis that music and speech perception are linked, and show that this link exists not only via the perception of pitch, as found in earlier studies, but also via the rhythmic aspect. The study also replicated earlier findings of the independence of rhythm and melody perception. Taken together, our results raise the interesting possibility that the perception of rhythm and melody could be more clearly separable than music and speech, at least in some cases. However, our data cannot provide definitive answers, and it is clear that more work is needed.</p>
<p>In the future, more research is needed to better understand the association between the processing of rhythm or meter in speech and music. The perception of rhythm comprises many aspects, and it is important to find out exactly which characteristics of the rhythm of speech and music are processed similarly. One possibility is to study forms of communication in which rhythm is a central variable for both speech and music; rap music is one example. This line of research would be fruitful in delineating the commonalities and boundaries between music and speech in the temporal and spectral domains.</p>
<sec>
<title>Conflict of interest statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</sec>
</body>
<back>
<ack>
<p>We would like to thank Andrew Faulkner for his helpful advice in adapting the word stress test into Finnish, Miika Leminen for his help in carrying out the Finnish version of the word stress test, Valtteri Wikström for programming the visuospatial perception test, Tommi Makkonen for programming the pitch perception test, Jari Lipsanen for statistical advice, and Mari Tervaniemi for valuable help in designing this study and discussing the results.</p>
</ack>
<sec sec-type="supplementary material" id="s6">
<title>Supplementary material</title>
<p>The Supplementary Material for this article can be found online at:
<ext-link ext-link-type="uri" xlink:href="http://www.frontiersin.org/Auditory_Cognitive_Neuroscience/10.3389/fpsyg.2013.00566/abstract">http://www.frontiersin.org/Auditory_Cognitive_Neuroscience/10.3389/fpsyg.2013.00566/abstract</ext-link>
</p>
<supplementary-material content-type="local-data" id="SM1">
<media xlink:href="Movie1.WMV" mimetype="video" mime-subtype="quicktime">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="SM2">
<media xlink:href="Movie2.WMV" mimetype="video" mime-subtype="quicktime">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data">
<media xlink:href="DataSheet1.DOCX" mimetype="application" mime-subtype="msword">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="SM3">
<media xlink:href="Audio1.WAV" mimetype="audio" mime-subtype="basic">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="SM4">
<media xlink:href="Audio2.WAV" mimetype="audio" mime-subtype="basic">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="SM5">
<media xlink:href="Audio3.WAV" mimetype="audio" mime-subtype="basic">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="SM6">
<media xlink:href="Audio4.WAV" mimetype="audio" mime-subtype="basic">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="SM7">
<media xlink:href="Audio5.WAV" mimetype="audio" mime-subtype="basic">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
</sec>
<ref-list>
<title>References</title>
<ref id="B1">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Abrams</surname>
<given-names>D. A.</given-names>
</name>
<name>
<surname>Bhatara</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Ryali</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Balaban</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Levitin</surname>
<given-names>D. J.</given-names>
</name>
<name>
<surname>Menon</surname>
<given-names>V.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Decoding temporal structure in music and speech relies on shared brain resources but elicits different fine-scale spatial patterns</article-title>
.
<source>Cereb. Cortex</source>
<volume>21</volume>
,
<fpage>1507</fpage>
<lpage>1518</lpage>
<pub-id pub-id-type="doi">10.1093/cercor/bhq198</pub-id>
<pub-id pub-id-type="pmid">21071617</pub-id>
</mixed-citation>
</ref>
<ref id="B2">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Alcock</surname>
<given-names>J. A.</given-names>
</name>
<name>
<surname>Passingham</surname>
<given-names>R. E.</given-names>
</name>
<name>
<surname>Watkins</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Vargha-Khadem</surname>
<given-names>F.</given-names>
</name>
</person-group>
(
<year>2000</year>
).
<article-title>Pitch and timing abilities in inherited speech and language impairment</article-title>
.
<source>Brain Lang</source>
.
<volume>75</volume>
,
<fpage>34</fpage>
<lpage>46</lpage>
<pub-id pub-id-type="doi">10.1006/brln.2000.2323</pub-id>
<pub-id pub-id-type="pmid">11023637</pub-id>
</mixed-citation>
</ref>
<ref id="B3">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Basso</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Capitani</surname>
<given-names>E.</given-names>
</name>
</person-group>
(
<year>1985</year>
).
<article-title>Spared musical abilities in a conductor with global aphasia and ideomotor apraxia</article-title>
.
<source>J. Neurol. Neurosurg. Psychiatry</source>
<volume>48</volume>
,
<fpage>407</fpage>
<lpage>412</lpage>
<pub-id pub-id-type="doi">10.1136/jnnp.48.5.407</pub-id>
<pub-id pub-id-type="pmid">2582094</pub-id>
</mixed-citation>
</ref>
<ref id="B4">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Besson</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Chobert</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Marie</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Transfer of training between music and speech: common processing, attention, and memory</article-title>
.
<source>Front. Psychol</source>
.
<volume>2</volume>
:
<issue>94</issue>
<pub-id pub-id-type="doi">10.3389/fpsyg.2011.00094</pub-id>
<pub-id pub-id-type="pmid">21738519</pub-id>
</mixed-citation>
</ref>
<ref id="B5">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Besson</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Schön</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>“What remains of modularity?,”</article-title>
in
<source>Language and Music as Cognitive Systems</source>
, eds
<person-group person-group-type="editor">
<name>
<surname>Rebuschat</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Rohrmeier</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Hawkins</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Cross</surname>
<given-names>I.</given-names>
</name>
</person-group>
(
<publisher-loc>Oxford, UK</publisher-loc>
:
<publisher-name>Oxford University Press</publisher-name>
),
<fpage>283</fpage>
<lpage>291</lpage>
</mixed-citation>
</ref>
<ref id="B6">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bidelman</surname>
<given-names>G. M.</given-names>
</name>
<name>
<surname>Gandour</surname>
<given-names>J. T.</given-names>
</name>
<name>
<surname>Krishnan</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Cross-domain effects of music and language experience on the representation of pitch in the human auditory brainstem</article-title>
.
<source>J. Cogn. Neurosci</source>
.
<volume>23</volume>
,
<fpage>425</fpage>
<lpage>434</lpage>
<pub-id pub-id-type="doi">10.1162/jocn.2009.21362</pub-id>
<pub-id pub-id-type="pmid">19925180</pub-id>
</mixed-citation>
</ref>
<ref id="B7">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Boersma</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>Praat, a system for doing phonetics by computer</article-title>
.
<source>Glot Int</source>
.
<volume>5</volume>
,
<fpage>341</fpage>
<lpage>345</lpage>
</mixed-citation>
</ref>
<ref id="B8">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Brainard</surname>
<given-names>D. H.</given-names>
</name>
</person-group>
(
<year>1997</year>
).
<article-title>The psychophysics toolbox</article-title>
.
<source>Spat. Vis</source>
.
<volume>10</volume>
,
<fpage>433</fpage>
<lpage>436</lpage>
<pub-id pub-id-type="doi">10.1163/156856897X00357</pub-id>
<pub-id pub-id-type="pmid">9176952</pub-id>
</mixed-citation>
</ref>
<ref id="B9">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Brandt</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Gebrian</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Slevc</surname>
<given-names>L. R.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Music and early language acquisition</article-title>
.
<source>Front. Psychol</source>
.
<volume>3</volume>
:
<issue>327</issue>
<pub-id pub-id-type="doi">10.3389/fpsyg.2012.00327</pub-id>
<pub-id pub-id-type="pmid">22973254</pub-id>
</mixed-citation>
</ref>
<ref id="B10">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Brochard</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Dufour</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Després</surname>
<given-names>O.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Effect of musical expertise on visuospatial abilities: evidence from reaction times and mental imagery</article-title>
.
<source>Brain Cogn</source>
.
<volume>54</volume>
,
<fpage>103</fpage>
<lpage>109</lpage>
<pub-id pub-id-type="doi">10.1016/S0278-2626(03)00264-1</pub-id>
<pub-id pub-id-type="pmid">14980450</pub-id>
</mixed-citation>
</ref>
<ref id="B11">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cason</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Schön</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Rhythmic priming enhances the phonological processing of speech</article-title>
.
<source>Neuropsychologia</source>
<volume>50</volume>
,
<fpage>2652</fpage>
<lpage>2658</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2012.07.018</pub-id>
<pub-id pub-id-type="pmid">22828660</pub-id>
</mixed-citation>
</ref>
<ref id="B12">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chartrand</surname>
<given-names>J.-P.</given-names>
</name>
<name>
<surname>Belin</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Superior voice timbre processing in musicians</article-title>
.
<source>Neurosci. Lett</source>
.
<volume>405</volume>
,
<fpage>164</fpage>
<lpage>167</lpage>
<pub-id pub-id-type="doi">10.1016/j.neulet.2006.06.053</pub-id>
<pub-id pub-id-type="pmid">16860471</pub-id>
</mixed-citation>
</ref>
<ref id="B13">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chen</surname>
<given-names>J. L.</given-names>
</name>
<name>
<surname>Penhune</surname>
<given-names>V. B.</given-names>
</name>
<name>
<surname>Zatorre</surname>
<given-names>R. J.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Listening to musical rhythms recruits motor regions of the brain</article-title>
.
<source>Cereb. Cortex</source>
<volume>18</volume>
,
<fpage>2844</fpage>
<lpage>2854</lpage>
<pub-id pub-id-type="doi">10.1093/cercor/bhn042</pub-id>
<pub-id pub-id-type="pmid">18388350</pub-id>
</mixed-citation>
</ref>
<ref id="B14">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chobert</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>François</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Velay</surname>
<given-names>J.-L.</given-names>
</name>
<name>
<surname>Besson</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Twelve months of active musical training in 8- to 10-year-old children enhances the preattentive processing of syllabic duration and voice onset time</article-title>
.
<source>Cereb. Cortex</source>
. [Epub ahead of print].
<pub-id pub-id-type="doi">10.1093/cercor/bhs377</pub-id>
<pub-id pub-id-type="pmid">23236208</pub-id>
</mixed-citation>
</ref>
<ref id="B15">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Corriveau</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Pasquini</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Goswami</surname>
<given-names>U.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Basic auditory processing skills and specific language impairment: a new look at an old hypothesis</article-title>
.
<source>J. Speech Lang. Hear. Res</source>
.
<volume>50</volume>
,
<fpage>647</fpage>
<lpage>666</lpage>
<pub-id pub-id-type="doi">10.1044/1092-4388(2007/046)</pub-id>
<pub-id pub-id-type="pmid">17538107</pub-id>
</mixed-citation>
</ref>
<ref id="B16">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Corriveau</surname>
<given-names>K. H.</given-names>
</name>
<name>
<surname>Goswami</surname>
<given-names>U.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Rhythmic motor entrainment in children with speech and language impairments: tapping to the beat</article-title>
.
<source>Cortex</source>
<volume>45</volume>
,
<fpage>119</fpage>
<lpage>130</lpage>
<pub-id pub-id-type="doi">10.1016/j.cortex.2007.09.008</pub-id>
<pub-id pub-id-type="pmid">19046744</pub-id>
</mixed-citation>
</ref>
<ref id="B17">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dalla Bella</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Peretz</surname>
<given-names>I.</given-names>
</name>
</person-group>
(
<year>1999</year>
).
<article-title>Music agnosias: selective impairments of music recognition after brain damage</article-title>
.
<source>J. New Music Res</source>
.
<volume>28</volume>
,
<fpage>209</fpage>
<lpage>216</lpage>
<pub-id pub-id-type="doi">10.1076/jnmr.28.3.209.3108</pub-id>
</mixed-citation>
</ref>
<ref id="B18">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Degé</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Schwarzer</surname>
<given-names>G.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>The effect of a music program on phonological awareness in preschoolers</article-title>
.
<source>Front. Psychol</source>
.
<volume>2</volume>
:
<issue>124</issue>
<pub-id pub-id-type="doi">10.3389/fpsyg.2011.00124</pub-id>
<pub-id pub-id-type="pmid">21734895</pub-id>
</mixed-citation>
</ref>
<ref id="B19">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Deutsch</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Henthorn</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Marvin</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Xu</surname>
<given-names>H.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Absolute pitch among American and Chinese conservatory students: prevalence differences, and evidence for a speech-related critical period</article-title>
.
<source>J. Acoust. Soc. Am</source>
.
<volume>119</volume>
,
<fpage>719</fpage>
<lpage>722</lpage>
<pub-id pub-id-type="doi">10.1121/1.2151799</pub-id>
<pub-id pub-id-type="pmid">16521731</pub-id>
</mixed-citation>
</ref>
<ref id="B20">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Di Pietro</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Laganaro</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Leemann</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Schnider</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Receptive amusia: temporal auditory processing deficit in a professional musician following a left temporo-parietal lesion</article-title>
.
<source>Neuropsychologia</source>
<volume>42</volume>
,
<fpage>868</fpage>
<lpage>877</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2003.12.004</pub-id>
<pub-id pub-id-type="pmid">14998702</pub-id>
</mixed-citation>
</ref>
<ref id="B21">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Douglas</surname>
<given-names>K. M.</given-names>
</name>
<name>
<surname>Bilkey</surname>
<given-names>D. K.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Amusia is associated with deficits in spatial processing</article-title>
.
<source>Nat. Neurosci</source>
.
<volume>10</volume>
,
<fpage>915</fpage>
<lpage>921</lpage>
<pub-id pub-id-type="doi">10.1038/nn1925</pub-id>
<pub-id pub-id-type="pmid">17589505</pub-id>
</mixed-citation>
</ref>
<ref id="B22">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Field</surname>
<given-names>D. J.</given-names>
</name>
<name>
<surname>Hayes</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Hess</surname>
<given-names>R. F.</given-names>
</name>
</person-group>
(
<year>1993</year>
).
<article-title>Contour integration by the human visual system: evidence for a local “association field”</article-title>
.
<source>Vision Res</source>
.
<volume>33</volume>
,
<fpage>173</fpage>
<lpage>193</lpage>
<pub-id pub-id-type="doi">10.1016/0042-6989(93)90156-Q</pub-id>
<pub-id pub-id-type="pmid">8447091</pub-id>
</mixed-citation>
</ref>
<ref id="B23">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>François</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Chobert</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Besson</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Schön</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Music training for the development of speech segmentation</article-title>
.
<source>Cereb. Cortex</source>
. [Epub ahead of print].
<pub-id pub-id-type="doi">10.1093/cercor/bhs180</pub-id>
<pub-id pub-id-type="pmid">22784606</pub-id>
</mixed-citation>
</ref>
<ref id="B24">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gordon</surname>
<given-names>R. L.</given-names>
</name>
<name>
<surname>Magne</surname>
<given-names>C. L.</given-names>
</name>
<name>
<surname>Large</surname>
<given-names>E. W.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>EEG correlates of song prosody: a new look at the relationship between linguistic and musical rhythm</article-title>
.
<source>Front. Psychol</source>
.
<volume>2</volume>
:
<issue>352</issue>
<pub-id pub-id-type="doi">10.3389/fpsyg.2011.00352</pub-id>
<pub-id pub-id-type="pmid">22144972</pub-id>
</mixed-citation>
</ref>
<ref id="B25">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Goswami</surname>
<given-names>U.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>“Language, music, and children's brains: a rhythmic timing perspective on language and music as cognitive systems,”</article-title>
in
<source>Language and Music as Cognitive Systems</source>
, eds
<person-group person-group-type="editor">
<name>
<surname>Rebuschat</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Rohrmeier</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Hawkins</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Cross</surname>
<given-names>I.</given-names>
</name>
</person-group>
(
<publisher-loc>Oxford, UK</publisher-loc>
:
<publisher-name>Oxford University Press</publisher-name>
),
<fpage>292</fpage>
<lpage>301</lpage>
</mixed-citation>
</ref>
<ref id="B26">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Goswami</surname>
<given-names>U.</given-names>
</name>
<name>
<surname>Gerson</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Astruc</surname>
<given-names>L.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Amplitude envelope perception, phonology and prosodic sensitivity in children with developmental dyslexia</article-title>
.
<source>Read. Writ</source>
.
<volume>23</volume>
,
<fpage>995</fpage>
<lpage>1019</lpage>
<pub-id pub-id-type="doi">10.1007/s11145-009-9186-6</pub-id>
</mixed-citation>
</ref>
<ref id="B27">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Grahn</surname>
<given-names>J. A.</given-names>
</name>
<name>
<surname>Brett</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Rhythm and beat perception in motor areas of the brain</article-title>
.
<source>J. Cogn. Neurosci</source>
.
<volume>19</volume>
,
<fpage>893</fpage>
<lpage>906</lpage>
<pub-id pub-id-type="doi">10.1162/jocn.2007.19.5.893</pub-id>
<pub-id pub-id-type="pmid">17488212</pub-id>
</mixed-citation>
</ref>
<ref id="B28">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Griffiths</surname>
<given-names>T. D.</given-names>
</name>
<name>
<surname>Rees</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Witton</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Cross</surname>
<given-names>P. M.</given-names>
</name>
<name>
<surname>Shakir</surname>
<given-names>R. A.</given-names>
</name>
<name>
<surname>Green</surname>
<given-names>G. G.</given-names>
</name>
</person-group>
(
<year>1997</year>
).
<article-title>Spatial and temporal auditory processing deficits following right hemisphere infarction: a psychophysical study</article-title>
.
<source>Brain</source>
<volume>120</volume>
,
<fpage>785</fpage>
<lpage>794</lpage>
<pub-id pub-id-type="doi">10.1093/brain/120.5.785</pub-id>
<pub-id pub-id-type="pmid">9183249</pub-id>
</mixed-citation>
</ref>
<ref id="B29">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Houston</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Santelmann</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Jusczyk</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>English-learning infants' segmentation of trisyllabic words from fluent speech</article-title>
.
<source>Lang. Cogn. Proc</source>
.
<volume>19</volume>
,
<fpage>97</fpage>
<lpage>136</lpage>
<pub-id pub-id-type="doi">10.1080/01690960344000143</pub-id>
<pub-id pub-id-type="pmid">22088408</pub-id>
</mixed-citation>
</ref>
<ref id="B30">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Huss</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Verney</surname>
<given-names>J. P.</given-names>
</name>
<name>
<surname>Fosker</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Mead</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Goswami</surname>
<given-names>U.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Music, rhythm, rise time perception and developmental dyslexia: perception of musical meter predicts reading and phonology</article-title>
.
<source>Cortex</source>
<volume>47</volume>
,
<fpage>674</fpage>
<lpage>689</lpage>
<pub-id pub-id-type="doi">10.1016/j.cortex.2010.07.010</pub-id>
<pub-id pub-id-type="pmid">20843509</pub-id>
</mixed-citation>
</ref>
<ref id="B31">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hyde</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Peretz</surname>
<given-names>I.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Brains that are out of tune but in time</article-title>
.
<source>Psychol. Sci</source>
.
<volume>15</volume>
,
<fpage>356</fpage>
<lpage>360</lpage>
<pub-id pub-id-type="doi">10.1111/j.0956-7976.2004.00683.x</pub-id>
<pub-id pub-id-type="pmid">15102148</pub-id>
</mixed-citation>
</ref>
<ref id="B32">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jiang</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Hamm</surname>
<given-names>J. P.</given-names>
</name>
<name>
<surname>Lim</surname>
<given-names>V. K.</given-names>
</name>
<name>
<surname>Kirk</surname>
<given-names>I. J.</given-names>
</name>
<name>
<surname>Yang</surname>
<given-names>Y.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Processing melodic contour and speech intonation in congenital amusics with Mandarin Chinese</article-title>
.
<source>Neuropsychologia</source>
<volume>48</volume>
,
<fpage>2630</fpage>
<lpage>2639</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2010.05.009</pub-id>
<pub-id pub-id-type="pmid">20471406</pub-id>
</mixed-citation>
</ref>
<ref id="B33">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jones</surname>
<given-names>L. J.</given-names>
</name>
<name>
<surname>Lucker</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Zalewski</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Brewer</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Drayna</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Phonological processing in adults with deficits in musical pitch recognition</article-title>
.
<source>J. Commun. Disord</source>
.
<volume>42</volume>
,
<fpage>226</fpage>
<lpage>234</lpage>
<pub-id pub-id-type="doi">10.1016/j.jcomdis.2009.01.001</pub-id>
<pub-id pub-id-type="pmid">19233383</pub-id>
</mixed-citation>
</ref>
<ref id="B34">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jones</surname>
<given-names>M. R.</given-names>
</name>
</person-group>
(
<year>1976</year>
).
<article-title>Time, our lost dimension: toward a new theory of perception, attention, and memory</article-title>
.
<source>Psychol. Rev</source>
.
<volume>83</volume>
,
<fpage>323</fpage>
<lpage>335</lpage>
<pub-id pub-id-type="doi">10.1037/0033-295X.83.5.323</pub-id>
<pub-id pub-id-type="pmid">794904</pub-id>
</mixed-citation>
</ref>
<ref id="B35">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jusczyk</surname>
<given-names>P. W.</given-names>
</name>
<name>
<surname>Houston</surname>
<given-names>D. M.</given-names>
</name>
<name>
<surname>Newsome</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>1999</year>
).
<article-title>The beginnings of word segmentation in English-learning infants</article-title>
.
<source>Cogn. Psychol</source>
.
<volume>39</volume>
,
<fpage>159</fpage>
<lpage>207</lpage>
<pub-id pub-id-type="doi">10.1006/cogp.1999.0716</pub-id>
<pub-id pub-id-type="pmid">10631011</pub-id>
</mixed-citation>
</ref>
<ref id="B36">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Juslin</surname>
<given-names>P. N.</given-names>
</name>
<name>
<surname>Laukka</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Communication of emotions in vocal expression and music performance: different channels, same code?</article-title>
<source>Psychol. Bull</source>
.
<volume>129</volume>
,
<fpage>770</fpage>
<lpage>814</lpage>
<pub-id pub-id-type="doi">10.1037/0033-2909.129.5.770</pub-id>
<pub-id pub-id-type="pmid">12956543</pub-id>
</mixed-citation>
</ref>
<ref id="B37">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Knösche</surname>
<given-names>T. R.</given-names>
</name>
<name>
<surname>Neuhaus</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Haueisen</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Alter</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Maess</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Witte</surname>
<given-names>O. W.</given-names>
</name>
<etal></etal>
</person-group>
(
<year>2005</year>
).
<article-title>Perception of phrase structure in music</article-title>
.
<source>Hum. Brain Mapp</source>
.
<volume>24</volume>
,
<fpage>259</fpage>
<lpage>273</lpage>
<pub-id pub-id-type="doi">10.1002/hbm.20088</pub-id>
<pub-id pub-id-type="pmid">15678484</pub-id>
</mixed-citation>
</ref>
<ref id="B38">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kochanski</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Grabe</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Coleman</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Rosner</surname>
<given-names>B.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Loudness predicts prominence: fundamental frequency lends little</article-title>
.
<source>J. Acoust. Soc. Am</source>
.
<volume>118</volume>
,
<fpage>1038</fpage>
<lpage>1054</lpage>
<pub-id pub-id-type="doi">10.1121/1.1923349</pub-id>
<pub-id pub-id-type="pmid">16158659</pub-id>
</mixed-citation>
</ref>
<ref id="B39">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Koelsch</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Gunter</surname>
<given-names>T. C.</given-names>
</name>
<name>
<surname>von Cramon</surname>
<given-names>D. Y.</given-names>
</name>
<name>
<surname>Zysset</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Lohmann</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Friederici</surname>
<given-names>A. D.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>Bach speaks: a cortical “language-network” serves the processing of music</article-title>
.
<source>Neuroimage</source>
<volume>17</volume>
,
<fpage>956</fpage>
<lpage>966</lpage>
<pub-id pub-id-type="doi">10.1006/nimg.2002.1154</pub-id>
<pub-id pub-id-type="pmid">12377169</pub-id>
</mixed-citation>
</ref>
<ref id="B40">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Koelsch</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Kasper</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Sammler</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Schulze</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Gunter</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Friederici</surname>
<given-names>A. D.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Music, language and meaning: brain signatures of semantic processing</article-title>
.
<source>Nat. Neurosci</source>
.
<volume>7</volume>
,
<fpage>302</fpage>
<lpage>307</lpage>
<pub-id pub-id-type="doi">10.1038/nn1197</pub-id>
<pub-id pub-id-type="pmid">14983184</pub-id>
</mixed-citation>
</ref>
<ref id="B41">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kotilahti</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Nissilä</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Näsi</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Lipiäinen</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Noponen</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Meriläinen</surname>
<given-names>P.</given-names>
</name>
<etal></etal>
</person-group>
(
<year>2010</year>
).
<article-title>Hemodynamic responses to speech and music in newborn infants</article-title>
.
<source>Hum. Brain Mapp</source>
.
<volume>31</volume>
,
<fpage>595</fpage>
<lpage>603</lpage>
<pub-id pub-id-type="pmid">19790172</pub-id>
</mixed-citation>
</ref>
<ref id="B42">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kotz</surname>
<given-names>S. A.</given-names>
</name>
<name>
<surname>Schwartze</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Cortical speech processing unplugged: a timely subcortico-cortical framework</article-title>
.
<source>Trends Cogn. Sci</source>
.
<volume>14</volume>
,
<fpage>392</fpage>
<lpage>399</lpage>
<pub-id pub-id-type="doi">10.1016/j.tics.2010.06.005</pub-id>
<pub-id pub-id-type="pmid">20655802</pub-id>
</mixed-citation>
</ref>
<ref id="B43">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kraus</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Chandrasekaran</surname>
<given-names>B.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Music training for the development of auditory skills</article-title>
.
<source>Nat. Rev. Neurosci</source>
.
<volume>11</volume>
,
<fpage>599</fpage>
<lpage>605</lpage>
<pub-id pub-id-type="doi">10.1038/nrn2882</pub-id>
<pub-id pub-id-type="pmid">20648064</pub-id>
</mixed-citation>
</ref>
<ref id="B44">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kubanek</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Brunner</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Gunduz</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Poeppel</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Schalk</surname>
<given-names>G.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<article-title>The tracking of speech envelope in the human cortex</article-title>
.
<source>PLoS ONE</source>
<volume>8</volume>
:
<fpage>e53398</fpage>
<pub-id pub-id-type="doi">10.1371/journal.pone.0053398</pub-id>
<pub-id pub-id-type="pmid">23408924</pub-id>
</mixed-citation>
</ref>
<ref id="B45">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lai</surname>
<given-names>C. S.</given-names>
</name>
<name>
<surname>Fisher</surname>
<given-names>S. E.</given-names>
</name>
<name>
<surname>Hurst</surname>
<given-names>J. A.</given-names>
</name>
<name>
<surname>Vargha-Khadem</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Monaco</surname>
<given-names>A. P.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>A forkhead-domain gene is mutated in a severe speech and language disorder</article-title>
.
<source>Nature</source>
<volume>413</volume>
,
<fpage>519</fpage>
<lpage>523</lpage>
<pub-id pub-id-type="doi">10.1038/35097076</pub-id>
<pub-id pub-id-type="pmid">11586359</pub-id>
</mixed-citation>
</ref>
<ref id="B46">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Large</surname>
<given-names>E. W.</given-names>
</name>
<name>
<surname>Jones</surname>
<given-names>M. R.</given-names>
</name>
</person-group>
(
<year>1999</year>
).
<article-title>The dynamics of attending: how people track time-varying events</article-title>
.
<source>Psychol. Rev</source>
.
<volume>106</volume>
,
<fpage>119</fpage>
<lpage>159</lpage>
<pub-id pub-id-type="doi">10.1037/0033-295X.106.1.119</pub-id>
</mixed-citation>
</ref>
<ref id="B47">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Liberman</surname>
<given-names>A. M.</given-names>
</name>
<name>
<surname>Mattingly</surname>
<given-names>I. G.</given-names>
</name>
</person-group>
(
<year>1985</year>
).
<article-title>The motor theory of speech perception revised</article-title>
.
<source>Cognition</source>
<volume>21</volume>
,
<fpage>1</fpage>
<lpage>36</lpage>
<pub-id pub-id-type="doi">10.1016/0010-0277(85)90021-6</pub-id>
<pub-id pub-id-type="pmid">4075760</pub-id>
</mixed-citation>
</ref>
<ref id="B48">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lieberman</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<year>1960</year>
).
<article-title>Some acoustic correlates of word stress in American English</article-title>
.
<source>J. Acoust. Soc. Am</source>
.
<volume>32</volume>
,
<fpage>451</fpage>
<lpage>454</lpage>
<pub-id pub-id-type="doi">10.1121/1.1908095</pub-id>
</mixed-citation>
</ref>
<ref id="B49">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lima</surname>
<given-names>C. F.</given-names>
</name>
<name>
<surname>Castro</surname>
<given-names>S. L.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Speaking to the trained ear: musical expertise enhances the recognition of emotions in speech prosody</article-title>
.
<source>Emotion</source>
<volume>11</volume>
,
<fpage>1021</fpage>
<lpage>1031</lpage>
<pub-id pub-id-type="doi">10.1037/a0024521</pub-id>
<pub-id pub-id-type="pmid">21942696</pub-id>
</mixed-citation>
</ref>
<ref id="B50">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Liu</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Patel</surname>
<given-names>A. D.</given-names>
</name>
<name>
<surname>Fourcin</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Stewart</surname>
<given-names>L.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Intonation processing in congenital amusia: discrimination, identification and imitation</article-title>
.
<source>Brain</source>
<volume>133</volume>
,
<fpage>1682</fpage>
<lpage>1693</lpage>
<pub-id pub-id-type="doi">10.1093/brain/awq089</pub-id>
<pub-id pub-id-type="pmid">20418275</pub-id>
</mixed-citation>
</ref>
<ref id="B51">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Luo</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Poeppel</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Cortical oscillations in auditory perception and speech: evidence for two temporal windows in human auditory cortex</article-title>
.
<source>Front. Psychol</source>
.
<volume>3</volume>
:
<issue>170</issue>
<pub-id pub-id-type="doi">10.3389/fpsyg.2012.00170</pub-id>
<pub-id pub-id-type="pmid">22666214</pub-id>
</mixed-citation>
</ref>
<ref id="B52">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Magne</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Schön</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Besson</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Musician children detect pitch violations in both music and language better than nonmusician children: behavioral and electrophysiological approaches</article-title>
.
<source>J. Cogn. Neurosci</source>
.
<volume>18</volume>
,
<fpage>199</fpage>
<lpage>211</lpage>
<pub-id pub-id-type="doi">10.1162/jocn.2006.18.2.199</pub-id>
<pub-id pub-id-type="pmid">16494681</pub-id>
</mixed-citation>
</ref>
<ref id="B53">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Marie</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Delogu</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Lampis</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Belardinelli</surname>
<given-names>M. O.</given-names>
</name>
<name>
<surname>Besson</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2011a</year>
).
<article-title>Influence of musical expertise on segmental and tonal processing in Mandarin Chinese</article-title>
.
<source>J. Cogn. Neurosci</source>
.
<volume>23</volume>
,
<fpage>2701</fpage>
<lpage>2715</lpage>
<pub-id pub-id-type="doi">10.1162/jocn.2010.21585</pub-id>
<pub-id pub-id-type="pmid">20946053</pub-id>
</mixed-citation>
</ref>
<ref id="B54">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Marie</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Magne</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Besson</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2011b</year>
).
<article-title>Musicians and the metric structure of words</article-title>
.
<source>J. Cogn. Neurosci</source>
.
<volume>23</volume>
,
<fpage>294</fpage>
<lpage>305</lpage>
<pub-id pub-id-type="doi">10.1162/jocn.2010.21413</pub-id>
<pub-id pub-id-type="pmid">20044890</pub-id>
</mixed-citation>
</ref>
<ref id="B55">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Marie</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Kujala</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Besson</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Musical and linguistic expertise influence pre-attentive and attentive processing of non-speech sounds</article-title>
.
<source>Cortex</source>
<volume>48</volume>
,
<fpage>447</fpage>
<lpage>457</lpage>
<pub-id pub-id-type="doi">10.1016/j.cortex.2010.11.006</pub-id>
<pub-id pub-id-type="pmid">21189226</pub-id>
</mixed-citation>
</ref>
<ref id="B56">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Marques</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Moreno</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Castro</surname>
<given-names>S. L.</given-names>
</name>
<name>
<surname>Besson</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Musicians detect pitch violation in a foreign language better than nonmusicians: behavioral and electrophysiological evidence</article-title>
.
<source>J. Cogn. Neurosci</source>
.
<volume>19</volume>
,
<fpage>1453</fpage>
<lpage>1463</lpage>
<pub-id pub-id-type="doi">10.1162/jocn.2007.19.9.1453</pub-id>
<pub-id pub-id-type="pmid">17714007</pub-id>
</mixed-citation>
</ref>
<ref id="B57">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mendez</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>Generalized auditory agnosia with spared music recognition in a left-hander. Analysis of a case with a right temporal stroke</article-title>
.
<source>Cortex</source>
<volume>37</volume>
,
<fpage>139</fpage>
<lpage>150</lpage>
<pub-id pub-id-type="doi">10.1016/S0010-9452(08)70563-X</pub-id>
<pub-id pub-id-type="pmid">11292159</pub-id>
</mixed-citation>
</ref>
<ref id="B58">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Milovanov</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Huotilainen</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Välimäki</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Esquef</surname>
<given-names>P. A.</given-names>
</name>
<name>
<surname>Tervaniemi</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Musical aptitude and second language pronunciation skills in school-aged children: neural and behavioral evidence</article-title>
.
<source>Brain Res</source>
.
<volume>1194</volume>
,
<fpage>81</fpage>
<lpage>89</lpage>
<pub-id pub-id-type="doi">10.1016/j.brainres.2007.11.042</pub-id>
<pub-id pub-id-type="pmid">18182165</pub-id>
</mixed-citation>
</ref>
<ref id="B59">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Mithen</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<source>The Singing Neanderthals: The Origins of Music, Language, Mind and Body</source>
.
<publisher-loc>Cambridge</publisher-loc>
:
<publisher-name>Harvard University Press</publisher-name>
</mixed-citation>
</ref>
<ref id="B60">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Moreno</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Marques</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Santos</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Santos</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Castro</surname>
<given-names>S. L.</given-names>
</name>
<name>
<surname>Besson</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Musical training influences linguistic abilities in 8-year-old children: more evidence for brain plasticity</article-title>
.
<source>Cereb. Cortex</source>
<volume>19</volume>
,
<fpage>712</fpage>
<lpage>723</lpage>
<pub-id pub-id-type="doi">10.1093/cercor/bhn120</pub-id>
<pub-id pub-id-type="pmid">18832336</pub-id>
</mixed-citation>
</ref>
<ref id="B61">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Morton</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Jassem</surname>
<given-names>W.</given-names>
</name>
</person-group>
(
<year>1965</year>
).
<article-title>Acoustic correlates of stress</article-title>
.
<source>Lang. Speech</source>
<volume>8</volume>
,
<fpage>159</fpage>
<lpage>181</lpage>
<pub-id pub-id-type="pmid">5832574</pub-id>
</mixed-citation>
</ref>
<ref id="B62">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Musacchia</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Sams</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Skoe</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Kraus</surname>
<given-names>N.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Musicians have enhanced subcortical auditory and audiovisual processing of speech and music</article-title>
.
<source>Proc. Natl. Acad. Sci. U.S.A</source>
.
<volume>104</volume>
,
<fpage>15894</fpage>
<lpage>15898</lpage>
<pub-id pub-id-type="doi">10.1073/pnas.0701498104</pub-id>
<pub-id pub-id-type="pmid">17898180</pub-id>
</mixed-citation>
</ref>
<ref id="B63">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Musacchia</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Strait</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Kraus</surname>
<given-names>N.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Relationships between behavior, brainstem and cortical encoding of seen and heard speech in musicians and nonmusicians</article-title>
.
<source>Hear. Res</source>
.
<volume>241</volume>
,
<fpage>34</fpage>
<lpage>42</lpage>
<pub-id pub-id-type="doi">10.1016/j.heares.2008.04.013</pub-id>
<pub-id pub-id-type="pmid">18562137</pub-id>
</mixed-citation>
</ref>
<ref id="B64">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nan</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Sun</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Peretz</surname>
<given-names>I.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Congenital amusia in speakers of a tone language: association with lexical tone agnosia</article-title>
.
<source>Brain</source>
<volume>133</volume>
,
<fpage>2635</fpage>
<lpage>2642</lpage>
<pub-id pub-id-type="doi">10.1093/brain/awq178</pub-id>
<pub-id pub-id-type="pmid">20685803</pub-id>
</mixed-citation>
</ref>
<ref id="B65">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Nooteboom</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>1997</year>
).
<article-title>“The prosody of speech: melody and rhythm,”</article-title>
in
<source>The Handbook of Phonetic Sciences</source>
, eds
<person-group person-group-type="editor">
<name>
<surname>Hardcastle</surname>
<given-names>W. J.</given-names>
</name>
<name>
<surname>Laver</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<publisher-loc>Oxford, UK</publisher-loc>
:
<publisher-name>Blackwell</publisher-name>
),
<fpage>640</fpage>
<lpage>673</lpage>
</mixed-citation>
</ref>
<ref id="B66">
<mixed-citation publication-type="webpage">
<person-group person-group-type="author">
<name>
<surname>O'Halpin</surname>
<given-names>R.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<source>The Perception and Production of Stress and Intonation by Children with Cochlear Implants</source>
. Dissertation, University College London. Available online at:
<ext-link ext-link-type="uri" xlink:href="http://eprints.ucl.ac.uk/20406/">http://eprints.ucl.ac.uk/20406/</ext-link>
</mixed-citation>
</ref>
<ref id="B67">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Patel</surname>
<given-names>A. D.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>The relationship of music to the melody of speech and to syntactic processing disorders in aphasia</article-title>
.
<source>Ann. N.Y. Acad. Sci</source>
.
<volume>1060</volume>
,
<fpage>59</fpage>
<lpage>70</lpage>
<pub-id pub-id-type="doi">10.1196/annals.1360.005</pub-id>
<pub-id pub-id-type="pmid">16597751</pub-id>
</mixed-citation>
</ref>
<ref id="B68">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Patel</surname>
<given-names>A. D.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<source>Music, Language, and the Brain</source>
.
<publisher-loc>New York, NY</publisher-loc>
:
<publisher-name>Oxford University Press</publisher-name>
</mixed-citation>
</ref>
<ref id="B69">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Patel</surname>
<given-names>A. D.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>“Language, music, and the brain: a resource-sharing framework,”</article-title>
in
<source>Language and Music as Cognitive Systems</source>
, eds
<person-group person-group-type="editor">
<name>
<surname>Rebuschat</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Rohrmeier</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Hawkins</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Cross</surname>
<given-names>I.</given-names>
</name>
</person-group>
(
<publisher-loc>Oxford, UK</publisher-loc>
:
<publisher-name>Oxford University Press</publisher-name>
),
<fpage>204</fpage>
<lpage>223</lpage>
</mixed-citation>
</ref>
<ref id="B70">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Patel</surname>
<given-names>A. D.</given-names>
</name>
<name>
<surname>Daniele</surname>
<given-names>J. R.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>An empirical comparison of rhythm in language and music</article-title>
.
<source>Cognition</source>
<volume>87</volume>
,
<fpage>B35</fpage>
<lpage>B45</lpage>
<pub-id pub-id-type="doi">10.1016/S0010-0277(02)00187-7</pub-id>
<pub-id pub-id-type="pmid">12499110</pub-id>
</mixed-citation>
</ref>
<ref id="B71">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Patel</surname>
<given-names>A. D.</given-names>
</name>
<name>
<surname>Gibson</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Ratner</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Besson</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Holcomb</surname>
<given-names>P. J.</given-names>
</name>
</person-group>
(
<year>1998</year>
).
<article-title>Processing syntactic relations in language and music: an event-related potential study</article-title>
.
<source>J. Cogn. Neurosci</source>
.
<volume>10</volume>
,
<fpage>717</fpage>
<lpage>733</lpage>
<pub-id pub-id-type="doi">10.1162/089892998563121</pub-id>
<pub-id pub-id-type="pmid">9831740</pub-id>
</mixed-citation>
</ref>
<ref id="B72">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Patel</surname>
<given-names>A. D.</given-names>
</name>
<name>
<surname>Iversen</surname>
<given-names>J. R.</given-names>
</name>
<name>
<surname>Wassenaar</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Hagoort</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<year>2008a</year>
).
<article-title>Musical syntactic processing in agrammatic Broca's aphasia</article-title>
.
<source>Aphasiology</source>
<volume>22</volume>
,
<fpage>776</fpage>
<lpage>789</lpage>
<pub-id pub-id-type="doi">10.1080/02687030701803804</pub-id>
</mixed-citation>
</ref>
<ref id="B73">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Patel</surname>
<given-names>A. D.</given-names>
</name>
<name>
<surname>Wong</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Foxton</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Lochy</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Peretz</surname>
<given-names>I.</given-names>
</name>
</person-group>
(
<year>2008b</year>
).
<article-title>Speech intonation perception deficits in musical tone deafness (congenital amusia)</article-title>
.
<source>Music Percept</source>
.
<volume>25</volume>
,
<fpage>357</fpage>
<lpage>368</lpage>
<pub-id pub-id-type="doi">10.1525/mp.2008.25.4.357</pub-id>
</mixed-citation>
</ref>
<ref id="B74">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Patel</surname>
<given-names>A. D.</given-names>
</name>
<name>
<surname>Foxton</surname>
<given-names>J. M.</given-names>
</name>
<name>
<surname>Griffiths</surname>
<given-names>T. D.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Musically tone-deaf individuals have difficulty discriminating intonation contours extracted from speech</article-title>
.
<source>Brain Cogn</source>
.
<volume>59</volume>
,
<fpage>310</fpage>
<lpage>313</lpage>
<pub-id pub-id-type="doi">10.1016/j.bandc.2004.10.003</pub-id>
<pub-id pub-id-type="pmid">16337871</pub-id>
</mixed-citation>
</ref>
<ref id="B75">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Patston</surname>
<given-names>L. L. M.</given-names>
</name>
<name>
<surname>Corballis</surname>
<given-names>M. C.</given-names>
</name>
<name>
<surname>Hogg</surname>
<given-names>S. L.</given-names>
</name>
<name>
<surname>Tippett</surname>
<given-names>L. J.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>The neglect of musicians: line bisection reveals an opposite bias</article-title>
.
<source>Psychol. Sci</source>
.
<volume>17</volume>
,
<fpage>1029</fpage>
<lpage>1031</lpage>
<pub-id pub-id-type="doi">10.1111/j.1467-9280.2006.01823.x</pub-id>
<pub-id pub-id-type="pmid">17201783</pub-id>
</mixed-citation>
</ref>
<ref id="B76">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Peretz</surname>
<given-names>I.</given-names>
</name>
</person-group>
(
<year>1990</year>
).
<article-title>Processing of local and global musical information by unilateral brain-damaged patients</article-title>
.
<source>Brain</source>
<volume>113</volume>
,
<fpage>1185</fpage>
<lpage>1205</lpage>
<pub-id pub-id-type="doi">10.1093/brain/113.4.1185</pub-id>
<pub-id pub-id-type="pmid">2397389</pub-id>
</mixed-citation>
</ref>
<ref id="B77">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Peretz</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Champod</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Hyde</surname>
<given-names>K.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Varieties of musical disorders: the Montreal Battery of Evaluation of Amusia</article-title>
.
<source>Ann. N.Y. Acad. Sci</source>
.
<volume>999</volume>
,
<fpage>58</fpage>
<lpage>75</lpage>
<pub-id pub-id-type="doi">10.1196/annals.1284.006</pub-id>
<pub-id pub-id-type="pmid">14681118</pub-id>
</mixed-citation>
</ref>
<ref id="B78">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Peretz</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Coltheart</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Modularity of music processing</article-title>
.
<source>Nat. Neurosci</source>
.
<volume>6</volume>
,
<fpage>688</fpage>
<lpage>691</lpage>
<pub-id pub-id-type="doi">10.1038/nn1083</pub-id>
<pub-id pub-id-type="pmid">12830160</pub-id>
</mixed-citation>
</ref>
<ref id="B79">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Peretz</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Gosselin</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Tillmann</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Cuddy</surname>
<given-names>L. L.</given-names>
</name>
<name>
<surname>Gagnon</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Trimmer</surname>
<given-names>C. G.</given-names>
</name>
<etal></etal>
</person-group>
(
<year>2008</year>
).
<article-title>On-line identification of congenital amusia</article-title>
.
<source>Music Percept</source>
.
<volume>25</volume>
,
<fpage>331</fpage>
<lpage>343</lpage>
<pub-id pub-id-type="doi">10.1525/mp.2008.25.4.331</pub-id>
<pub-id pub-id-type="pmid">22509257</pub-id>
</mixed-citation>
</ref>
<ref id="B81">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Peretz</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Kolinsky</surname>
<given-names>R.</given-names>
</name>
</person-group>
(
<year>1993</year>
).
<article-title>Boundaries of separability between melody and rhythm in music discrimination: a neuropsychological perspective</article-title>
.
<source>Q. J. Exp. Psychol</source>
.
<volume>46A</volume>
,
<fpage>301</fpage>
<lpage>325</lpage>
<pub-id pub-id-type="pmid">8316639</pub-id>
</mixed-citation>
</ref>
<ref id="B82">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Peretz</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Zatorre</surname>
<given-names>R. J.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Brain organization for music processing</article-title>
.
<source>Annu. Rev. Psychol</source>
.
<volume>56</volume>
,
<fpage>89</fpage>
<lpage>114</lpage>
<pub-id pub-id-type="doi">10.1146/annurev.psych.56.091103.070225</pub-id>
<pub-id pub-id-type="pmid">15709930</pub-id>
</mixed-citation>
</ref>
<ref id="B83">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pfordresher</surname>
<given-names>P. Q.</given-names>
</name>
<name>
<surname>Brown</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Enhanced production and perception of musical pitch in tone language speakers</article-title>
.
<source>Atten. Percept. Psychophys</source>
.
<volume>71</volume>
,
<fpage>1385</fpage>
<lpage>1398</lpage>
<pub-id pub-id-type="doi">10.3758/APP.71.6.1385</pub-id>
<pub-id pub-id-type="pmid">19633353</pub-id>
</mixed-citation>
</ref>
<ref id="B84">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Phillips-Silver</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Toiviainen</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Gosselin</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Piché</surname>
<given-names>O.</given-names>
</name>
<name>
<surname>Nozaradan</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Palmer</surname>
<given-names>C.</given-names>
</name>
<etal></etal>
</person-group>
(
<year>2011</year>
).
<article-title>Born to dance but beat deaf: a new form of congenital amusia</article-title>
.
<source>Neuropsychologia</source>
<volume>49</volume>
,
<fpage>961</fpage>
<lpage>969</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2011.02.002</pub-id>
<pub-id pub-id-type="pmid">21316375</pub-id>
</mixed-citation>
</ref>
<ref id="B85">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Pinker</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>1997</year>
).
<source>How the Mind Works</source>
.
<publisher-loc>London</publisher-loc>
:
<publisher-name>Allen Lane</publisher-name>
</mixed-citation>
</ref>
<ref id="B86">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Racette</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Bard</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Peretz</surname>
<given-names>I.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Making non-fluent aphasics speak: sing along!</article-title>
<source>Brain</source>
<volume>129</volume>
,
<fpage>2571</fpage>
<lpage>2584</lpage>
<pub-id pub-id-type="doi">10.1093/brain/awl250</pub-id>
<pub-id pub-id-type="pmid">16959816</pub-id>
</mixed-citation>
</ref>
<ref id="B87">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rauschecker</surname>
<given-names>J. P.</given-names>
</name>
<name>
<surname>Scott</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Maps and streams in the auditory cortex: nonhuman primates illuminate human speech processing</article-title>
.
<source>Nat. Neurosci</source>
.
<volume>12</volume>
,
<fpage>718</fpage>
<lpage>724</lpage>
<pub-id pub-id-type="doi">10.1038/nn.2331</pub-id>
<pub-id pub-id-type="pmid">19471271</pub-id>
</mixed-citation>
</ref>
<ref id="B88">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rogalsky</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Rong</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Saberi</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Hickok</surname>
<given-names>G.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Functional anatomy of language and music perception: temporal and structural factors investigated using functional magnetic resonance imaging</article-title>
.
<source>J. Neurosci</source>
.
<volume>31</volume>
,
<fpage>3843</fpage>
<lpage>3852</lpage>
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.4515-10.2011</pub-id>
<pub-id pub-id-type="pmid">21389239</pub-id>
</mixed-citation>
</ref>
<ref id="B89">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rusconi</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Kwan</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Giordano</surname>
<given-names>B. L.</given-names>
</name>
<name>
<surname>Umiltà</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Butterworth</surname>
<given-names>B.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Spatial representation of pitch height: the SMARC effect</article-title>
.
<source>Cognition</source>
<volume>99</volume>
,
<fpage>113</fpage>
<lpage>129</lpage>
<pub-id pub-id-type="doi">10.1016/j.cognition.2005.01.004</pub-id>
<pub-id pub-id-type="pmid">15925355</pub-id>
</mixed-citation>
</ref>
<ref id="B90">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Saarikallio</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Development and validation of the Brief Music in Mood Regulation scale (B-MMR)</article-title>
.
<source>Music Percept</source>
.
<volume>30</volume>
,
<fpage>97</fpage>
<lpage>105</lpage>
<pub-id pub-id-type="doi">10.1525/mp.2012.30.1.97</pub-id>
</mixed-citation>
</ref>
<ref id="B91">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Schellenberg</surname>
<given-names>E. G.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Long-term positive associations between music lessons and IQ</article-title>
.
<source>J. Educ. Psychol</source>
.
<volume>98</volume>
,
<fpage>457</fpage>
<lpage>468</lpage>
<pub-id pub-id-type="doi">10.1037/0022-0663.98.2.457</pub-id>
</mixed-citation>
</ref>
<ref id="B92">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Schlaug</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Norton</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Marchina</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Zipse</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Wan</surname>
<given-names>C. Y.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>From singing to speaking: facilitating recovery from nonfluent aphasia</article-title>
.
<source>Future Neurol</source>
.
<volume>5</volume>
,
<fpage>657</fpage>
<lpage>665</lpage>
<pub-id pub-id-type="doi">10.2217/fnl.10.44</pub-id>
<pub-id pub-id-type="pmid">21088709</pub-id>
</mixed-citation>
</ref>
<ref id="B93">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Schön</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Gordon</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Campagne</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Magne</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Astésano</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Anton</surname>
<given-names>J. L.</given-names>
</name>
<etal></etal>
</person-group>
(
<year>2010</year>
).
<article-title>Similar cerebral networks in language, music and song perception</article-title>
.
<source>Neuroimage</source>
<volume>51</volume>
,
<fpage>450</fpage>
<lpage>461</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuroimage.2010.02.023</pub-id>
<pub-id pub-id-type="pmid">20156575</pub-id>
</mixed-citation>
</ref>
<ref id="B94">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Schön</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Magne</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Besson</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>The music of speech: music training facilitates pitch processing in both music and language</article-title>
.
<source>Psychophysiology</source>
<volume>41</volume>
,
<fpage>341</fpage>
<lpage>349</lpage>
<pub-id pub-id-type="doi">10.1111/1469-8986.00172.x</pub-id>
<pub-id pub-id-type="pmid">15102118</pub-id>
</mixed-citation>
</ref>
<ref id="B95">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Shahin</surname>
<given-names>A. J.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Neurophysiological influence of musical training on speech perception</article-title>
.
<source>Front. Psychol</source>
.
<volume>2</volume>
:
<issue>126</issue>
<pub-id pub-id-type="doi">10.3389/fpsyg.2011.00126</pub-id>
<pub-id pub-id-type="pmid">21716639</pub-id>
</mixed-citation>
</ref>
<ref id="B96">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Slevc</surname>
<given-names>L. R.</given-names>
</name>
<name>
<surname>Miyake</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Individual differences in second-language proficiency: does musical ability matter?</article-title>
<source>Psychol. Sci</source>
.
<volume>17</volume>
,
<fpage>675</fpage>
<lpage>681</lpage>
<pub-id pub-id-type="doi">10.1111/j.1467-9280.2006.01765.x</pub-id>
<pub-id pub-id-type="pmid">16913949</pub-id>
</mixed-citation>
</ref>
<ref id="B97">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Spinelli</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Grimault</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Meunier</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Welby</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>An intonational cue to word segmentation in phonemically identical sequences</article-title>
.
<source>Atten. Percept. Psychophys</source>
.
<volume>72</volume>
,
<fpage>775</fpage>
<lpage>787</lpage>
<pub-id pub-id-type="doi">10.3758/APP.72.3.775</pub-id>
<pub-id pub-id-type="pmid">20348582</pub-id>
</mixed-citation>
</ref>
<ref id="B98">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Stahl</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Kotz</surname>
<given-names>S. A.</given-names>
</name>
<name>
<surname>Henseler</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Turner</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Geyer</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Rhythm in disguise: why singing may not hold the key to recovery from aphasia</article-title>
.
<source>Brain</source>
<volume>134</volume>
,
<fpage>3083</fpage>
<lpage>3093</lpage>
<pub-id pub-id-type="doi">10.1093/brain/awr240</pub-id>
<pub-id pub-id-type="pmid">21948939</pub-id>
</mixed-citation>
</ref>
<ref id="B99">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Steinhauer</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Alter</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Friederici</surname>
<given-names>A. D.</given-names>
</name>
</person-group>
(
<year>1999</year>
).
<article-title>Brain potentials indicate immediate use of prosodic cues in natural speech processing</article-title>
.
<source>Nat. Neurosci</source>
.
<volume>2</volume>
,
<fpage>191</fpage>
<lpage>196</lpage>
<pub-id pub-id-type="doi">10.1038/5757</pub-id>
<pub-id pub-id-type="pmid">10195205</pub-id>
</mixed-citation>
</ref>
<ref id="B100">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Stewart</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>von Kriegstein</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Warren</surname>
<given-names>J. D.</given-names>
</name>
<name>
<surname>Griffiths</surname>
<given-names>T. D.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Music and the brain: disorders of musical listening</article-title>
.
<source>Brain</source>
<volume>129</volume>
,
<fpage>2533</fpage>
<lpage>2553</lpage>
<pub-id pub-id-type="doi">10.1093/brain/awl171</pub-id>
<pub-id pub-id-type="pmid">16845129</pub-id>
</mixed-citation>
</ref>
<ref id="B101">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Suomi</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Toivanen</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Ylitalo</surname>
<given-names>R.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Durational and tonal correlates of accent in Finnish</article-title>
.
<source>J. Phon</source>
.
<volume>31</volume>
,
<fpage>113</fpage>
<lpage>138</lpage>
<pub-id pub-id-type="doi">10.1016/S0095-4470(02)00074-8</pub-id>
</mixed-citation>
</ref>
<ref id="B102">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tervaniemi</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Hugdahl</surname>
<given-names>K.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Lateralization of auditory-cortex functions</article-title>
.
<source>Brain Res. Rev</source>
.
<volume>43</volume>
,
<fpage>231</fpage>
<lpage>246</lpage>
<pub-id pub-id-type="doi">10.1016/j.brainresrev.2003.08.004</pub-id>
<pub-id pub-id-type="pmid">14629926</pub-id>
</mixed-citation>
</ref>
<ref id="B103">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Thompson</surname>
<given-names>W. F.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>“Exploring variants of amusia: tone deafness, rhythm impairment, and intonation insensitivity,”</article-title>
in
<source>Proceedings of the International Conference on Music Communication Science</source>
, eds
<person-group person-group-type="editor">
<name>
<surname>Schubert</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Buckley</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Eliott</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Koboroff</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Stevens</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<publisher-loc>Sydney, NSW</publisher-loc>
:
<publisher-name>HCSNet</publisher-name>
),
<fpage>159</fpage>
<lpage>163</lpage>
</mixed-citation>
</ref>
<ref id="B104">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Thompson</surname>
<given-names>W. F.</given-names>
</name>
<name>
<surname>Marin</surname>
<given-names>M. M.</given-names>
</name>
<name>
<surname>Stewart</surname>
<given-names>L.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Reduced sensitivity to emotional prosody in congenital amusia rekindles the musical protolanguage hypothesis</article-title>
.
<source>Proc. Natl. Acad. Sci. U.S.A</source>
.
<volume>109</volume>
,
<fpage>19027</fpage>
<lpage>19032</lpage>
<pub-id pub-id-type="doi">10.1073/pnas.1210344109</pub-id>
<pub-id pub-id-type="pmid">23112175</pub-id>
</mixed-citation>
</ref>
<ref id="B105">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Thompson</surname>
<given-names>W. F.</given-names>
</name>
<name>
<surname>Schellenberg</surname>
<given-names>E. G.</given-names>
</name>
<name>
<surname>Husain</surname>
<given-names>G.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Decoding speech prosody: do music lessons help?</article-title>
<source>Emotion</source>
<volume>4</volume>
,
<fpage>46</fpage>
<lpage>64</lpage>
<pub-id pub-id-type="doi">10.1037/1528-3542.4.1.46</pub-id>
<pub-id pub-id-type="pmid">15053726</pub-id>
</mixed-citation>
</ref>
<ref id="B106">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tillmann</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Burnham</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Nguyen</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Grimault</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Gosselin</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Peretz</surname>
<given-names>I.</given-names>
</name>
</person-group>
(
<year>2011a</year>
).
<article-title>Congenital amusia (or tone-deafness) interferes with pitch processing in tone languages</article-title>
.
<source>Front. Psychol</source>
.
<volume>2</volume>
:
<issue>120</issue>
<pub-id pub-id-type="doi">10.3389/fpsyg.2011.00120</pub-id>
<pub-id pub-id-type="pmid">21734894</pub-id>
</mixed-citation>
</ref>
<ref id="B107">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tillmann</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Rusconi</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Traube</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Butterworth</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Umiltà</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Peretz</surname>
<given-names>I.</given-names>
</name>
</person-group>
(
<year>2011b</year>
).
<article-title>Fine-grained pitch processing of music and speech in congenital amusia</article-title>
.
<source>J. Acoust. Soc. Am</source>
.
<volume>130</volume>
,
<fpage>4089</fpage>
<lpage>4096</lpage>
<pub-id pub-id-type="doi">10.1121/1.3658447</pub-id>
<pub-id pub-id-type="pmid">22225063</pub-id>
</mixed-citation>
</ref>
<ref id="B108">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tillmann</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Janata</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Bharucha</surname>
<given-names>J. J.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Activation of the inferior frontal cortex in musical priming</article-title>
.
<source>Cogn. Brain Res</source>
.
<volume>16</volume>
,
<fpage>145</fpage>
<lpage>161</lpage>
<pub-id pub-id-type="doi">10.1016/S0926-6410(02)00245-8</pub-id>
<pub-id pub-id-type="pmid">12668222</pub-id>
</mixed-citation>
</ref>
<ref id="B109">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tillmann</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Jolicoeur</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Ishihara</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Gosselin</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Bertrand</surname>
<given-names>O.</given-names>
</name>
<name>
<surname>Rossetti</surname>
<given-names>Y.</given-names>
</name>
<etal></etal>
</person-group>
(
<year>2010</year>
).
<article-title>The amusic brain: lost in music, but not in space</article-title>
.
<source>PLoS ONE</source>
<volume>5</volume>
:
<fpage>e10173</fpage>
<pub-id pub-id-type="doi">10.1371/journal.pone.0010173</pub-id>
<pub-id pub-id-type="pmid">20422050</pub-id>
</mixed-citation>
</ref>
<ref id="B110">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Torppa</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Faulkner</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Vainio</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Järvikivi</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>“Acquisition of focus by normal hearing and cochlear implanted children: the role of musical experience,”</article-title>
in
<source>Proceedings of the 5th International Conference on Speech Prosody</source>
(
<publisher-loc>Chicago, IL</publisher-loc>
).</mixed-citation>
</ref>
<ref id="B111">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Trehub</surname>
<given-names>S. E.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>The developmental origins of musicality</article-title>
.
<source>Nat. Neurosci</source>
.
<volume>6</volume>
,
<fpage>669</fpage>
<lpage>673</lpage>
<pub-id pub-id-type="doi">10.1038/nn1084</pub-id>
<pub-id pub-id-type="pmid">12830157</pub-id>
</mixed-citation>
</ref>
<ref id="B112">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Vainio</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Järvikivi</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Focus in production: tonal shape, intensity and word order</article-title>
.
<source>J. Acoust. Soc. Am</source>
.
<volume>121</volume>
,
<fpage>EL55</fpage>
<lpage>EL61</lpage>
<pub-id pub-id-type="doi">10.1121/1.2424264</pub-id>
<pub-id pub-id-type="pmid">17348546</pub-id>
</mixed-citation>
</ref>
<ref id="B113">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Vogel</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Raimy</surname>
<given-names>E.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>The acquisition of compound vs. phrasal stress: the role of prosodic constituents</article-title>
.
<source>J. Child Lang</source>
.
<volume>29</volume>
,
<fpage>225</fpage>
<lpage>250</lpage>
<pub-id pub-id-type="doi">10.1017/S0305000902005020</pub-id>
<pub-id pub-id-type="pmid">12109370</pub-id>
</mixed-citation>
</ref>
<ref id="B114">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Vroomen</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Tuomainen</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>de Gelder</surname>
<given-names>B.</given-names>
</name>
</person-group>
(
<year>1998</year>
).
<article-title>The roles of word stress and vowel harmony in speech segmentation</article-title>
.
<source>J. Mem. Lang</source>
.
<volume>38</volume>
,
<fpage>133</fpage>
<lpage>149</lpage>
<pub-id pub-id-type="doi">10.1006/jmla.1997.2548</pub-id>
</mixed-citation>
</ref>
<ref id="B115">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Wechsler</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>1997</year>
).
<source>Wechsler Adult Intelligence Scale, 3rd Edn</source>
.
<publisher-loc>New York, NY</publisher-loc>
:
<publisher-name>Psychological Corporation</publisher-name>
</mixed-citation>
</ref>
<ref id="B116">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Wechsler</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<source>Wechsler Adult Intelligence Scale, Käsikirja [Manual], 3rd Edn</source>
.
<publisher-loc>Helsinki</publisher-loc>
:
<publisher-name>Psykologien Kustannus Oy</publisher-name>
</mixed-citation>
</ref>
<ref id="B117">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Williamson</surname>
<given-names>V. J.</given-names>
</name>
<name>
<surname>Cocchini</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Stewart</surname>
<given-names>L.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>The relationship between pitch and space in congenital amusia</article-title>
.
<source>Brain Cogn</source>
.
<volume>76</volume>
,
<fpage>70</fpage>
<lpage>76</lpage>
<pub-id pub-id-type="doi">10.1016/j.bandc.2011.02.016</pub-id>
<pub-id pub-id-type="pmid">21440971</pub-id>
</mixed-citation>
</ref>
<ref id="B118">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wong</surname>
<given-names>P. C.</given-names>
</name>
<name>
<surname>Skoe</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Russo</surname>
<given-names>N. M.</given-names>
</name>
<name>
<surname>Dees</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Kraus</surname>
<given-names>N.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Musical experience shapes human brainstem encoding of linguistic pitch patterns</article-title>
.
<source>Nat. Neurosci</source>
.
<volume>10</volume>
,
<fpage>420</fpage>
<lpage>422</lpage>
<pub-id pub-id-type="pmid">17351633</pub-id>
</mixed-citation>
</ref>
<ref id="B119">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zatorre</surname>
<given-names>R. J.</given-names>
</name>
<name>
<surname>Baum</surname>
<given-names>S. R.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Musical melody and speech intonation: singing a different tune</article-title>
.
<source>PLoS Biol</source>
.
<volume>10</volume>
:
<fpage>e1001372</fpage>
<pub-id pub-id-type="doi">10.1371/journal.pbio.1001372</pub-id>
<pub-id pub-id-type="pmid">22859909</pub-id>
</mixed-citation>
</ref>
<ref id="B120">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zatorre</surname>
<given-names>R. J.</given-names>
</name>
<name>
<surname>Gandour</surname>
<given-names>J. T.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Neural specializations for speech and pitch: moving beyond the dichotomies</article-title>
.
<source>Philos. Trans. R. Soc. Lond. B Biol. Sci</source>
.
<volume>363</volume>
,
<fpage>1087</fpage>
<lpage>1104</lpage>
<pub-id pub-id-type="doi">10.1098/rstb.2007.2161</pub-id>
<pub-id pub-id-type="pmid">17890188</pub-id>
</mixed-citation>
</ref>
</ref-list>
</back>
</pmc>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Wicri/Sarre/explor/MusicSarreV3/Data/Pmc/Corpus
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000146  | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Corpus/biblio.hfd -nk 000146  | SxmlIndent | more
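
For example, to list only the cited article titles in this record (a minimal sketch, assuming the standard Unix grep utility is available alongside the Dilib tools; HfdSelect and SxmlIndent are used exactly as above):

HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000146  | SxmlIndent | grep "<article-title>"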

To add a link to this page within the Wicri network

{{Explor lien
   |wiki=    Wicri/Sarre
   |area=    MusicSarreV3
   |flux=    Pmc
   |étape=   Corpus
   |type=    RBID
   |clé=     
   |texte=   
}}

This area was generated with Dilib version V0.6.33.
Data generation: Sun Jul 15 18:16:09 2018. Site generation: Tue Mar 5 19:21:25 2024