Celtic music exploration server

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information has therefore not been validated.

Memory for melody and key in childhood

Internal identifier: 000F54 (Pmc/Corpus); previous: 000F53; next: 000F55

Authors: E. Glenn Schellenberg; Jaimie Poon; Michael W. Weiss

Source:

RBID : PMC:5659795

Abstract

After only two exposures to previously unfamiliar melodies, adults remember the tunes for over a week and the key for over a day. Here, we examined the development of long-term memory for melody and key. Listeners in three age groups (7- to 8-year-olds, 9- to 11-year-olds, and adults) heard two presentations of each of 12 unfamiliar melodies. After a 10-min delay, they heard the same 12 old melodies intermixed with 12 new melodies. Half of the old melodies were transposed up or down by six semitones from initial exposure. Listeners rated how well they recognized the melodies from the exposure phase. Recognition was better for old than for new melodies, for adults compared to children, and for older compared to younger children. Recognition ratings were also higher for old melodies presented in the same key at test as exposure, and the detrimental effect of the transposition affected all age groups similarly. Although memory for melody improves with age and exposure to music, implicit memory for key appears to be adult-like by 7 years of age.


Url:
DOI: 10.1371/journal.pone.0187115
PubMed: 29077726
PubMed Central: 5659795

Links to Exploration step

PMC:5659795

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Memory for melody and key in childhood</title>
<author>
<name sortKey="Schellenberg, E Glenn" sort="Schellenberg, E Glenn" uniqKey="Schellenberg E" first="E. Glenn" last="Schellenberg">E. Glenn Schellenberg</name>
<affiliation>
<nlm:aff id="aff001">
<addr-line>Department of Psychology, University of Toronto Mississauga, Mississauga, Ontario, Canada</addr-line>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Poon, Jaimie" sort="Poon, Jaimie" uniqKey="Poon J" first="Jaimie" last="Poon">Jaimie Poon</name>
<affiliation>
<nlm:aff id="aff001">
<addr-line>Department of Psychology, University of Toronto Mississauga, Mississauga, Ontario, Canada</addr-line>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Weiss, Michael W" sort="Weiss, Michael W" uniqKey="Weiss M" first="Michael W." last="Weiss">Michael W. Weiss</name>
<affiliation>
<nlm:aff id="aff001">
<addr-line>Department of Psychology, University of Toronto Mississauga, Mississauga, Ontario, Canada</addr-line>
</nlm:aff>
</affiliation>
<affiliation>
<nlm:aff id="aff002">
<addr-line>International Laboratory for Brain, Music, and Sound Research, Department of Psychology, Université de Montréal, Montréal, Québec, Canada</addr-line>
</nlm:aff>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">29077726</idno>
<idno type="pmc">5659795</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5659795</idno>
<idno type="RBID">PMC:5659795</idno>
<idno type="doi">10.1371/journal.pone.0187115</idno>
<date when="2017">2017</date>
<idno type="wicri:Area/Pmc/Corpus">000F54</idno>
<idno type="wicri:explorRef" wicri:stream="Pmc" wicri:step="Corpus" wicri:corpus="PMC">000F54</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Memory for melody and key in childhood</title>
<author>
<name sortKey="Schellenberg, E Glenn" sort="Schellenberg, E Glenn" uniqKey="Schellenberg E" first="E. Glenn" last="Schellenberg">E. Glenn Schellenberg</name>
<affiliation>
<nlm:aff id="aff001">
<addr-line>Department of Psychology, University of Toronto Mississauga, Mississauga, Ontario, Canada</addr-line>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Poon, Jaimie" sort="Poon, Jaimie" uniqKey="Poon J" first="Jaimie" last="Poon">Jaimie Poon</name>
<affiliation>
<nlm:aff id="aff001">
<addr-line>Department of Psychology, University of Toronto Mississauga, Mississauga, Ontario, Canada</addr-line>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Weiss, Michael W" sort="Weiss, Michael W" uniqKey="Weiss M" first="Michael W." last="Weiss">Michael W. Weiss</name>
<affiliation>
<nlm:aff id="aff001">
<addr-line>Department of Psychology, University of Toronto Mississauga, Mississauga, Ontario, Canada</addr-line>
</nlm:aff>
</affiliation>
<affiliation>
<nlm:aff id="aff002">
<addr-line>International Laboratory for Brain, Music, and Sound Research, Department of Psychology, Université de Montréal, Montréal, Québec, Canada</addr-line>
</nlm:aff>
</affiliation>
</author>
</analytic>
<series>
<title level="j">PLoS ONE</title>
<idno type="eISSN">1932-6203</idno>
<imprint>
<date when="2017">2017</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>After only two exposures to previously unfamiliar melodies, adults remember the tunes for over a week and the key for over a day. Here, we examined the development of long-term memory for melody and key. Listeners in three age groups (7- to 8-year-olds, 9- to 11-year-olds, and adults) heard two presentations of each of 12 unfamiliar melodies. After a 10-min delay, they heard the same 12
<italic>old</italic>
melodies intermixed with 12
<italic>new</italic>
melodies. Half of the old melodies were transposed up or down by six semitones from initial exposure. Listeners rated how well they recognized the melodies from the exposure phase. Recognition was better for old than for new melodies, for adults compared to children, and for older compared to younger children. Recognition ratings were also higher for old melodies presented in the same key at test as exposure, and the detrimental effect of the transposition affected all age groups similarly. Although memory for melody improves with age and exposure to music, implicit memory for key appears to be adult-like by 7 years of age.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Deutsch, D" uniqKey="Deutsch D">D Deutsch</name>
</author>
<author>
<name sortKey="Deutsch, D" uniqKey="Deutsch D">D Deutsch</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schellenberg, Eg" uniqKey="Schellenberg E">EG Schellenberg</name>
</author>
<author>
<name sortKey="Trehub, Se" uniqKey="Trehub S">SE Trehub</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Levitin, Dj" uniqKey="Levitin D">DJ Levitin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Frieler, K" uniqKey="Frieler K">K Frieler</name>
</author>
<author>
<name sortKey="Fischinger, T" uniqKey="Fischinger T">T Fischinger</name>
</author>
<author>
<name sortKey="Schlemmer, K" uniqKey="Schlemmer K">K Schlemmer</name>
</author>
<author>
<name sortKey="Lothwesen, K" uniqKey="Lothwesen K">K Lothwesen</name>
</author>
<author>
<name sortKey="Jakubowski, K" uniqKey="Jakubowski K">K Jakubowski</name>
</author>
<author>
<name sortKey="Mullensiefen, D" uniqKey="Mullensiefen D">D Müllensiefen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schellenberg, Eg" uniqKey="Schellenberg E">EG Schellenberg</name>
</author>
<author>
<name sortKey="Stalinski, Sm" uniqKey="Stalinski S">SM Stalinski</name>
</author>
<author>
<name sortKey="Marks, Bm" uniqKey="Marks B">BM Marks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schellenberg, Eg" uniqKey="Schellenberg E">EG Schellenberg</name>
</author>
<author>
<name sortKey="Habashi, P" uniqKey="Habashi P">P Habashi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Saffran, Jr" uniqKey="Saffran J">JR Saffran</name>
</author>
<author>
<name sortKey="Griepentrog, Gj" uniqKey="Griepentrog G">GJ Griepentrog</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Saffran, Jr" uniqKey="Saffran J">JR Saffran</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Russo, Fa" uniqKey="Russo F">FA Russo</name>
</author>
<author>
<name sortKey="Windell, Dl" uniqKey="Windell D">DL Windell</name>
</author>
<author>
<name sortKey="Cuddy, Ll" uniqKey="Cuddy L">LL Cuddy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chin, Cs" uniqKey="Chin C">CS Chin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schellenberg, Eg" uniqKey="Schellenberg E">EG Schellenberg</name>
</author>
<author>
<name sortKey="Trehub, Se" uniqKey="Trehub S">SE Trehub</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Trehub, Se" uniqKey="Trehub S">SE Trehub</name>
</author>
<author>
<name sortKey="Schellenberg, Eg" uniqKey="Schellenberg E">EG Schellenberg</name>
</author>
<author>
<name sortKey="Nakata, T" uniqKey="Nakata T">T Nakata</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Volkova, A" uniqKey="Volkova A">A Volkova</name>
</author>
<author>
<name sortKey="Trehub, Se" uniqKey="Trehub S">SE Trehub</name>
</author>
<author>
<name sortKey="Schellenberg, Eg" uniqKey="Schellenberg E">EG Schellenberg</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Plantinga, J" uniqKey="Plantinga J">J Plantinga</name>
</author>
<author>
<name sortKey="Trainor, Lj" uniqKey="Trainor L">LJ Trainor</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stalinski, Sm" uniqKey="Stalinski S">SM Stalinski</name>
</author>
<author>
<name sortKey="Schellenberg, Eg" uniqKey="Schellenberg E">EG Schellenberg</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Weiss, Mw" uniqKey="Weiss M">MW Weiss</name>
</author>
<author>
<name sortKey="Schellenberg, Eg" uniqKey="Schellenberg E">EG Schellenberg</name>
</author>
<author>
<name sortKey="Trehub, Se" uniqKey="Trehub S">SE Trehub</name>
</author>
<author>
<name sortKey="Dawber, Ej" uniqKey="Dawber E">EJ Dawber</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Trainor, Lj" uniqKey="Trainor L">LJ Trainor</name>
</author>
<author>
<name sortKey="Trehub, Se" uniqKey="Trehub S">SE Trehub</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Newcombe, Ns" uniqKey="Newcombe N">NS Newcombe</name>
</author>
<author>
<name sortKey="Lloyd, Me" uniqKey="Lloyd M">ME Lloyd</name>
</author>
<author>
<name sortKey="Ratliff, Kr" uniqKey="Ratliff K">KR Ratliff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cowan, N" uniqKey="Cowan N">N Cowan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pressley, M" uniqKey="Pressley M">M Pressley</name>
</author>
<author>
<name sortKey="Borkowski, Jg" uniqKey="Borkowski J">JG Borkowski</name>
</author>
<author>
<name sortKey="Johnson, Cj" uniqKey="Johnson C">CJ Johnson</name>
</author>
<author>
<name sortKey="Mcdaniel, Ma" uniqKey="Mcdaniel M">MA McDaniel</name>
</author>
<author>
<name sortKey="Pressley, M" uniqKey="Pressley M">M Pressley</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Best, Jr" uniqKey="Best J">JR Best</name>
</author>
<author>
<name sortKey="Miller, Ph" uniqKey="Miller P">PH Miller</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zelazo, Pd" uniqKey="Zelazo P">PD Zelazo</name>
</author>
<author>
<name sortKey="Carlson, Sm" uniqKey="Carlson S">SM Carlson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cowan, N" uniqKey="Cowan N">N Cowan</name>
</author>
<author>
<name sortKey="Naveh Benjamin, M" uniqKey="Naveh Benjamin M">M Naveh-Benjamin</name>
</author>
<author>
<name sortKey="Kilb, A" uniqKey="Kilb A">A Kilb</name>
</author>
<author>
<name sortKey="Saults, Js" uniqKey="Saults J">JS Saults</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kail, Rv" uniqKey="Kail R">RV Kail</name>
</author>
<author>
<name sortKey="Ferrer, E" uniqKey="Ferrer E">E Ferrer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Thelan, E" uniqKey="Thelan E">E Thelan</name>
</author>
<author>
<name sortKey="Smith, Le" uniqKey="Smith L">LE Smith</name>
</author>
<author>
<name sortKey="Damon, W" uniqKey="Damon W">W Damon</name>
</author>
<author>
<name sortKey="Lerner, Rm" uniqKey="Lerner R">RM Lerner</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hunter, Pg" uniqKey="Hunter P">PG Hunter</name>
</author>
<author>
<name sortKey="Schellenberg, Eg" uniqKey="Schellenberg E">EG Schellenberg</name>
</author>
<author>
<name sortKey="Stalinski, Sm" uniqKey="Stalinski S">SM Stalinski</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Weiss, Mw" uniqKey="Weiss M">MW Weiss</name>
</author>
<author>
<name sortKey="Trehub, Se" uniqKey="Trehub S">SE Trehub</name>
</author>
<author>
<name sortKey="Schellenberg, Eg" uniqKey="Schellenberg E">EG Schellenberg</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Weiss, Mw" uniqKey="Weiss M">MW Weiss</name>
</author>
<author>
<name sortKey="Vanzella, P" uniqKey="Vanzella P">P Vanzella</name>
</author>
<author>
<name sortKey="Schellenberg, Eg" uniqKey="Schellenberg E">EG Schellenberg</name>
</author>
<author>
<name sortKey="Trehub, Se" uniqKey="Trehub S">SE Trehub</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Halpern, Ar" uniqKey="Halpern A">AR Halpern</name>
</author>
<author>
<name sortKey="Bartlett, Jc" uniqKey="Bartlett J">JC Bartlett</name>
</author>
<author>
<name sortKey="Dowling, Wj" uniqKey="Dowling W">WJ Dowling</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dowling, Wj" uniqKey="Dowling W">WJ Dowling</name>
</author>
<author>
<name sortKey="Bartlett, Jc" uniqKey="Bartlett J">JC Bartlett</name>
</author>
<author>
<name sortKey="Halpern, Ar" uniqKey="Halpern A">AR Halpern</name>
</author>
<author>
<name sortKey="Andrews, Mw" uniqKey="Andrews M">MW Andrews</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Corrigall, Ka" uniqKey="Corrigall K">KA Corrigall</name>
</author>
<author>
<name sortKey="Trainor, Lj" uniqKey="Trainor L">LJ Trainor</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hannon, Ee" uniqKey="Hannon E">EE Hannon</name>
</author>
<author>
<name sortKey="Trehub, Se" uniqKey="Trehub S">SE Trehub</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hannon, Ee" uniqKey="Hannon E">EE Hannon</name>
</author>
<author>
<name sortKey="Trehub, Se" uniqKey="Trehub S">SE Trehub</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schneider, W" uniqKey="Schneider W">W Schneider</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bigand, E" uniqKey="Bigand E">E Bigand</name>
</author>
<author>
<name sortKey="Poulin Charronnat, B" uniqKey="Poulin Charronnat B">B Poulin-Charronnat</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schellenberg, Eg" uniqKey="Schellenberg E">EG Schellenberg</name>
</author>
<author>
<name sortKey="Bigand, E" uniqKey="Bigand E">E Bigand</name>
</author>
<author>
<name sortKey="Poulin Charronnat, B" uniqKey="Poulin Charronnat B">B Poulin-Charronnat</name>
</author>
<author>
<name sortKey="Garnier, C" uniqKey="Garnier C">C Garnier</name>
</author>
<author>
<name sortKey="Stevens, C" uniqKey="Stevens C">C Stevens</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bigand, E" uniqKey="Bigand E">E Bigand</name>
</author>
<author>
<name sortKey="Tillmann, B" uniqKey="Tillmann B">B Tillmann</name>
</author>
<author>
<name sortKey="Poulin, B" uniqKey="Poulin B">B Poulin</name>
</author>
<author>
<name sortKey="D Adamo, Da" uniqKey="D Adamo D">DA D'Adamo</name>
</author>
<author>
<name sortKey="Madurell, F" uniqKey="Madurell F">F Madurell</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tillmann, B" uniqKey="Tillmann B">B Tillmann</name>
</author>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I Peretz</name>
</author>
<author>
<name sortKey="Bigand, E" uniqKey="Bigand E">E Bigand</name>
</author>
<author>
<name sortKey="Gosselin, N" uniqKey="Gosselin N">N Gosselin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hauser, Md" uniqKey="Hauser M">MD Hauser</name>
</author>
<author>
<name sortKey="Mcdermott, J" uniqKey="Mcdermott J">J McDermott</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Trehub, Se" uniqKey="Trehub S">SE Trehub</name>
</author>
<author>
<name sortKey="Trainor, L" uniqKey="Trainor L">L Trainor</name>
</author>
<author>
<name sortKey="Rovee Collier, Ck" uniqKey="Rovee Collier C">CK Rovee-Collier</name>
</author>
<author>
<name sortKey="Lipsitt, Lp" uniqKey="Lipsitt L">LP Lipsitt</name>
</author>
<author>
<name sortKey="Hayne, H" uniqKey="Hayne H">H Hayne</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bergeson, Tr" uniqKey="Bergeson T">TR Bergeson</name>
</author>
<author>
<name sortKey="Trehub, Se" uniqKey="Trehub S">SE Trehub</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mcdermott, Jh" uniqKey="Mcdermott J">JH McDermott</name>
</author>
<author>
<name sortKey="Dolan, R" uniqKey="Dolan R">R Dolan</name>
</author>
<author>
<name sortKey="Sharot, T" uniqKey="Sharot T">T Sharot</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bruckert, L" uniqKey="Bruckert L">L Bruckert</name>
</author>
<author>
<name sortKey="Bestelmeyer, P" uniqKey="Bestelmeyer P">P Bestelmeyer</name>
</author>
<author>
<name sortKey="Latinus, M" uniqKey="Latinus M">M Latinus</name>
</author>
<author>
<name sortKey="Rouger, J" uniqKey="Rouger J">J Rouger</name>
</author>
<author>
<name sortKey="Charest, I" uniqKey="Charest I">I Charest</name>
</author>
<author>
<name sortKey="Rousselet, Ga" uniqKey="Rousselet G">GA Rousselet</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Juslin, Pn" uniqKey="Juslin P">PN Juslin</name>
</author>
<author>
<name sortKey="Laukka, P" uniqKey="Laukka P">P Laukka</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Halpern, Ar" uniqKey="Halpern A">AR Halpern</name>
</author>
<author>
<name sortKey="Bartlett, Jc" uniqKey="Bartlett J">JC Bartlett</name>
</author>
<author>
<name sortKey="Jones, Mr" uniqKey="Jones M">MR Jones</name>
</author>
<author>
<name sortKey="Fay, Rr" uniqKey="Fay R">RR Fay</name>
</author>
<author>
<name sortKey="Popper, An" uniqKey="Popper A">AN Popper</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mcauley, Jd" uniqKey="Mcauley J">JD McAuley</name>
</author>
<author>
<name sortKey="Stevens, C" uniqKey="Stevens C">C Stevens</name>
</author>
<author>
<name sortKey="Humphreys, Ms" uniqKey="Humphreys M">MS Humphreys</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Halpern, Ar" uniqKey="Halpern A">AR Halpern</name>
</author>
<author>
<name sortKey="Mullensiefen, D" uniqKey="Mullensiefen D">D Müllensiefen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Weiss, Mw" uniqKey="Weiss M">MW Weiss</name>
</author>
<author>
<name sortKey="Trehub, Se" uniqKey="Trehub S">SE Trehub</name>
</author>
<author>
<name sortKey="Schellenberg, Eg" uniqKey="Schellenberg E">EG Schellenberg</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Halpern, Ar" uniqKey="Halpern A">AR Halpern</name>
</author>
<author>
<name sortKey="Bartlett, Jc" uniqKey="Bartlett J">JC Bartlett</name>
</author>
<author>
<name sortKey="Dowling, Wj" uniqKey="Dowling W">WJ Dowling</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schellenberg, Ef" uniqKey="Schellenberg E">EF Schellenberg</name>
</author>
<author>
<name sortKey="Moreno, S" uniqKey="Moreno S">S Moreno</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Andrade, Pe" uniqKey="Andrade P">PE Andrade</name>
</author>
<author>
<name sortKey="Vanzella, P" uniqKey="Vanzella P">P Vanzella</name>
</author>
<author>
<name sortKey="Andrade, Ov" uniqKey="Andrade O">OV Andrade</name>
</author>
<author>
<name sortKey="Schellenberg, Eg" uniqKey="Schellenberg E">EG Schellenberg</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">PLoS One</journal-id>
<journal-id journal-id-type="iso-abbrev">PLoS ONE</journal-id>
<journal-id journal-id-type="publisher-id">plos</journal-id>
<journal-id journal-id-type="pmc">plosone</journal-id>
<journal-title-group>
<journal-title>PLoS ONE</journal-title>
</journal-title-group>
<issn pub-type="epub">1932-6203</issn>
<publisher>
<publisher-name>Public Library of Science</publisher-name>
<publisher-loc>San Francisco, CA USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">29077726</article-id>
<article-id pub-id-type="pmc">5659795</article-id>
<article-id pub-id-type="doi">10.1371/journal.pone.0187115</article-id>
<article-id pub-id-type="publisher-id">PONE-D-17-18227</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Research Article</subject>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Biology and Life Sciences</subject>
<subj-group>
<subject>Neuroscience</subject>
<subj-group>
<subject>Cognitive Science</subject>
<subj-group>
<subject>Cognition</subject>
<subj-group>
<subject>Memory</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Biology and Life Sciences</subject>
<subj-group>
<subject>Neuroscience</subject>
<subj-group>
<subject>Learning and Memory</subject>
<subj-group>
<subject>Memory</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>People and Places</subject>
<subj-group>
<subject>Population Groupings</subject>
<subj-group>
<subject>Age Groups</subject>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Biology and Life Sciences</subject>
<subj-group>
<subject>Neuroscience</subject>
<subj-group>
<subject>Sensory Perception</subject>
<subj-group>
<subject>Hearing</subject>
<subj-group>
<subject>Pitch Perception</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Biology and Life Sciences</subject>
<subj-group>
<subject>Psychology</subject>
<subj-group>
<subject>Sensory Perception</subject>
<subj-group>
<subject>Hearing</subject>
<subj-group>
<subject>Pitch Perception</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Social Sciences</subject>
<subj-group>
<subject>Psychology</subject>
<subj-group>
<subject>Sensory Perception</subject>
<subj-group>
<subject>Hearing</subject>
<subj-group>
<subject>Pitch Perception</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Biology and Life Sciences</subject>
<subj-group>
<subject>Neuroscience</subject>
<subj-group>
<subject>Cognitive Science</subject>
<subj-group>
<subject>Cognitive Psychology</subject>
<subj-group>
<subject>Music Cognition</subject>
<subj-group>
<subject>Music Perception</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Biology and Life Sciences</subject>
<subj-group>
<subject>Psychology</subject>
<subj-group>
<subject>Cognitive Psychology</subject>
<subj-group>
<subject>Music Cognition</subject>
<subj-group>
<subject>Music Perception</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Social Sciences</subject>
<subj-group>
<subject>Psychology</subject>
<subj-group>
<subject>Cognitive Psychology</subject>
<subj-group>
<subject>Music Cognition</subject>
<subj-group>
<subject>Music Perception</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Biology and Life Sciences</subject>
<subj-group>
<subject>Neuroscience</subject>
<subj-group>
<subject>Sensory Perception</subject>
<subj-group>
<subject>Music Perception</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Biology and Life Sciences</subject>
<subj-group>
<subject>Psychology</subject>
<subj-group>
<subject>Sensory Perception</subject>
<subj-group>
<subject>Music Perception</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Social Sciences</subject>
<subj-group>
<subject>Psychology</subject>
<subj-group>
<subject>Sensory Perception</subject>
<subj-group>
<subject>Music Perception</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Biology and Life Sciences</subject>
<subj-group>
<subject>Neuroscience</subject>
<subj-group>
<subject>Cognitive Science</subject>
<subj-group>
<subject>Cognition</subject>
<subj-group>
<subject>Memory</subject>
<subj-group>
<subject>Long-Term Memory</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Biology and Life Sciences</subject>
<subj-group>
<subject>Neuroscience</subject>
<subj-group>
<subject>Learning and Memory</subject>
<subj-group>
<subject>Memory</subject>
<subj-group>
<subject>Long-Term Memory</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>People and Places</subject>
<subj-group>
<subject>Population Groupings</subject>
<subj-group>
<subject>Age Groups</subject>
<subj-group>
<subject>Children</subject>
<subj-group>
<subject>Infants</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>People and Places</subject>
<subj-group>
<subject>Population Groupings</subject>
<subj-group>
<subject>Families</subject>
<subj-group>
<subject>Children</subject>
<subj-group>
<subject>Infants</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>People and Places</subject>
<subj-group>
<subject>Population Groupings</subject>
<subj-group>
<subject>Age Groups</subject>
<subj-group>
<subject>Children</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>People and Places</subject>
<subj-group>
<subject>Population Groupings</subject>
<subj-group>
<subject>Families</subject>
<subj-group>
<subject>Children</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Biology and Life Sciences</subject>
<subj-group>
<subject>Neuroscience</subject>
<subj-group>
<subject>Cognitive Science</subject>
<subj-group>
<subject>Cognitive Psychology</subject>
<subj-group>
<subject>Music Cognition</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Biology and Life Sciences</subject>
<subj-group>
<subject>Psychology</subject>
<subj-group>
<subject>Cognitive Psychology</subject>
<subj-group>
<subject>Music Cognition</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Social Sciences</subject>
<subj-group>
<subject>Psychology</subject>
<subj-group>
<subject>Cognitive Psychology</subject>
<subj-group>
<subject>Music Cognition</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Memory for melody and key in childhood</article-title>
<alt-title alt-title-type="running-head">Memory for melody and key in childhood</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<contrib-id authenticated="true" contrib-id-type="orcid">http://orcid.org/0000-0003-3681-6020</contrib-id>
<name>
<surname>Schellenberg</surname>
<given-names>E. Glenn</given-names>
</name>
<role content-type="http://credit.casrai.org/">Conceptualization</role>
<role content-type="http://credit.casrai.org/">Formal analysis</role>
<role content-type="http://credit.casrai.org/">Funding acquisition</role>
<role content-type="http://credit.casrai.org/">Methodology</role>
<role content-type="http://credit.casrai.org/">Project administration</role>
<role content-type="http://credit.casrai.org/">Supervision</role>
<role content-type="http://credit.casrai.org/">Writing – original draft</role>
<role content-type="http://credit.casrai.org/">Writing – review & editing</role>
<xref ref-type="aff" rid="aff001">
<sup>1</sup>
</xref>
<xref ref-type="corresp" rid="cor001">*</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Poon</surname>
<given-names>Jaimie</given-names>
</name>
<role content-type="http://credit.casrai.org/">Data curation</role>
<role content-type="http://credit.casrai.org/">Methodology</role>
<role content-type="http://credit.casrai.org/">Writing – review & editing</role>
<xref ref-type="aff" rid="aff001">
<sup>1</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Weiss</surname>
<given-names>Michael W.</given-names>
</name>
<role content-type="http://credit.casrai.org/">Data curation</role>
<role content-type="http://credit.casrai.org/">Formal analysis</role>
<role content-type="http://credit.casrai.org/">Supervision</role>
<role content-type="http://credit.casrai.org/">Writing – review & editing</role>
<xref ref-type="aff" rid="aff001">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="aff002">
<sup>2</sup>
</xref>
</contrib>
</contrib-group>
<aff id="aff001">
<label>1</label>
<addr-line>Department of Psychology, University of Toronto Mississauga, Mississauga, Ontario, Canada</addr-line>
</aff>
<aff id="aff002">
<label>2</label>
<addr-line>International Laboratory for Brain, Music, and Sound Research, Department of Psychology, Université de Montréal, Montréal, Québec, Canada</addr-line>
</aff>
<contrib-group>
<contrib contrib-type="editor">
<name>
<surname>Jäncke</surname>
<given-names>Lutz</given-names>
</name>
<role>Editor</role>
<xref ref-type="aff" rid="edit1"></xref>
</contrib>
</contrib-group>
<aff id="edit1">
<addr-line>University of Zurich, SWITZERLAND</addr-line>
</aff>
<author-notes>
<fn fn-type="COI-statement" id="coi001">
<p>
<bold>Competing Interests: </bold>
The authors have declared that no competing interests exist.</p>
</fn>
<corresp id="cor001">* E-mail:
<email>g.schellenberg@utoronto.ca</email>
</corresp>
</author-notes>
<pub-date pub-type="epub">
<day>27</day>
<month>10</month>
<year>2017</year>
</pub-date>
<pub-date pub-type="collection">
<year>2017</year>
</pub-date>
<volume>12</volume>
<issue>10</issue>
<elocation-id>e0187115</elocation-id>
<history>
<date date-type="received">
<day>11</day>
<month>5</month>
<year>2017</year>
</date>
<date date-type="accepted">
<day>13</day>
<month>10</month>
<year>2017</year>
</date>
</history>
<permissions>
<copyright-statement>© 2017 Schellenberg et al</copyright-statement>
<copyright-year>2017</copyright-year>
<copyright-holder>Schellenberg et al</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<license-p>This is an open access article distributed under the terms of the
<ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution License</ext-link>
, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.</license-p>
</license>
</permissions>
<self-uri content-type="pdf" xlink:href="pone.0187115.pdf"></self-uri>
<abstract>
<p>After only two exposures to previously unfamiliar melodies, adults remember the tunes for over a week and the key for over a day. Here, we examined the development of long-term memory for melody and key. Listeners in three age groups (7- to 8-year-olds, 9- to 11-year-olds, and adults) heard two presentations of each of 12 unfamiliar melodies. After a 10-min delay, they heard the same 12
<italic>old</italic>
melodies intermixed with 12
<italic>new</italic>
melodies. Half of the old melodies were transposed up or down by six semitones from initial exposure. Listeners rated how well they recognized the melodies from the exposure phase. Recognition was better for old than for new melodies, for adults compared to children, and for older compared to younger children. Recognition ratings were also higher for old melodies presented in the same key at test as exposure, and the detrimental effect of the transposition affected all age groups similarly. Although memory for melody improves with age and exposure to music, implicit memory for key appears to be adult-like by 7 years of age.</p>
</abstract>
<funding-group>
<award-group id="award001">
<funding-source>
<institution-wrap>
<institution-id institution-id-type="funder-id">http://dx.doi.org/10.13039/501100000038</institution-id>
<institution>Natural Sciences and Engineering Research Council of Canada</institution>
</institution-wrap>
</funding-source>
<principal-award-recipient>
<contrib-id authenticated="true" contrib-id-type="orcid">http://orcid.org/0000-0003-3681-6020</contrib-id>
<name>
<surname>Schellenberg</surname>
<given-names>E. Glenn</given-names>
</name>
</principal-award-recipient>
</award-group>
<funding-statement>This work was supported by the Natural Sciences and Engineering Research council of Canada (EGS). The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.</funding-statement>
</funding-group>
<counts>
<fig-count count="2"></fig-count>
<table-count count="0"></table-count>
<page-count count="11"></page-count>
</counts>
<custom-meta-group>
<custom-meta id="data-availability">
<meta-name>Data Availability</meta-name>
<meta-value>All relevant data are within the paper and its Supporting Information files.</meta-value>
</custom-meta>
</custom-meta-group>
</article-meta>
<notes>
<title>Data Availability</title>
<p>All relevant data are within the paper and its Supporting Information files.</p>
</notes>
</front>
<body>
<sec sec-type="intro" id="sec001">
<title>Introduction</title>
<p>Melodies are abstractions, based on relations between consecutive notes in terms of pitch and time. Accordingly, changes in so-called “surface” features (i.e., key, timbre, and tempo) do not alter a melody’s identity even when they are readily perceptible. For example, adults easily identify a familiar song (e.g.,
<italic>Yankee Doodle</italic>
) presented in a novel key, at a novel tempo, and in an unfamiliar timbre. The term relative pitch (RP) refers to the fact that memory for the pitches of melodies is based on relations (i.e., intervals) between consecutive tones, rather than actual pitch values. As such, the relation between tones with fundamental frequencies of 200 and 300 Hz is the same as that between tones of 300 and 450 Hz. In both instances, the higher tone has a frequency 1.5 times that of the lower tone (i.e., the difference in pitch between the first and second
<italic>Twinkles</italic>
of
<italic>Twinkle Twinkle Little</italic>
Star). RP allows individuals with no music training, who have no explicit knowledge of musical intervals, to recognize a familiar melody presented in a novel key, and to perceive when one note of a familiar song is sung sharp (too high) or flat (too low) in relation to tones that precede or follow it.</p>
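As an illustration of the ratio arithmetic above (not part of the article), the interval between two tones can be expressed in semitones as 12 times the base-2 logarithm of their frequency ratio; a ratio of 1.5 corresponds to roughly seven semitones, a perfect fifth. A minimal Python sketch, with an illustrative function name:

    import math

    def interval_in_semitones(f1_hz, f2_hz):
        """Signed size of the interval between two tones, in semitones."""
        return 12 * math.log2(f2_hz / f1_hz)

    print(interval_in_semitones(200, 300))  # ~7.02 semitones (a perfect fifth)
    print(interval_in_semitones(300, 450))  # same ratio (1.5), so the same interval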
<p>Absolute pitch (AP), by contrast, refers to the rare ability to remember the specific pitch of musical tones (e.g., middle C). Individuals with AP can produce and label individual tones presented in isolation, which means that they have memory for specific pitches
<italic>and</italic>
their names [
<xref rid="pone.0187115.ref001" ref-type="bibr">1</xref>
]. Nevertheless, even individuals without AP and no music training have memory for
<italic>key</italic>
, which refers to the pitch level of a piece of music rather than a single tone. For example, musically untrained adults are better than chance at identifying the original key of a familiar recording when contrasted with the same recording shifted by only one or two semitones [
<xref rid="pone.0187115.ref002" ref-type="bibr">2</xref>
]. When asked to sing a familiar pop song, most adults sing in a key that is close to the original pitch [
<xref rid="pone.0187115.ref003" ref-type="bibr">3</xref>
,
<xref rid="pone.0187115.ref004" ref-type="bibr">4</xref>
]. In the present report, we use the term “key” interchangeably with pitch level. Accordingly, an octave transposition of a melody represents a 12-semitone change in pitch level (and key in this context), even though in
<italic>musical</italic>
terms the key would not change.</p>
<p>Adults also remember the key of novel melodies heard for the first time in the laboratory [
<xref rid="pone.0187115.ref005" ref-type="bibr">5</xref>
]. In one study [
<xref rid="pone.0187115.ref006" ref-type="bibr">6</xref>
], musically trained and untrained adults listened to a set of unfamiliar melodies, with each melody presented twice. After a delay of 10 min, 1 day, or 1 week, they heard the same (old) melodies as well as an equal number of novel melodies. Half of the old melodies were transposed upward or downward by six semitones (6 ST). Higher recognition ratings for old than for new melodies confirmed that listeners remembered the pitch relations that defined the tunes. In fact, there was no evidence of forgetting over the course of the week. Higher ratings for old melodies presented in the original key—compared to those that were transposed—confirmed that listeners also implicitly remembered the key. The negative effect of the transposition was evident after a 10-min and 1-day delay, but not after 1 week. In short, the results revealed two important findings: (1) adult listeners’ memory for melodies is relatively immune to changes in the delay between exposure and test, and (2) their mental representations include information about key for more than one but fewer than seven days.</p>
<p>In the present investigation, we asked whether memory for melody changes with age. Specific questions included: Do children form long-term memories for previously unfamiliar melodies rapidly, as adults do? Is memory for key evident in young children? If so, is memory for key stronger, weaker, or the same in childhood as it is in adulthood? These issues are important theoretically because scholars have proposed that infants begin life perceiving pitch absolutely [
<xref rid="pone.0187115.ref007" ref-type="bibr">7</xref>
,
<xref rid="pone.0187115.ref008" ref-type="bibr">8</xref>
]. A change to RP processing is thought to occur during childhood, when familiar songs are heard in different keys, as, for example, when a father sings the same song much lower than a mother. One consequence is the putative rarity of AP, and the related notion that very few listeners form mental representations of music that contain absolute information about pitch.</p>
<p>In line with this perspective, 5-year-olds outperform adults in a task that requires them to identify a “special note” from a set of seven notes [
<xref rid="pone.0187115.ref009" ref-type="bibr">9</xref>
]. Infants also attend preferentially to the actual tones in a statistical learning task, whereas adults attend preferentially to relations between consecutive tones [
<xref rid="pone.0187115.ref007" ref-type="bibr">7</xref>
,
<xref rid="pone.0187115.ref008" ref-type="bibr">8</xref>
]. The bias seen in infancy and childhood may be maintained if formal music training begins early in life. In fact, most individuals with AP started music lessons before the age of 7 [
<xref rid="pone.0187115.ref010" ref-type="bibr">10</xref>
]. Nevertheless, most individuals with early music training do
<italic>not</italic>
have AP, which contributes to its rarity and implicates a role for genetics or some other learning experience, such as speaking a tone language [
<xref rid="pone.0187115.ref001" ref-type="bibr">1</xref>
].</p>
<p>A different perspective holds that individuals perceive pitch absolutely and relatively across the lifespan. As with adults [
<xref rid="pone.0187115.ref002" ref-type="bibr">2</xref>
], 9- to 12-year-olds can identify the original key of familiar recordings [
<xref rid="pone.0187115.ref011" ref-type="bibr">11</xref>
]. Earlier in development, increases in age are predictive of
<italic>better</italic>
memory for key [
<xref rid="pone.0187115.ref012" ref-type="bibr">12</xref>
], although even infants remember the key of familiar melodies when natural stimuli are used (e.g., sung lullabies) [
<xref rid="pone.0187115.ref013" ref-type="bibr">13</xref>
], but not if the melodies are created electronically [
<xref rid="pone.0187115.ref014" ref-type="bibr">14</xref>
]. Other evidence documents that infants remember the pitch relations that define melodies, just as older individuals do. For example, in one study, 6-month-olds were exposed daily to a piano melody over the course of a week [
<xref rid="pone.0187115.ref014" ref-type="bibr">14</xref>
]. On the eighth day, they demonstrated a looking-time preference for a novel melody over the familiarized melody, even when the familiar melody was transposed.</p>
<p>Stalinski and Schellenberg [
<xref rid="pone.0187115.ref015" ref-type="bibr">15</xref>
] attempted to reconcile these different views by documenting that listeners of all ages perceive pitch absolutely
<italic>and</italic>
relatively, although a bias for absolute processing in early childhood becomes a bias for relative processing in adulthood. Five- to 12-year-olds and adults listened to pairs of melodies that were the same, different (reordered tones of the same melody), transposed, or different
<italic>and</italic>
transposed. Participants rated the similarity of each pair. Listeners of all ages rated transposed melodies and melodic changes to be less similar than identical melodies. For younger children, however, the transposition was more salient than the melodic change. With increases in age, the melodic change gradually became more salient than the transposition. By adulthood, a different melody was equally dissimilar whether it was transposed or not.</p>
<p>We know that infants and children remember the key of recordings they have heard multiple times in the same key [
<xref rid="pone.0187115.ref011" ref-type="bibr">11</xref>
<xref rid="pone.0187115.ref013" ref-type="bibr">13</xref>
], and that children remember novel melodies heard for the first time in the laboratory [
<xref rid="pone.0187115.ref016" ref-type="bibr">16</xref>
]. It is unclear, however, whether children remember the key of previously unfamiliar tunes. In the present study, we predicted that long-term melodic memory would improve with age, as in previous research that reported improvements in melody recognition from 5 to 10 years of age [
<xref rid="pone.0187115.ref016" ref-type="bibr">16</xref>
]. During middle childhood, implicit knowledge of conventions governing Western melodies improves with age [
<xref rid="pone.0187115.ref017" ref-type="bibr">17</xref>
], as does episodic memory in general [
<xref rid="pone.0187115.ref018" ref-type="bibr">18</xref>
], along with related improvements in working memory [
<xref rid="pone.0187115.ref019" ref-type="bibr">19</xref>
], mnemonic strategies [
<xref rid="pone.0187115.ref020" ref-type="bibr">20</xref>
], executive functions [
<xref rid="pone.0187115.ref021" ref-type="bibr">21</xref>
,
<xref rid="pone.0187115.ref022" ref-type="bibr">22</xref>
], feature binding [
<xref rid="pone.0187115.ref023" ref-type="bibr">23</xref>
], and processing speed [
<xref rid="pone.0187115.ref024" ref-type="bibr">24</xref>
]. Dynamic Systems Theory holds that different aspects of behavior mature at different rates [
<xref rid="pone.0187115.ref025" ref-type="bibr">25</xref>
], such that multiple factors could lead to age-related differences in memory for key.</p>
<p>The children in the present study were 7 to 11 years of age. By age 7, children demonstrate an adult-like advantage for recognition of melodies presented in a vocal compared to an instrumental timbre [
<xref rid="pone.0187115.ref016" ref-type="bibr">16</xref>
]. Earlier in development, however, vocal melodies tend to be falsely “recognized” even when they are heard for the first time [
<xref rid="pone.0187115.ref016" ref-type="bibr">16</xref>
]. Thus, we did not test younger children, who focus on surface features at the expense of relations that define a melody. We tested a comparison group of adults, however, in order to document mature listeners’ performance with our stimulus melodies. We expected to replicate the finding that transposing the melodies from exposure to test would negatively impact recognition for adults [
<xref rid="pone.0187115.ref005" ref-type="bibr">5</xref>
,
<xref rid="pone.0187115.ref006" ref-type="bibr">6</xref>
]. For children, the available literature allowed for three possibilities: compared to adults, the transposition could have a stronger detrimental effect, a weaker effect, or a similar effect.</p>
</sec>
<sec sec-type="materials|methods" id="sec002">
<title>Materials and methods</title>
<p>The study was approved by the Research Ethics Board of the University of Toronto.</p>
<sec id="sec003">
<title>Participants</title>
<p>The sample had 98 listeners recruited from a middle-class Canadian suburb. Participants were sampled from three age groups without regard to gender or music training. There were 32 7- to 8-year-olds (14 boys, 18 girls), 34 9- to 11-year-olds (14 boys, 20 girls), and 32 adults (5 men, 27 women). All participants had normal hearing, according to parent- and self-reports for children and adults, respectively. These age groups were chosen based on developmental changes in melodic processing, which have been documented with measures of perceptual similarity [
<xref rid="pone.0187115.ref015" ref-type="bibr">15</xref>
], recognition [
<xref rid="pone.0187115.ref016" ref-type="bibr">16</xref>
], and perceived emotion [
<xref rid="pone.0187115.ref026" ref-type="bibr">26</xref>
]. The group of younger children had a smaller age range than the older group because developmental change is particularly rapid earlier in life.</p>
<p>The 7- to 8-year-olds, hereafter
<italic>younger children</italic>
, had a mean age of 98 months (
<italic>SD</italic>
= 5); the 9- to 11-year-olds, hereafter
<italic>older children</italic>
, had a mean age of 132 months (
<italic>SD</italic>
= 11). The younger children had 1.0 years of music lessons on average (
<italic>SD</italic>
= 1.4); the older children had 1.7 years (
<italic>SD</italic>
= 1.8). In both instances, the distribution of music lessons was skewed positively. Music training was included as a between-subjects dummy variable in the statistical analyses because it can predict better long-term memory for novel melodies [
<xref rid="pone.0187115.ref006" ref-type="bibr">6</xref>
,
<xref rid="pone.0187115.ref027" ref-type="bibr">27</xref>
,
<xref rid="pone.0187115.ref028" ref-type="bibr">28</xref>
]. Specifically, younger children were considered to be musically trained if they had at least 1 year of music lessons (
<italic>n</italic>
= 13). Older children were classified as musically trained if they had at least 2 years (
<italic>n</italic>
= 15). An additional five children were recruited and tested but excluded from the final sample due to technical difficulties (
<italic>n</italic>
= 2) or experimental error (
<italic>n</italic>
= 3). Children received a gift card as a means of thanking families for participating.</p>
<p>Adults (age range: 16–20 years) were recruited from a freshman psychology course and received partial course credit. They had, on average, 5.9 years of music lessons (
<italic>SD</italic>
= 7.0). As with the older children, the adults were considered to be musically trained if they had at least 2 years of music lessons (
<italic>n</italic>
= 23). This classification scheme has been used previously when adults were recruited without regard to music training [
<xref rid="pone.0187115.ref005" ref-type="bibr">5</xref>
,
<xref rid="pone.0187115.ref006" ref-type="bibr">6</xref>
,
<xref rid="pone.0187115.ref029" ref-type="bibr">29</xref>
,
<xref rid="pone.0187115.ref030" ref-type="bibr">30</xref>
].</p>
<p>More detailed information about music training for the three age groups is provided in
<xref ref-type="supplementary-material" rid="pone.0187115.s001">S1 File</xref>
.</p>
</sec>
<sec id="sec004">
<title>Stimuli</title>
<p>The 24 stimulus melodies were originally excerpted from collections of British and Irish folk tunes, recorded with MIDI in a piano timbre, and chosen because they were tonal (Western) sounding, relatively simple, yet unfamiliar [
<xref rid="pone.0187115.ref005" ref-type="bibr">5</xref>
,
<xref rid="pone.0187115.ref006" ref-type="bibr">6</xref>
]. To accommodate the attention span of children, we reduced the duration of the test session by half (to 30 min) by shortening the melodies, which were approximately 30 s each (12–16 measures, three to four phrases), to 13–19 s, typically by eliminating the third and fourth phrases. Each melody was saved as a high-quality sound file twice: once with a median pitch of G4 (the G above middle C), and again with a median pitch of C#5 (6 semitones higher). Centering melodies in this manner ensured that different stimulus melodies at the low (or high) pitch level had notes drawn from different scales.</p>
</sec>
<sec id="sec005">
<title>Procedure</title>
<p>Participants were tested individually in a double-walled sound-attenuating booth. They sat in front of an iMac computer, which presented stimuli and recorded responses via customized software created with PsyScript [
<xref rid="pone.0187115.ref031" ref-type="bibr">31</xref>
]. The melodies were presented over headphones (Sony MDR-NC6). The procedure was similar to that used previously [
<xref rid="pone.0187115.ref005" ref-type="bibr">5</xref>
]. Before the test session began, participants heard multiple versions of
<italic>Happy Birthday</italic>
that differed in performance (surface) features (i.e., key and tempo), to demonstrate that a melody’s identity is invariant across such changes. Participants of all ages readily understood the point. The actual test session had an exposure phase, followed by a brief break and a recognition phase. In both phases, trials were self-paced by pressing the space bar, and ratings were made on 6-point numerical scales. The experimenter sat with children in the booth. Adult participants were tested independently.</p>
<p>In the exposure phase, participants heard 12 melodies, selected randomly from the set of 24. Each was presented twice, in one random order followed by a second random order (no direct repetitions). Half of the melodies (selected randomly) were presented in the lower key. The other half were presented in the higher key. For each presentation of each melody, participants were required to provide a liking rating to ensure that they listened to each melody.</p>
<p>In the recognition phase, participants heard all 24 melodies in random order. Their task was to rate whether they heard the tunes during the first part of the study. They were reminded that a tune’s identity is independent of changes in performance. The 12 new melodies were divided equally and randomly between the lower and higher keys. Six of the old melodies, selected randomly, were presented in the same key at test as exposure. The other six were transposed, with three of the previously higher and lower melodies (selected randomly) presented in the lower and higher keys, respectively. The design ensured that differences among melodies in inherent memorability did not influence the results, and that overall pitch height could not be used to identify whether a melody was old or transposed. For child participants, pictures were used instead of numbers to label the rating scale, with the smallest representing “definitely new” (corresponding to a rating of 1), and the largest representing “definitely old” (corresponding to a rating of 6). For adults, the rating scale had words and numbers. Between the exposure and test phases, there was a break of approximately 10 min during which children drew pictures with the experimenter, and adults completed a questionnaire that asked for demographic information and history of music training. A similar questionnaire was completed by a parent for each child.</p>
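A hypothetical Python sketch of the counterbalancing described in the two preceding paragraphs (the actual experiment ran in PsyScript; identifiers and structure here are illustrative only): 12 of 24 melodies serve as old items, half heard in each key at exposure; at test, six old melodies keep their key while three previously low and three previously high melodies swap keys, and the new melodies are split evenly between keys.

    import random

    def assign_conditions(melody_ids):
        """Illustrative condition assignment for 24 melody identifiers."""
        shuffled = random.sample(list(melody_ids), 24)
        old, new = shuffled[:12], shuffled[12:]

        # Exposure phase: half of the old melodies in the lower key, half in the higher.
        exposure_key = dict(zip(old, ["low"] * 6 + ["high"] * 6))

        # Test phase: 3 previously low and 3 previously high old melodies are transposed.
        low_old = [m for m in old if exposure_key[m] == "low"]
        high_old = [m for m in old if exposure_key[m] == "high"]
        transposed = set(random.sample(low_old, 3) + random.sample(high_old, 3))

        test_key = {}
        for m in old:
            if m in transposed:
                test_key[m] = "high" if exposure_key[m] == "low" else "low"
            else:
                test_key[m] = exposure_key[m]
        # New melodies are divided equally between the two keys.
        test_key.update(zip(new, ["low"] * 6 + ["high"] * 6))
        return old, new, exposure_key, test_key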
</sec>
</sec>
<sec sec-type="results" id="sec006">
<title>Results</title>
<p>For each listener, recognition ratings were converted to scores that measured area under the curve (AUC) [
<xref rid="pone.0187115.ref006" ref-type="bibr">6</xref>
]. AUC scores provide a bias-free measure of recognition accuracy. Recognition is considered perfect (AUC score = 1.0) when all ratings for old melodies are higher than all ratings for new melodies. For example, a listener has perfect recognition if all old melodies are rated 5 or 6 and all new melodies are rated 1 or 2 (no bias), or if all old melodies are rated 2 and all new melodies are rated 1 (a conservative bias). In both instances, ratings perfectly distinguish old from new melodies. A score of 0.5 indicates chance performance, such that ratings for old and new melodies are indistinguishable (no recognition).</p>
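A minimal sketch (assumed, not the authors' analysis code) of a rank-based AUC consistent with the description above: the probability that a randomly chosen old melody receives a higher recognition rating than a randomly chosen new one, with ties counted as one half. It reproduces the examples in the paragraph, where perfect recognition yields 1.0 regardless of bias and indistinguishable ratings yield 0.5.

    def auc(old_ratings, new_ratings):
        """Area under the ROC curve from recognition ratings (ties count as 0.5)."""
        wins = sum(1.0 if o > n else 0.5 if o == n else 0.0
                   for o in old_ratings for n in new_ratings)
        return wins / (len(old_ratings) * len(new_ratings))

    print(auc([5, 6, 6, 5], [1, 2, 2, 1]))  # 1.0: perfect recognition, no bias
    print(auc([2, 2, 2, 2], [1, 1, 1, 1]))  # 1.0: perfect recognition, conservative bias
    print(auc([3, 4, 3, 4], [4, 3, 4, 3]))  # 0.5: old and new indistinguishable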
<p>An overall AUC score was calculated for each participant. Descriptive statistics are illustrated in
<xref ref-type="fig" rid="pone.0187115.g001">Fig 1</xref>
. Data are provided in
<xref ref-type="supplementary-material" rid="pone.0187115.s002">S2 File</xref>
. One-sample
<italic>t</italic>
-tests confirmed that performance was significantly better than chance for adults,
<italic>t</italic>
(31) = 17.01,
<italic>p</italic>
< .001, older children,
<italic>t</italic>
(33) = 10.95,
<italic>p</italic>
< .001, and younger children,
<italic>t</italic>
(31) = 6.66,
<italic>p</italic>
< .001. A three-way ANOVA, with age group, gender, and music training as independent variables, confirmed that there was a difference between groups,
<italic>F</italic>
(2, 86) = 7.08,
<italic>p</italic>
= .001, η
<sup>2</sup>
= .112. Adults performed better than older children,
<italic>p</italic>
= .036, and younger children,
<italic>p</italic>
< .001, and older children performed better than younger children,
<italic>p</italic>
= .013 (Tukey’s test). Females also outperformed males,
<italic>F</italic>
(1, 86) = 8.31,
<italic>p</italic>
= .005, η
<sup>2</sup>
= .065. There was no main effect of music training,
<italic>p</italic>
> .2, no two-way interactions,
<italic>F</italic>
s < 1, and no three-way interaction,
<italic>p</italic>
> .2.</p>
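A hedged Python illustration of the chance-level comparison reported above (not the original analysis script; the scores shown are placeholders, not real data): each group's AUC scores can be tested against the chance value of 0.5 with a one-sample t-test.

    from scipy import stats

    auc_scores = [0.92, 0.85, 0.78, 0.88, 0.95, 0.81]  # placeholder values only
    t_stat, p_value = stats.ttest_1samp(auc_scores, popmean=0.5)
    print(f"t({len(auc_scores) - 1}) = {t_stat:.2f}, p = {p_value:.4f}")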
<fig id="pone.0187115.g001" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0187115.g001</object-id>
<label>Fig 1</label>
<caption>
<title>Mean overall AUC scores as a function of age group and gender.</title>
<p>Adults recognized the melodies better than older children, who had better recognition than younger children. Across age groups, female participants had better recognition than males. Error bars are standard errors.</p>
</caption>
<graphic xlink:href="pone.0187115.g001"></graphic>
</fig>
<p>We then asked whether some old melodies were remembered better than others: specifically, whether those presented in the same key at test and exposure were recognized better than those that were transposed. For each listener, we calculated AUC scores separately for old-same and old-transposed melodies. Descriptive statistics are illustrated in
<xref ref-type="fig" rid="pone.0187115.g002">Fig 2</xref>
. A mixed-design ANOVA with key (same or transposed) as a repeated measure and age group, gender, and music training as between-subjects variables uncovered a main effect of key,
<italic>F</italic>
(1, 86) = 23.94,
<italic>p</italic>
< .001, partial η
<sup>2</sup>
= .218. Old melodies that were reheard in their original key (
<italic>M</italic>
= .79,
<italic>SD</italic>
= .16) were remembered better than their transposed counterparts (
<italic>M</italic>
= .71,
<italic>SD</italic>
= .16). Main effects of age,
<italic>p</italic>
= .001, and gender,
<italic>p</italic>
= .005, were again evident, but there was no hint of a two-way interaction between key-change and age, and no other two-way interactions,
<italic>F</italic>
s < 1. In other words, transposing melodies from exposure to test had a similar detrimental effect on recognition for all listeners. There were also no higher-order interactions,
<italic>p</italic>
s > .1.</p>
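The within-subject key effect can be sketched in the same spirit. The paper reports a mixed-design ANOVA; the simpler example below tests only the paired difference between same-key and transposed AUC scores, using placeholder values.

from scipy.stats import ttest_rel

# Placeholder per-listener AUC scores (same listeners in both conditions).
auc_same = [0.82, 0.75, 0.90, 0.68, 0.79, 0.84, 0.71, 0.77]
auc_transposed = [0.74, 0.70, 0.81, 0.66, 0.69, 0.80, 0.65, 0.72]

t_stat, p_value = ttest_rel(auc_same, auc_transposed)
print(f"t({len(auc_same) - 1}) = {t_stat:.2f}, p = {p_value:.3f}")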
<fig id="pone.0187115.g002" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0187115.g002</object-id>
<label>Fig 2</label>
<caption>
<title>Mean AUC scores for same-key and transposed melodies as a function of age group.</title>
<p>Although melody recognition improved with age, the detrimental effect of the transposition was similar across age groups. Error bars are standard errors.</p>
</caption>
<graphic xlink:href="pone.0187115.g002"></graphic>
</fig>
<p>Finally, correlational analyses revealed that the magnitude of the detrimental effect of the key change (same-key AUC minus transposed AUC) was unrelated to memory for melodies (overall AUC scores) for the entire sample,
<italic>p</italic>
> .7, and when each of the three age groups was examined separately,
<italic>p</italic>
s > .1.</p>
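A sketch of this correlational analysis, again with placeholder numbers and scipy.stats.pearsonr standing in for whatever software the authors used: compute each listener's key-change cost (same-key AUC minus transposed AUC) and correlate it with overall AUC.

from scipy.stats import pearsonr

# Placeholder per-listener scores (not the study data).
auc_same = [0.82, 0.75, 0.90, 0.68, 0.79, 0.84]
auc_transposed = [0.74, 0.70, 0.81, 0.66, 0.69, 0.80]
auc_overall = [0.78, 0.72, 0.86, 0.67, 0.74, 0.82]

key_cost = [s - t for s, t in zip(auc_same, auc_transposed)]
r, p = pearsonr(key_cost, auc_overall)
print(f"r = {r:.2f}, p = {p:.3f}")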
</sec>
<sec sec-type="conclusions" id="sec007">
<title>Discussion</title>
<p>We examined long-term memory for melody and key in children and adults. As expected, adults recognized previously unfamiliar melodies better than 9- to 11-year-olds, who had better recognition than 7- to 8-year-olds. This developmental progression is likely to stem from increased informal exposure to music, and from cognitive development more generally. Although it is impossible to tease apart effects of informal exposure to music from those of maturity, by 4 years of age, musically untrained children demonstrate Western-specific knowledge of key and harmony [
<xref rid="pone.0187115.ref032" ref-type="bibr">32</xref>
], which must be learned. Similarly, Western 6-month-olds are equally adept at processing and remembering meters that are common or uncommon in their musical environment [
<xref rid="pone.0187115.ref033" ref-type="bibr">33</xref>
], but by 12 months of age, performance is better with the common meters [
<xref rid="pone.0187115.ref034" ref-type="bibr">34</xref>
]. In short, passive exposure leads to enhanced processing of music that is relevant in a listener’s native environment. Better encoding and retrieval of our stimulus melodies would also stem from age-related improvements in general cognitive abilities, including long-term, working, and short-term memory [
<xref rid="pone.0187115.ref035" ref-type="bibr">35</xref>
], executive functions [
<xref rid="pone.0187115.ref021" ref-type="bibr">21</xref>
,
<xref rid="pone.0187115.ref022" ref-type="bibr">22</xref>
], feature binding [
<xref rid="pone.0187115.ref023" ref-type="bibr">23</xref>
], and processing speed [
<xref rid="pone.0187115.ref024" ref-type="bibr">24</xref>
].</p>
<p>In prior research, adult listeners recognized previously unfamiliar melodies equally well whether the delay between exposure and test was 10 min, 1 day, or 1 week, and this result was evident in three different samples, each with approximately 100 participants [
<xref rid="pone.0187115.ref006" ref-type="bibr">6</xref>
]. Future research could determine if younger listeners show the same continuity over time. If so, developmental differences in memory for melodies would be similar regardless of the delay. Alternatively, age-related advantages in recognition could become larger or smaller as the delay increases.</p>
<p>The other major finding was that implicit memory for key was evident in all age groups. Old melodies that were in the same key at exposure
<italic>and</italic>
test were recognized better than melodies that were transposed. Moreover, the transposition negatively affected recognition similarly regardless of age, gender, or music training. This finding is consistent with results from previous studies that examined memory for the key of familiar recordings heard multiple times over a relatively long time period. Although methods and stimuli varied across studies, performance in absolute terms was similar for children and adults [
<xref rid="pone.0187115.ref002" ref-type="bibr">2</xref>
,
<xref rid="pone.0187115.ref011" ref-type="bibr">11</xref>
,
<xref rid="pone.0187115.ref012" ref-type="bibr">12</xref>
].</p>
<p>Implicit memory for key was unrelated to explicit memory for melody, which implicates separate developmental processes. In general, implicit musical knowledge appears early in development and is relatively impervious to individual differences in age and music training. For example, when Western listeners hear a sequence of chords in an established musical key, they expect the final chord to be the tonic (the chord based on
<italic>do</italic>
). Accordingly, when asked to judge whether the final chord is performed with a specific timbre, sung with a particular vowel, or consonant or dissonant, performance is slower or less accurate when the chord is the sub-dominant (based on
<italic>fa</italic>
) instead of the tonic [
<xref rid="pone.0187115.ref036" ref-type="bibr">36</xref>
]. Such
<italic>harmonic priming</italic>
effects are evident in musically trained and untrained children [
<xref rid="pone.0187115.ref037" ref-type="bibr">37</xref>
] and adults [
<xref rid="pone.0187115.ref038" ref-type="bibr">38</xref>
], and even in a patient with a severe deficit in music perception (i.e.,
<italic>amusia</italic>
) [
<xref rid="pone.0187115.ref039" ref-type="bibr">39</xref>
]. Although children find key changes more salient than do their adult counterparts [
<xref rid="pone.0187115.ref015" ref-type="bibr">15</xref>
], such age differences may be limited to tasks that rely on short-term memory, when the transposition in a same-different task is particularly noticeable, and to other measures of explicit musical knowledge.</p>
<p>Why would young listeners have implicit memory for key? On the one hand, memory for key seems counter-productive (i.e., a failure to generalize) because the identity of a melody is independent of pitch level. Moreover, nonhuman species tend to perceive and remember specific pitches rather than pitch relations [
<xref rid="pone.0187115.ref040" ref-type="bibr">40</xref>
]. On the other hand, by middle childhood, familiar recordings (e.g., themes from TV shows) have been heard multiple times at exactly the same pitch level. Singing to infants is also a universal behavior [
<xref rid="pone.0187115.ref041" ref-type="bibr">41</xref>
], and there is more pitch consistency in infant-directed singing than in infant-directed speech, with songs varying by less than one semitone on average from one week to the next [
<xref rid="pone.0187115.ref042" ref-type="bibr">42</xref>
]. Thus, memory for pitch level (or key) would enhance memory for auditory signals in general and for a caregiver’s voice in particular. Additional evidence suggests that encoding invariant features of vocalizations offered an evolutionary advantage to our ancestors. For example, identifying a speaker’s voice involves encoding pitch and formant information, which contribute to evaluations of the speaker’s attractiveness [
<xref rid="pone.0187115.ref043" ref-type="bibr">43</xref>
]. Averaging utterances across voices yields a more attractive utterance, in part due to clearer pitches brought about from reduction of aperiodic noise, but also because shifting the pitch of a voice toward the average pitch of its gender increases attractiveness [
<xref rid="pone.0187115.ref044" ref-type="bibr">44</xref>
]. An unusually high pitch can also be used to identify when another person is angry, scared, or happy, whereas a low pitch indicates sadness or tenderness [
<xref rid="pone.0187115.ref045" ref-type="bibr">45</xref>
]; effective use of these cues requires memory for the speaker’s baseline pitch. In short, music listening may co-opt adaptive mechanisms associated with processing vocalizations, mate selection, and identifying speakers and their intentions.</p>
<p>We found no effect of music training on memory for melody or key. In previous research, music training had marginal or inconsistent associations with long-term memory for melodies but not for key [
<xref rid="pone.0187115.ref005" ref-type="bibr">5</xref>
,
<xref rid="pone.0187115.ref006" ref-type="bibr">6</xref>
]. In other studies that examined long-term memory for melody with no transpositions from exposure to test [
<xref rid="pone.0187115.ref046" ref-type="bibr">46</xref>
], similarly inconsistent results emerged, with better recognition for musically trained than untrained participants in some instances [
<xref rid="pone.0187115.ref006" ref-type="bibr">6</xref>
,
<xref rid="pone.0187115.ref027" ref-type="bibr">27</xref>
,
<xref rid="pone.0187115.ref028" ref-type="bibr">28</xref>
] but not in others [
<xref rid="pone.0187115.ref047" ref-type="bibr">47</xref>
<xref rid="pone.0187115.ref049" ref-type="bibr">49</xref>
]. In tests of short-term memory for melodies presented in transposition, however, music training predicts good performance more reliably [
<xref rid="pone.0187115.ref029" ref-type="bibr">29</xref>
,
<xref rid="pone.0187115.ref050" ref-type="bibr">50</xref>
]. Thus, Halpern and Bartlett’s [
<xref rid="pone.0187115.ref046" ref-type="bibr">46</xref>
] review of melodic memory concluded that recognition advantages based on music training are most likely to be evident in short-term memory tasks that require the listener to make relatively fine but musically relevant distinctions. It is nevertheless possible that effects of music training could emerge on the present task if highly trained musicians were recruited. These individuals tend to have enhanced relative pitch (RP) perception, which allows them to detect very small mistunings to melodies [
<xref rid="pone.0187115.ref051" ref-type="bibr">51</xref>
]. It is therefore possible that professional musicians might show less of a decrement (or no decrement) in recognition when a melody is transposed from exposure to test.</p>
<p>Gender was associated with memory for melody but not with memory for key, with females exhibiting better melody recognition than males across age groups (
<xref ref-type="fig" rid="pone.0187115.g001">Fig 1</xref>
). In previous research, female advantages emerged in some music tasks, such as those that require young children to identify emotions conveyed musically [
<xref rid="pone.0187115.ref026" ref-type="bibr">26</xref>
]. These associations are not always reliable [
<xref rid="pone.0187115.ref052" ref-type="bibr">52</xref>
], however, and can disappear among older children and adults [
<xref rid="pone.0187115.ref026" ref-type="bibr">26</xref>
]. In any event, gender was of no theoretical interest. Moreover, gender had no bearing on implicit memory for key, which appears to be remarkably consistent across individuals.</p>
<p>The present study is not without limitations. For example, the stimulus melodies were drawn from a single musical genre, such that it is unclear whether the results would generalize to stimuli drawn from other Western genres or from non-Western music. The sample was similarly restricted, comprising individuals from middle-class families in a Canadian suburb. Finally, as noted above, the delay between exposure and test was short (approximately 10 min) so that participants needed to visit the laboratory only once. It is therefore unknown whether children’s memory for key would be evident after longer delays, as it is for adults [
<xref rid="pone.0187115.ref006" ref-type="bibr">6</xref>
].</p>
<p>In sum, the present findings converge with others indicating that the unusual ontogeny of AP stems from the rare ability to attach arbitrary labels to isolated tones, and not from a lack of memory for the pitch level of ecologically valid music. After hearing previously unfamiliar melodies only twice in the laboratory, even 7-year-olds remembered the stimulus tunes explicitly and their key (or pitch level) implicitly. Our failure to uncover age differences in implicit memory for key did not appear to be a consequence of a lack of power because robust age-related improvements were evident for explicit memory for melody. The results add to our knowledge of human musicality and how it develops as a function of aging and music listening. They also confirm that much knowledge of music is acquired implicitly, such that appropriate tasks are necessary to determine the extent and limits of implicit musical knowledge [
<xref rid="pone.0187115.ref036" ref-type="bibr">36</xref>
]. Future research could use similar methods to vary the magnitude of the transposition, the delay between the exposure and recognition phases, or surface features other than key (e.g., tempo, timbre).</p>
</sec>
<sec sec-type="supplementary-material" id="sec008">
<title>Supporting information</title>
<supplementary-material content-type="local-data" id="pone.0187115.s001">
<label>S1 File</label>
<caption>
<title>Details about music training.</title>
<p>(DOCX)</p>
</caption>
<media xlink:href="pone.0187115.s001.docx">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="pone.0187115.s002">
<label>S2 File</label>
<caption>
<title>Dataset.</title>
<p>(CSV)</p>
</caption>
<media xlink:href="pone.0187115.s002.csv">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
</sec>
</body>
<back>
<ref-list>
<title>References</title>
<ref id="pone.0187115.ref001">
<label>1</label>
<mixed-citation publication-type="book">
<name>
<surname>Deutsch</surname>
<given-names>D</given-names>
</name>
.
<chapter-title>Absolute pitch</chapter-title>
In:
<name>
<surname>Deutsch</surname>
<given-names>D</given-names>
</name>
, editor.
<source>The psychology of music</source>
.
<edition>3rd ed</edition>
<publisher-loc>Amsterdam</publisher-loc>
:
<publisher-name>Elsevier</publisher-name>
;
<year>2013</year>
p.
<fpage>141</fpage>
<lpage>82</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0187115.ref002">
<label>2</label>
<mixed-citation publication-type="journal">
<name>
<surname>Schellenberg</surname>
<given-names>EG</given-names>
</name>
,
<name>
<surname>Trehub</surname>
<given-names>SE</given-names>
</name>
.
<article-title>Good pitch memory is widespread</article-title>
.
<source>Psychol Sci</source>
.
<year>2003</year>
;
<volume>14</volume>
(
<issue>3</issue>
):
<fpage>262</fpage>
<lpage>6</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1111/1467-9280.03432">10.1111/1467-9280.03432</ext-link>
</comment>
<pub-id pub-id-type="pmid">12741751</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0187115.ref003">
<label>3</label>
<mixed-citation publication-type="journal">
<name>
<surname>Levitin</surname>
<given-names>DJ</given-names>
</name>
.
<article-title>Absolute memory for musical pitch: evidence from the production of learned melodies</article-title>
.
<source>Percept Psychophys</source>
.
<year>1994</year>
;
<volume>56</volume>
(
<issue>4</issue>
):
<fpage>414</fpage>
<lpage>23</lpage>
.
<pub-id pub-id-type="pmid">7984397</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0187115.ref004">
<label>4</label>
<mixed-citation publication-type="journal">
<name>
<surname>Frieler</surname>
<given-names>K</given-names>
</name>
,
<name>
<surname>Fischinger</surname>
<given-names>T</given-names>
</name>
,
<name>
<surname>Schlemmer</surname>
<given-names>K</given-names>
</name>
,
<name>
<surname>Lothwesen</surname>
<given-names>K</given-names>
</name>
,
<name>
<surname>Jakubowski</surname>
<given-names>K</given-names>
</name>
,
<name>
<surname>Müllensiefen</surname>
<given-names>D</given-names>
</name>
.
<article-title>Absolute memory for pitch: a comparative replication of Levitin’s 1994 study in six European labs</article-title>
.
<source>Music Sci</source>
.
<year>2013</year>
;
<volume>17</volume>
(
<issue>3</issue>
):
<fpage>334</fpage>
<lpage>49</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0187115.ref005">
<label>5</label>
<mixed-citation publication-type="journal">
<name>
<surname>Schellenberg</surname>
<given-names>EG</given-names>
</name>
,
<name>
<surname>Stalinski</surname>
<given-names>SM</given-names>
</name>
,
<name>
<surname>Marks</surname>
<given-names>BM</given-names>
</name>
.
<article-title>Memory for surface features of unfamiliar melodies: independent effects of changes in pitch and tempo</article-title>
.
<source>Psychol Res</source>
.
<year>2014</year>
;
<volume>78</volume>
(
<issue>1</issue>
):
<fpage>84</fpage>
<lpage>95</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1007/s00426-013-0483-y">10.1007/s00426-013-0483-y</ext-link>
</comment>
<pub-id pub-id-type="pmid">23385775</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0187115.ref006">
<label>6</label>
<mixed-citation publication-type="journal">
<name>
<surname>Schellenberg</surname>
<given-names>EG</given-names>
</name>
,
<name>
<surname>Habashi</surname>
<given-names>P</given-names>
</name>
.
<article-title>Remembering the melody and timbre, forgetting the key and tempo</article-title>
.
<source>Mem Cognit</source>
.
<year>2015</year>
;
<volume>43</volume>
(
<issue>7</issue>
):
<fpage>1021</fpage>
<lpage>31</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3758/s13421-015-0519-1">10.3758/s13421-015-0519-1</ext-link>
</comment>
<pub-id pub-id-type="pmid">25802029</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0187115.ref007">
<label>7</label>
<mixed-citation publication-type="journal">
<name>
<surname>Saffran</surname>
<given-names>JR</given-names>
</name>
,
<name>
<surname>Griepentrog</surname>
<given-names>GJ</given-names>
</name>
.
<article-title>Absolute pitch in infant auditory learning: evidence for developmental reorganization</article-title>
.
<source>Dev Psychol</source>
.
<year>2001</year>
;
<volume>37</volume>
(
<issue>1</issue>
):
<fpage>74</fpage>
<lpage>85</lpage>
.
<pub-id pub-id-type="pmid">11206435</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0187115.ref008">
<label>8</label>
<mixed-citation publication-type="journal">
<name>
<surname>Saffran</surname>
<given-names>JR</given-names>
</name>
.
<article-title>Absolute pitch in infancy and adulthood: the role of tonal structure</article-title>
.
<source>Dev Science</source>
.
<year>2003</year>
;
<volume>6</volume>
(
<issue>1</issue>
):
<fpage>35</fpage>
<lpage>43</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0187115.ref009">
<label>9</label>
<mixed-citation publication-type="journal">
<name>
<surname>Russo</surname>
<given-names>FA</given-names>
</name>
,
<name>
<surname>Windell</surname>
<given-names>DL</given-names>
</name>
,
<name>
<surname>Cuddy</surname>
<given-names>LL</given-names>
</name>
.
<article-title>Learning the “special note”: evidence for a critical period for absolute pitch acquisition</article-title>
.
<source>Music Percept</source>
.
<year>2003</year>
;
<volume>21</volume>
(
<issue>1</issue>
):
<fpage>119</fpage>
<lpage>27</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0187115.ref010">
<label>10</label>
<mixed-citation publication-type="journal">
<name>
<surname>Chin</surname>
<given-names>CS</given-names>
</name>
.
<article-title>The development of absolute pitch: a theory concerning the roles of music training at an early developmental age and individual cognitive style</article-title>
.
<source>Psychol Music</source>
.
<year>2003</year>
;
<volume>31</volume>
(
<issue>2</issue>
):
<fpage>155</fpage>
<lpage>71</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0187115.ref011">
<label>11</label>
<mixed-citation publication-type="journal">
<name>
<surname>Schellenberg</surname>
<given-names>EG</given-names>
</name>
,
<name>
<surname>Trehub</surname>
<given-names>SE</given-names>
</name>
.
<article-title>Is there an Asian advantage for pitch memory?</article-title>
<source>Music Percept</source>
.
<year>2008</year>
;
<volume>25</volume>
(
<issue>3</issue>
):
<fpage>241</fpage>
<lpage>52</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0187115.ref012">
<label>12</label>
<mixed-citation publication-type="journal">
<name>
<surname>Trehub</surname>
<given-names>SE</given-names>
</name>
,
<name>
<surname>Schellenberg</surname>
<given-names>EG</given-names>
</name>
,
<name>
<surname>Nakata</surname>
<given-names>T</given-names>
</name>
.
<article-title>Cross-cultural perspectives on pitch memory</article-title>
.
<source>J Exp Child Psychol</source>
.
<year>2008</year>
;
<volume>100</volume>
(
<issue>1</issue>
):
<fpage>40</fpage>
<lpage>52</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.jecp.2008.01.007">10.1016/j.jecp.2008.01.007</ext-link>
</comment>
<pub-id pub-id-type="pmid">18325531</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0187115.ref013">
<label>13</label>
<mixed-citation publication-type="journal">
<name>
<surname>Volkova</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Trehub</surname>
<given-names>SE</given-names>
</name>
,
<name>
<surname>Schellenberg</surname>
<given-names>EG</given-names>
</name>
.
<article-title>Infants’ memory for musical performances</article-title>
.
<source>Dev Sci</source>
.
<year>2006</year>
;
<volume>9</volume>
(
<issue>6</issue>
):
<fpage>583</fpage>
<lpage>9</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1111/j.1467-7687.2006.00536.x">10.1111/j.1467-7687.2006.00536.x</ext-link>
</comment>
<pub-id pub-id-type="pmid">17059455</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0187115.ref014">
<label>14</label>
<mixed-citation publication-type="journal">
<name>
<surname>Plantinga</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Trainor</surname>
<given-names>LJ</given-names>
</name>
.
<article-title>Memory for melody: Infants use a relative pitch code</article-title>
.
<source>Cognition</source>
.
<year>2005</year>
;
<volume>98</volume>
(
<issue>1</issue>
):
<fpage>1</fpage>
<lpage>11</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.cognition.2004.09.008">10.1016/j.cognition.2004.09.008</ext-link>
</comment>
<pub-id pub-id-type="pmid">16297673</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0187115.ref015">
<label>15</label>
<mixed-citation publication-type="journal">
<name>
<surname>Stalinski</surname>
<given-names>SM</given-names>
</name>
,
<name>
<surname>Schellenberg</surname>
<given-names>EG</given-names>
</name>
.
<article-title>Shifting perceptions: Developmental changes in judgments of melodic similarity</article-title>
.
<source>Dev Psychol</source>
.
<year>2010</year>
;
<volume>46</volume>
(
<issue>6</issue>
):
<fpage>1799</fpage>
<lpage>803</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1037/a0020658">10.1037/a0020658</ext-link>
</comment>
<pub-id pub-id-type="pmid">20822211</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0187115.ref016">
<label>16</label>
<mixed-citation publication-type="journal">
<name>
<surname>Weiss</surname>
<given-names>MW</given-names>
</name>
,
<name>
<surname>Schellenberg</surname>
<given-names>EG</given-names>
</name>
,
<name>
<surname>Trehub</surname>
<given-names>SE</given-names>
</name>
,
<name>
<surname>Dawber</surname>
<given-names>EJ</given-names>
</name>
.
<article-title>Enhanced processing of vocal melodies in childhood</article-title>
.
<source>Dev Psychol</source>
.
<year>2015</year>
;
<volume>51</volume>
(
<issue>3</issue>
):
<fpage>370</fpage>
<lpage>7</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1037/a0038784">10.1037/a0038784</ext-link>
</comment>
<pub-id pub-id-type="pmid">25706592</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0187115.ref017">
<label>17</label>
<mixed-citation publication-type="journal">
<name>
<surname>Trainor</surname>
<given-names>LJ</given-names>
</name>
,
<name>
<surname>Trehub</surname>
<given-names>SE</given-names>
</name>
.
<article-title>Key membership and implied harmony in Western tonal music: developmental perspectives</article-title>
.
<source>Percept Psychophys</source>
.
<year>1994</year>
;
<volume>56</volume>
(
<issue>2</issue>
):
<fpage>125</fpage>
<lpage>132</lpage>
.
<pub-id pub-id-type="pmid">7971113</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0187115.ref018">
<label>18</label>
<mixed-citation publication-type="journal">
<name>
<surname>Newcombe</surname>
<given-names>NS</given-names>
</name>
,
<name>
<surname>Lloyd</surname>
<given-names>ME</given-names>
</name>
,
<name>
<surname>Ratliff</surname>
<given-names>KR</given-names>
</name>
.
<article-title>Development of episodic and autobiographical memory: a cognitive neuroscience perspective</article-title>
.
<source>Adv Child Dev Behav</source>
.
<year>2007</year>
;
<volume>35</volume>
:
<fpage>37</fpage>
<lpage>85</lpage>
.
<pub-id pub-id-type="pmid">17682323</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0187115.ref019">
<label>19</label>
<mixed-citation publication-type="journal">
<name>
<surname>Cowan</surname>
<given-names>N</given-names>
</name>
.
<article-title>Working memory underpins cognitive development, learning, and education</article-title>
.
<source>Educ Psychol Rev</source>
.
<year>2014</year>
;
<volume>26</volume>
(
<issue>2</issue>
):
<fpage>197</fpage>
<lpage>223</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1007/s10648-013-9246-y">10.1007/s10648-013-9246-y</ext-link>
</comment>
<pub-id pub-id-type="pmid">25346585</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0187115.ref020">
<label>20</label>
<mixed-citation publication-type="book">
<name>
<surname>Pressley</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Borkowski</surname>
<given-names>JG</given-names>
</name>
,
<name>
<surname>Johnson</surname>
<given-names>CJ</given-names>
</name>
.
<chapter-title>The development of good strategy use: imagery and related mnemonic strategies</chapter-title>
In:
<name>
<surname>McDaniel</surname>
<given-names>MA</given-names>
</name>
,
<name>
<surname>Pressley</surname>
<given-names>M</given-names>
</name>
, editors.
<source>Imagery and related mnemonic processes: theories, individual differences, and applications</source>
.
<publisher-loc>New York</publisher-loc>
:
<publisher-name>Springer-Verlag</publisher-name>
;
<year>1987</year>
p.
<fpage>274</fpage>
<lpage>97</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0187115.ref021">
<label>21</label>
<mixed-citation publication-type="journal">
<name>
<surname>Best</surname>
<given-names>JR</given-names>
</name>
,
<name>
<surname>Miller</surname>
<given-names>PH</given-names>
</name>
.
<article-title>A developmental perspective on executive function</article-title>
.
<source>Child Dev</source>
.
<year>2010</year>
;
<volume>81</volume>
(
<issue>6</issue>
):
<fpage>1641</fpage>
<lpage>60</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1111/j.1467-8624.2010.01499.x">10.1111/j.1467-8624.2010.01499.x</ext-link>
</comment>
<pub-id pub-id-type="pmid">21077853</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0187115.ref022">
<label>22</label>
<mixed-citation publication-type="journal">
<name>
<surname>Zelazo</surname>
<given-names>PD</given-names>
</name>
,
<name>
<surname>Carlson</surname>
<given-names>SM</given-names>
</name>
.
<article-title>Hot and cool executive function in childhood and adolescence: development and plasticity</article-title>
.
<source>Child Dev Perspect</source>
.
<year>2012</year>
;
<volume>6</volume>
(
<issue>4</issue>
):
<fpage>354</fpage>
<lpage>60</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0187115.ref023">
<label>23</label>
<mixed-citation publication-type="journal">
<name>
<surname>Cowan</surname>
<given-names>N</given-names>
</name>
,
<name>
<surname>Naveh-Benjamin</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Kilb</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Saults</surname>
<given-names>JS</given-names>
</name>
.
<article-title>Life-span development of visual working memory: when is feature binding difficult?</article-title>
<source>Dev Psychol</source>
.
<year>2006</year>
;
<volume>42</volume>
(
<issue>6</issue>
):
<fpage>1089</fpage>
<lpage>102</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1037/0012-1649.42.6.1089">10.1037/0012-1649.42.6.1089</ext-link>
</comment>
<pub-id pub-id-type="pmid">17087544</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0187115.ref024">
<label>24</label>
<mixed-citation publication-type="journal">
<name>
<surname>Kail</surname>
<given-names>RV</given-names>
</name>
,
<name>
<surname>Ferrer</surname>
<given-names>E</given-names>
</name>
.
<article-title>Processing speed in childhood and adolescence: longitudinal models for examining developmental change</article-title>
.
<source>Child Dev</source>
.
<year>2007</year>
;
<volume>78</volume>
(
<issue>6</issue>
):
<fpage>1760</fpage>
<lpage>70</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1111/j.1467-8624.2007.01088.x">10.1111/j.1467-8624.2007.01088.x</ext-link>
</comment>
<pub-id pub-id-type="pmid">17988319</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0187115.ref025">
<label>25</label>
<mixed-citation publication-type="book">
<name>
<surname>Thelan</surname>
<given-names>E</given-names>
</name>
,
<name>
<surname>Smith</surname>
<given-names>LE</given-names>
</name>
.
<chapter-title>Dynamic systems theories</chapter-title>
In:
<name>
<surname>Damon</surname>
<given-names>W</given-names>
</name>
,
<name>
<surname>Lerner</surname>
<given-names>RM</given-names>
</name>
, editors.
<source>Handbook of child psychology: Vol. 1. Theoretical models of human development</source>
.
<edition>6th ed</edition>
<publisher-loc>Hoboken NJ</publisher-loc>
:
<publisher-name>Wiley</publisher-name>
;
<year>2006</year>
p.
<fpage>258</fpage>
<lpage>312</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0187115.ref026">
<label>26</label>
<mixed-citation publication-type="journal">
<name>
<surname>Hunter</surname>
<given-names>PG</given-names>
</name>
,
<name>
<surname>Schellenberg</surname>
<given-names>EG</given-names>
</name>
,
<name>
<surname>Stalinski</surname>
<given-names>SM</given-names>
</name>
.
<article-title>Liking and identifying emotionally expressive music: age and gender differences</article-title>
.
<source>J Exp Child Psychol</source>
.
<year>2011</year>
;
<volume>110</volume>
(
<issue>1</issue>
):
<fpage>80</fpage>
<lpage>93</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.jecp.2011.04.001">10.1016/j.jecp.2011.04.001</ext-link>
</comment>
<pub-id pub-id-type="pmid">21530980</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0187115.ref027">
<label>27</label>
<mixed-citation publication-type="journal">
<name>
<surname>Weiss</surname>
<given-names>MW</given-names>
</name>
,
<name>
<surname>Trehub</surname>
<given-names>SE</given-names>
</name>
,
<name>
<surname>Schellenberg</surname>
<given-names>EG</given-names>
</name>
.
<article-title>Something in the way she sings: enhanced memory for vocal melodies</article-title>
.
<source>Psychol Sci</source>
.
<year>2012</year>
;
<volume>23</volume>
(
<issue>10</issue>
):
<fpage>1074</fpage>
<lpage>8</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1177/0956797612442552">10.1177/0956797612442552</ext-link>
</comment>
<pub-id pub-id-type="pmid">22894936</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0187115.ref028">
<label>28</label>
<mixed-citation publication-type="journal">
<name>
<surname>Weiss</surname>
<given-names>MW</given-names>
</name>
,
<name>
<surname>Vanzella</surname>
<given-names>P</given-names>
</name>
,
<name>
<surname>Schellenberg</surname>
<given-names>EG</given-names>
</name>
,
<name>
<surname>Trehub</surname>
<given-names>SE</given-names>
</name>
.
<article-title>Pianists exhibit enhanced memory for vocal melodies but not piano melodies</article-title>
.
<source>Q J Exp Psychol</source>
.
<year>2015</year>
;
<volume>68</volume>
(
<issue>5</issue>
):
<fpage>866</fpage>
<lpage>77</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0187115.ref029">
<label>29</label>
<mixed-citation publication-type="journal">
<name>
<surname>Halpern</surname>
<given-names>AR</given-names>
</name>
,
<name>
<surname>Bartlett</surname>
<given-names>JC</given-names>
</name>
,
<name>
<surname>Dowling</surname>
<given-names>WJ</given-names>
</name>
.
<article-title>Aging and experience in the recognition of musical transpositions</article-title>
.
<source>Psychol Aging</source>
.
<year>1995</year>
;
<volume>10</volume>
(
<issue>3</issue>
):
<fpage>325</fpage>
<lpage>42</lpage>
.
<pub-id pub-id-type="pmid">8527054</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0187115.ref030">
<label>30</label>
<mixed-citation publication-type="journal">
<name>
<surname>Dowling</surname>
<given-names>WJ</given-names>
</name>
,
<name>
<surname>Bartlett</surname>
<given-names>JC</given-names>
</name>
,
<name>
<surname>Halpern</surname>
<given-names>AR</given-names>
</name>
,
<name>
<surname>Andrews</surname>
<given-names>MW</given-names>
</name>
.
<article-title>Melody recognition at fast and slow tempos: effects of age, experience, and familiarity</article-title>
.
<source>Percept Psychophys</source>
.
<year>2008</year>
;
<volume>70</volume>
(
<issue>3</issue>
):
<fpage>496</fpage>
<lpage>502</lpage>
.
<pub-id pub-id-type="pmid">18459260</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0187115.ref031">
<label>31</label>
<mixed-citation publication-type="other">Slavin S. PsyScript 3. Version 3.2.1 [software]. Lancaster University. 2014 [cited May 9, 2017].
<ext-link ext-link-type="uri" xlink:href="http://www.lancaster.ac.uk/psychology/research/research-software/psyscript3/">http://www.lancaster.ac.uk/psychology/research/research-software/psyscript3/</ext-link>
</mixed-citation>
</ref>
<ref id="pone.0187115.ref032">
<label>32</label>
<mixed-citation publication-type="journal">
<name>
<surname>Corrigall</surname>
<given-names>KA</given-names>
</name>
,
<name>
<surname>Trainor</surname>
<given-names>LJ</given-names>
</name>
.
<article-title>Musical enculturation in preschool children: acquisition of key and harmonic knowledge</article-title>
.
<source>Music Percept</source>
.
<year>2010</year>
;
<volume>28</volume>
(
<issue>2</issue>
):
<fpage>195</fpage>
<lpage>200</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0187115.ref033">
<label>33</label>
<mixed-citation publication-type="journal">
<name>
<surname>Hannon</surname>
<given-names>EE</given-names>
</name>
,
<name>
<surname>Trehub</surname>
<given-names>SE</given-names>
</name>
.
<article-title>Metrical categories in infancy and adulthood</article-title>
.
<source>Psychol Sci</source>
.
<year>2005</year>
;
<volume>16</volume>
(
<issue>1</issue>
):
<fpage>48</fpage>
<lpage>55</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1111/j.0956-7976.2005.00779.x">10.1111/j.0956-7976.2005.00779.x</ext-link>
</comment>
<pub-id pub-id-type="pmid">15660851</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0187115.ref034">
<label>34</label>
<mixed-citation publication-type="journal">
<name>
<surname>Hannon</surname>
<given-names>EE</given-names>
</name>
,
<name>
<surname>Trehub</surname>
<given-names>SE</given-names>
</name>
.
<article-title>Tuning in to musical rhythms: infants learn more readily than adults</article-title>
.
<source>Proc Natl Acad Sci U S A</source>
.
<year>2005</year>
;
<volume>102</volume>
(
<issue>35</issue>
):
<fpage>12639</fpage>
<lpage>43</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1073/pnas.0504254102">10.1073/pnas.0504254102</ext-link>
</comment>
<pub-id pub-id-type="pmid">16105946</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0187115.ref035">
<label>35</label>
<mixed-citation publication-type="book">
<name>
<surname>Schneider</surname>
<given-names>W</given-names>
</name>
.
<source>Memory development from early childhood through emerging adulthood</source>
.
<edition>1st ed</edition>
<publisher-loc>New York</publisher-loc>
:
<publisher-name>Springer</publisher-name>
;
<year>2015</year>
.</mixed-citation>
</ref>
<ref id="pone.0187115.ref036">
<label>36</label>
<mixed-citation publication-type="journal">
<name>
<surname>Bigand</surname>
<given-names>E</given-names>
</name>
,
<name>
<surname>Poulin-Charronnat</surname>
<given-names>B</given-names>
</name>
.
<article-title>Are we “experienced listeners”? a review of the musical capacities that do not depend on formal musical training</article-title>
.
<source>Cognition</source>
.
<year>2006</year>
;
<volume>100</volume>
(
<issue>1</issue>
):
<fpage>100</fpage>
<lpage>30</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.cognition.2005.11.007">10.1016/j.cognition.2005.11.007</ext-link>
</comment>
<pub-id pub-id-type="pmid">16412412</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0187115.ref037">
<label>37</label>
<mixed-citation publication-type="journal">
<name>
<surname>Schellenberg</surname>
<given-names>EG</given-names>
</name>
,
<name>
<surname>Bigand</surname>
<given-names>E</given-names>
</name>
,
<name>
<surname>Poulin-Charronnat</surname>
<given-names>B</given-names>
</name>
,
<name>
<surname>Garnier</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Stevens</surname>
<given-names>C</given-names>
</name>
.
<article-title>Children’s implicit knowledge of harmony in Western music</article-title>
.
<source>Dev Sci</source>
.
<year>2005</year>
;
<volume>8</volume>
(
<issue>6</issue>
):
<fpage>551</fpage>
<lpage>66</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1111/j.1467-7687.2005.00447.x">10.1111/j.1467-7687.2005.00447.x</ext-link>
</comment>
<pub-id pub-id-type="pmid">16246247</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0187115.ref038">
<label>38</label>
<mixed-citation publication-type="journal">
<name>
<surname>Bigand</surname>
<given-names>E</given-names>
</name>
,
<name>
<surname>Tillmann</surname>
<given-names>B</given-names>
</name>
,
<name>
<surname>Poulin</surname>
<given-names>B</given-names>
</name>
,
<name>
<surname>D'Adamo</surname>
<given-names>DA</given-names>
</name>
,
<name>
<surname>Madurell</surname>
<given-names>F</given-names>
</name>
.
<article-title>The effect of harmonic context on phoneme monitoring in vocal music</article-title>
.
<source>Cognition</source>
.
<year>2001</year>
;
<volume>81</volume>
(
<issue>1</issue>
):
<fpage>B11</fpage>
<lpage>20</lpage>
.
<pub-id pub-id-type="pmid">11525485</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0187115.ref039">
<label>39</label>
<mixed-citation publication-type="journal">
<name>
<surname>Tillmann</surname>
<given-names>B</given-names>
</name>
,
<name>
<surname>Peretz</surname>
<given-names>I</given-names>
</name>
,
<name>
<surname>Bigand</surname>
<given-names>E</given-names>
</name>
,
<name>
<surname>Gosselin</surname>
<given-names>N</given-names>
</name>
.
<article-title>Harmonic priming in an amusic patient: the power of implicit tasks</article-title>
.
<source>Cogn Neuropsychol</source>
.
<year>2007</year>
;
<volume>24</volume>
(
<issue>6</issue>
):
<fpage>603</fpage>
<lpage>22</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1080/02643290701609527">10.1080/02643290701609527</ext-link>
</comment>
<pub-id pub-id-type="pmid">18416511</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0187115.ref040">
<label>40</label>
<mixed-citation publication-type="journal">
<name>
<surname>Hauser</surname>
<given-names>MD</given-names>
</name>
,
<name>
<surname>McDermott</surname>
<given-names>J</given-names>
</name>
.
<article-title>The evolution of the music faculty: a comparative perspective</article-title>
.
<source>Nat Neurosci</source>
.
<year>2003</year>
;
<volume>6</volume>
(
<issue>7</issue>
):
<fpage>663</fpage>
<lpage>8</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1038/nn1080">10.1038/nn1080</ext-link>
</comment>
<pub-id pub-id-type="pmid">12830156</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0187115.ref041">
<label>41</label>
<mixed-citation publication-type="book">
<name>
<surname>Trehub</surname>
<given-names>SE</given-names>
</name>
,
<name>
<surname>Trainor</surname>
<given-names>L</given-names>
</name>
.
<chapter-title>Singing to infants: lullabies and play songs</chapter-title>
In:
<name>
<surname>Rovee-Collier</surname>
<given-names>CK</given-names>
</name>
,
<name>
<surname>Lipsitt</surname>
<given-names>LP</given-names>
</name>
,
<name>
<surname>Hayne</surname>
<given-names>H</given-names>
</name>
, editors.
<source>Advances in infancy research</source>
.
<publisher-loc>London</publisher-loc>
:
<publisher-name>Ablex</publisher-name>
;
<year>1998</year>
; p.
<fpage>43</fpage>
<lpage>78</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0187115.ref042">
<label>42</label>
<mixed-citation publication-type="journal">
<name>
<surname>Bergeson</surname>
<given-names>TR</given-names>
</name>
,
<name>
<surname>Trehub</surname>
<given-names>SE</given-names>
</name>
.
<article-title>Absolute pitch and tempo in mothers' songs to infants</article-title>
.
<source>Psychol Sci</source>
.
<year>2002</year>
;
<volume>13</volume>
(
<issue>1</issue>
):
<fpage>72</fpage>
<lpage>5</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1111/1467-9280.00413">10.1111/1467-9280.00413</ext-link>
</comment>
<pub-id pub-id-type="pmid">11892783</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0187115.ref043">
<label>43</label>
<mixed-citation publication-type="book">
<name>
<surname>McDermott</surname>
<given-names>JH</given-names>
</name>
.
<chapter-title>Auditory preferences and aesthetics: Music, voices, and everyday sounds</chapter-title>
In:
<name>
<surname>Dolan</surname>
<given-names>R</given-names>
</name>
,
<name>
<surname>Sharot</surname>
<given-names>T</given-names>
</name>
, editors.
<source>Neuroscience of preference and choice: Cognitive and Neural Mechanisms</source>
.
<publisher-loc>London</publisher-loc>
:
<publisher-name>Academic Press</publisher-name>
;
<year>2012</year>
p.
<fpage>227</fpage>
<lpage>56</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0187115.ref044">
<label>44</label>
<mixed-citation publication-type="journal">
<name>
<surname>Bruckert</surname>
<given-names>L</given-names>
</name>
,
<name>
<surname>Bestelmeyer</surname>
<given-names>P</given-names>
</name>
,
<name>
<surname>Latinus</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Rouger</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Charest</surname>
<given-names>I</given-names>
</name>
,
<name>
<surname>Rousselet</surname>
<given-names>GA</given-names>
</name>
,
<etal>et al</etal>
<article-title>Vocal attractiveness increases by averaging</article-title>
.
<source>Curr Biol</source>
.
<year>2010</year>
;
<volume>20</volume>
(
<issue>2</issue>
):
<fpage>116</fpage>
<lpage>20</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.cub.2009.11.034">10.1016/j.cub.2009.11.034</ext-link>
</comment>
<pub-id pub-id-type="pmid">20129047</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0187115.ref045">
<label>45</label>
<mixed-citation publication-type="journal">
<name>
<surname>Juslin</surname>
<given-names>PN</given-names>
</name>
,
<name>
<surname>Laukka</surname>
<given-names>P</given-names>
</name>
.
<article-title>Communication of emotions in vocal expression and music performance: different channels, same code?</article-title>
<source>Psychol Bull</source>
.
<year>2003</year>
;
<volume>129</volume>
(
<issue>5</issue>
):
<fpage>770</fpage>
<lpage>814</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1037/0033-2909.129.5.770">10.1037/0033-2909.129.5.770</ext-link>
</comment>
<pub-id pub-id-type="pmid">12956543</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0187115.ref046">
<label>46</label>
<mixed-citation publication-type="book">
<name>
<surname>Halpern</surname>
<given-names>AR</given-names>
</name>
,
<name>
<surname>Bartlett</surname>
<given-names>JC</given-names>
</name>
.
<chapter-title>Memory for melodies</chapter-title>
In:
<name>
<surname>Jones</surname>
<given-names>MR</given-names>
</name>
,
<name>
<surname>Fay</surname>
<given-names>RR</given-names>
</name>
,
<name>
<surname>Popper</surname>
<given-names>AN</given-names>
</name>
, editors.
<source>Music perception</source>
.
<publisher-loc>New York</publisher-loc>
:
<publisher-name>Springer</publisher-name>
;
<year>2010</year>
p.
<fpage>233</fpage>
<lpage>258</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0187115.ref047">
<label>47</label>
<mixed-citation publication-type="journal">
<name>
<surname>McAuley</surname>
<given-names>JD</given-names>
</name>
,
<name>
<surname>Stevens</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Humphreys</surname>
<given-names>MS</given-names>
</name>
.
<article-title>Play it again: did this melody occur more frequently or was it heard more recently? the role of stimulus familiarity in episodic recognition of music</article-title>
.
<source>Acta Psychol</source>
.
<year>2004</year>
;
<volume>116</volume>
(
<issue>1</issue>
):
<fpage>93</fpage>
<lpage>108</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0187115.ref048">
<label>48</label>
<mixed-citation publication-type="journal">
<name>
<surname>Halpern</surname>
<given-names>AR</given-names>
</name>
,
<name>
<surname>Müllensiefen</surname>
<given-names>D</given-names>
</name>
.
<article-title>Effects of timbre and tempo change on memory for music</article-title>
.
<source>Q J Exp Psychol</source>
.
<year>2008</year>
;
<volume>61</volume>
(
<issue>9</issue>
):
<fpage>1371</fpage>
<lpage>84</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0187115.ref049">
<label>49</label>
<mixed-citation publication-type="journal">
<name>
<surname>Weiss</surname>
<given-names>MW</given-names>
</name>
,
<name>
<surname>Trehub</surname>
<given-names>SE</given-names>
</name>
,
<name>
<surname>Schellenberg</surname>
<given-names>EG</given-names>
</name>
.
<article-title>Generality of the memory advantage for vocal melodies</article-title>
.
<source>Music Percept</source>
.
<year>2017</year>
;
<volume>34</volume>
(
<issue>3</issue>
):
<fpage>313</fpage>
<lpage>18</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0187115.ref050">
<label>50</label>
<mixed-citation publication-type="journal">
<name>
<surname>Halpern</surname>
<given-names>AR</given-names>
</name>
,
<name>
<surname>Bartlett</surname>
<given-names>JC</given-names>
</name>
,
<name>
<surname>Dowling</surname>
<given-names>WJ</given-names>
</name>
.
<article-title>Perception of mode, rhythm, and contour in unfamiliar melodies: effects of age and experience</article-title>
.
<source>Music Percept</source>
.
<year>1998</year>
;
<volume>15</volume>
(
<issue>4</issue>
):
<fpage>335</fpage>
<lpage>55</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0187115.ref051">
<label>51</label>
<mixed-citation publication-type="journal">
<name>
<surname>Schellenberg</surname>
<given-names>EG</given-names>
</name>
,
<name>
<surname>Moreno</surname>
<given-names>S</given-names>
</name>
.
<article-title>Music lessons, pitch processing, and
<italic>g</italic>
</article-title>
.
<source>Psychol Music</source>
.
<year>2010</year>
;
<volume>38</volume>
(
<issue>2</issue>
):
<fpage>209</fpage>
<lpage>221</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0187115.ref052">
<label>52</label>
<mixed-citation publication-type="journal">
<name>
<surname>Andrade</surname>
<given-names>PE</given-names>
</name>
,
<name>
<surname>Vanzella</surname>
<given-names>P</given-names>
</name>
,
<name>
<surname>Andrade</surname>
<given-names>OV</given-names>
</name>
,
<name>
<surname>Schellenberg</surname>
<given-names>EG</given-names>
</name>
.
<article-title>Associating emotions with Wagner’s music: a developmental perspective</article-title>
.
<source>Psychol Music. Advance online publication</source>
.
<year>2016</year>
<month>11</month>
<day>24</day>
;
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1177/0305735616678056">10.1177/0305735616678056</ext-link>
</comment>
</mixed-citation>
</ref>
</ref-list>
</back>
</pmc>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Wicri/Musique/explor/MusiqueCeltiqueV1/Data/Pmc/Corpus
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000F54 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Corpus/biblio.hfd -nk 000F54 | SxmlIndent | more

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Wicri/Musique
   |area=    MusiqueCeltiqueV1
   |flux=    Pmc
   |étape=   Corpus
   |type=    RBID
   |clé=     PMC:5659795
   |texte=   Memory for melody and key in childhood
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Corpus/RBID.i   -Sk "pubmed:29077726" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Corpus/biblio.hfd   \
       | NlmPubMed2Wicri -a MusiqueCeltiqueV1 
