Exploration server on music in Saarland


The speed of our mental soundtracks: Tracking the tempo of involuntary musical imagery in everyday life

Internal identifier: 000047 (Pmc/Corpus); previous: 000046; next: 000048


Authors: Kelly Jakubowski; Nicolas Farrugia; Andrea R. Halpern; Sathish K. Sankarpandi; Lauren Stewart

Source:

RBID: PMC:4624826

Abstract

The study of spontaneous and everyday cognitions is an area of rapidly growing interest. One of the most ubiquitous forms of spontaneous cognition is involuntary musical imagery (INMI), the involuntarily retrieved and repetitive mental replay of music. The present study introduced a novel method for capturing temporal features of INMI within a naturalistic setting. This method allowed for the investigation of two questions of interest to INMI researchers in a more objective way than previously possible, concerning (1) the precision of memory representations within INMI and (2) the interactions between INMI and concurrent affective state. Over the course of 4 days, INMI tempo was measured by asking participants to tap to the beat of their INMI with a wrist-worn accelerometer. Participants documented additional details regarding their INMI in a diary. Overall, the tempo of music within INMI was recalled from long-term memory in a highly veridical form, although with a regression to the mean for recalled tempo that parallels previous findings on voluntary musical imagery. A significant positive relationship was found between INMI tempo and subjective arousal, suggesting that INMI interacts with concurrent mood in a similar manner to perceived music. The results suggest several parallels between INMI and voluntary imagery, music perceptual processes, and other types of involuntary memories.


URL:
DOI: 10.3758/s13421-015-0531-5
PubMed: 26122757
PubMed Central: 4624826

Links to Exploration step

PMC:4624826

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">The speed of our mental soundtracks: Tracking the tempo of involuntary musical imagery in everyday life</title>
<author>
<name sortKey="Jakubowski, Kelly" sort="Jakubowski, Kelly" uniqKey="Jakubowski K" first="Kelly" last="Jakubowski">Kelly Jakubowski</name>
<affiliation>
<nlm:aff id="Aff1">Department of Psychology, Goldsmiths, University of London, New Cross Road, New Cross, London, SE14 6NW UK</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Farrugia, Nicolas" sort="Farrugia, Nicolas" uniqKey="Farrugia N" first="Nicolas" last="Farrugia">Nicolas Farrugia</name>
<affiliation>
<nlm:aff id="Aff1">Department of Psychology, Goldsmiths, University of London, New Cross Road, New Cross, London, SE14 6NW UK</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Halpern, Andrea R" sort="Halpern, Andrea R" uniqKey="Halpern A" first="Andrea R." last="Halpern">Andrea R. Halpern</name>
<affiliation>
<nlm:aff id="Aff2">Department of Psychology, Bucknell University, Lewisburg, PA USA</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Sankarpandi, Sathish K" sort="Sankarpandi, Sathish K" uniqKey="Sankarpandi S" first="Sathish K." last="Sankarpandi">Sathish K. Sankarpandi</name>
<affiliation>
<nlm:aff id="Aff3">School of Nursing and Health Sciences, University of Dundee, Dundee, UK</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Stewart, Lauren" sort="Stewart, Lauren" uniqKey="Stewart L" first="Lauren" last="Stewart">Lauren Stewart</name>
<affiliation>
<nlm:aff id="Aff1">Department of Psychology, Goldsmiths, University of London, New Cross Road, New Cross, London, SE14 6NW UK</nlm:aff>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">26122757</idno>
<idno type="pmc">4624826</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4624826</idno>
<idno type="RBID">PMC:4624826</idno>
<idno type="doi">10.3758/s13421-015-0531-5</idno>
<date when="2015">2015</date>
<idno type="wicri:Area/Pmc/Corpus">000047</idno>
<idno type="wicri:explorRef" wicri:stream="Pmc" wicri:step="Corpus" wicri:corpus="PMC">000047</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">The speed of our mental soundtracks: Tracking the tempo of involuntary musical imagery in everyday life</title>
<author>
<name sortKey="Jakubowski, Kelly" sort="Jakubowski, Kelly" uniqKey="Jakubowski K" first="Kelly" last="Jakubowski">Kelly Jakubowski</name>
<affiliation>
<nlm:aff id="Aff1">Department of Psychology, Goldsmiths, University of London, New Cross Road, New Cross, London, SE14 6NW UK</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Farrugia, Nicolas" sort="Farrugia, Nicolas" uniqKey="Farrugia N" first="Nicolas" last="Farrugia">Nicolas Farrugia</name>
<affiliation>
<nlm:aff id="Aff1">Department of Psychology, Goldsmiths, University of London, New Cross Road, New Cross, London, SE14 6NW UK</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Halpern, Andrea R" sort="Halpern, Andrea R" uniqKey="Halpern A" first="Andrea R." last="Halpern">Andrea R. Halpern</name>
<affiliation>
<nlm:aff id="Aff2">Department of Psychology, Bucknell University, Lewisburg, PA USA</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Sankarpandi, Sathish K" sort="Sankarpandi, Sathish K" uniqKey="Sankarpandi S" first="Sathish K." last="Sankarpandi">Sathish K. Sankarpandi</name>
<affiliation>
<nlm:aff id="Aff3">School of Nursing and Health Sciences, University of Dundee, Dundee, UK</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Stewart, Lauren" sort="Stewart, Lauren" uniqKey="Stewart L" first="Lauren" last="Stewart">Lauren Stewart</name>
<affiliation>
<nlm:aff id="Aff1">Department of Psychology, Goldsmiths, University of London, New Cross Road, New Cross, London, SE14 6NW UK</nlm:aff>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Memory & Cognition</title>
<idno type="ISSN">0090-502X</idno>
<idno type="eISSN">1532-5946</idno>
<imprint>
<date when="2015">2015</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>The study of spontaneous and everyday cognitions is an area of rapidly growing interest. One of the most ubiquitous forms of spontaneous cognition is involuntary musical imagery (INMI), the involuntarily retrieved and repetitive mental replay of music. The present study introduced a novel method for capturing temporal features of INMI within a naturalistic setting. This method allowed for the investigation of two questions of interest to INMI researchers in a more objective way than previously possible, concerning (1) the precision of memory representations within INMI and (2) the interactions between INMI and concurrent affective state. Over the course of 4 days, INMI tempo was measured by asking participants to tap to the beat of their INMI with a wrist-worn accelerometer. Participants documented additional details regarding their INMI in a diary. Overall, the tempo of music within INMI was recalled from long-term memory in a highly veridical form, although with a regression to the mean for recalled tempo that parallels previous findings on voluntary musical imagery. A significant positive relationship was found between INMI tempo and subjective arousal, suggesting that INMI interacts with concurrent mood in a similar manner to perceived music. The results suggest several parallels between INMI and voluntary imagery, music perceptual processes, and other types of involuntary memories.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Bailes, Fa" uniqKey="Bailes F">FA Bailes</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bailes, F" uniqKey="Bailes F">F Bailes</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Beaman, Cp" uniqKey="Beaman C">CP Beaman</name>
</author>
<author>
<name sortKey="Williams, Ti" uniqKey="Williams T">TI Williams</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Beaman, Cp" uniqKey="Beaman C">CP Beaman</name>
</author>
<author>
<name sortKey="Williams, Ti" uniqKey="Williams T">TI Williams</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Beaty, Re" uniqKey="Beaty R">RE Beaty</name>
</author>
<author>
<name sortKey="Burgin, Cj" uniqKey="Burgin C">CJ Burgin</name>
</author>
<author>
<name sortKey="Nusbaum, Ec" uniqKey="Nusbaum E">EC Nusbaum</name>
</author>
<author>
<name sortKey="Kwapil, Tr" uniqKey="Kwapil T">TR Kwapil</name>
</author>
<author>
<name sortKey="Hodges, Da" uniqKey="Hodges D">DA Hodges</name>
</author>
<author>
<name sortKey="Silvia, Pj" uniqKey="Silvia P">PJ Silvia</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Benoit, Ce" uniqKey="Benoit C">CE Benoit</name>
</author>
<author>
<name sortKey="Dalla Bella, S" uniqKey="Dalla Bella S">S Dalla Bella</name>
</author>
<author>
<name sortKey="Farrugia, N" uniqKey="Farrugia N">N Farrugia</name>
</author>
<author>
<name sortKey="Obrig, H" uniqKey="Obrig H">H Obrig</name>
</author>
<author>
<name sortKey="Mainka, S" uniqKey="Mainka S">S Mainka</name>
</author>
<author>
<name sortKey="Kotz, Sa" uniqKey="Kotz S">SA Kotz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Berntsen, D" uniqKey="Berntsen D">D Berntsen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Berntsen, D" uniqKey="Berntsen D">D Berntsen</name>
</author>
<author>
<name sortKey="Hall, Nm" uniqKey="Hall N">NM Hall</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Berntsen, D" uniqKey="Berntsen D">D Berntsen</name>
</author>
<author>
<name sortKey="Staugaard, Sr" uniqKey="Staugaard S">SR Staugaard</name>
</author>
<author>
<name sortKey="S Rensen, Lm" uniqKey="S Rensen L">LM Sørensen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brown, S" uniqKey="Brown S">S Brown</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Byron, Tp" uniqKey="Byron T">TP Byron</name>
</author>
<author>
<name sortKey="Fowles, Lc" uniqKey="Fowles L">LC Fowles</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Edworthy, J" uniqKey="Edworthy J">J Edworthy</name>
</author>
<author>
<name sortKey="Waring, H" uniqKey="Waring H">H Waring</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ericsson, Ka" uniqKey="Ericsson K">KA Ericsson</name>
</author>
<author>
<name sortKey="Simon, Ha" uniqKey="Simon H">HA Simon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Floridou, G" uniqKey="Floridou G">G Floridou</name>
</author>
<author>
<name sortKey="Mullensiefen, D" uniqKey="Mullensiefen D">D Müllensiefen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Floridou, G" uniqKey="Floridou G">G Floridou</name>
</author>
<author>
<name sortKey="Williamson, Vj" uniqKey="Williamson V">VJ Williamson</name>
</author>
<author>
<name sortKey="Mullensiefen, D" uniqKey="Mullensiefen D">D Müllensiefen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Frieler, K" uniqKey="Frieler K">K Frieler</name>
</author>
<author>
<name sortKey="Fischinger, T" uniqKey="Fischinger T">T Fischinger</name>
</author>
<author>
<name sortKey="Schlemmer, K" uniqKey="Schlemmer K">K Schlemmer</name>
</author>
<author>
<name sortKey="Lothwesen, K" uniqKey="Lothwesen K">K Lothwesen</name>
</author>
<author>
<name sortKey="Jakubowski, K" uniqKey="Jakubowski K">K Jakubowski</name>
</author>
<author>
<name sortKey="Mullensiefen, D" uniqKey="Mullensiefen D">D Müllensiefen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gagnon, L" uniqKey="Gagnon L">L Gagnon</name>
</author>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I Peretz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Grahn, Ja" uniqKey="Grahn J">JA Grahn</name>
</author>
<author>
<name sortKey="Brett, M" uniqKey="Brett M">M Brett</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Grahn, Ja" uniqKey="Grahn J">JA Grahn</name>
</author>
<author>
<name sortKey="Rowe, Jb" uniqKey="Rowe J">JB Rowe</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Griffiths, Td" uniqKey="Griffiths T">TD Griffiths</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Halpern, Ar" uniqKey="Halpern A">AR Halpern</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Halpern, Ar" uniqKey="Halpern A">AR Halpern</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Halpern, Ar" uniqKey="Halpern A">AR Halpern</name>
</author>
<author>
<name sortKey="Bartlett, Jc" uniqKey="Bartlett J">JC Bartlett</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Husain, G" uniqKey="Husain G">G Husain</name>
</author>
<author>
<name sortKey="Thompson, Wf" uniqKey="Thompson W">WF Thompson</name>
</author>
<author>
<name sortKey="Schellenberg, Eg" uniqKey="Schellenberg E">EG Schellenberg</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hyman, Ie" uniqKey="Hyman I">IE Hyman</name>
</author>
<author>
<name sortKey="Burland, Nk" uniqKey="Burland N">NK Burland</name>
</author>
<author>
<name sortKey="Duskin, Hm" uniqKey="Duskin H">HM Duskin</name>
</author>
<author>
<name sortKey="Cook, Mc" uniqKey="Cook M">MC Cook</name>
</author>
<author>
<name sortKey="Roy, Cm" uniqKey="Roy C">CM Roy</name>
</author>
<author>
<name sortKey="Mcgrath, Jc" uniqKey="Mcgrath J">JC McGrath</name>
</author>
<author>
<name sortKey="Roundhill, Rf" uniqKey="Roundhill R">RF Roundhill</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Juslin, Pn" uniqKey="Juslin P">PN Juslin</name>
</author>
<author>
<name sortKey="Laukka, P" uniqKey="Laukka P">P Laukka</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Killingsworth, Ma" uniqKey="Killingsworth M">MA Killingsworth</name>
</author>
<author>
<name sortKey="Gilbert, Dt" uniqKey="Gilbert D">DT Gilbert</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kvavilashvili, L" uniqKey="Kvavilashvili L">L Kvavilashvili</name>
</author>
<author>
<name sortKey="Mandler, G" uniqKey="Mandler G">G Mandler</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Levitin, Dj" uniqKey="Levitin D">DJ Levitin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Levitin, Dj" uniqKey="Levitin D">DJ Levitin</name>
</author>
<author>
<name sortKey="Cook, Pr" uniqKey="Cook P">PR Cook</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Liikkanen, La" uniqKey="Liikkanen L">LA Liikkanen</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lucas, Bl" uniqKey="Lucas B">BL Lucas</name>
</author>
<author>
<name sortKey="Schubert, E" uniqKey="Schubert E">E Schubert</name>
</author>
<author>
<name sortKey="Halpern, Ar" uniqKey="Halpern A">AR Halpern</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mace, Jh" uniqKey="Mace J">JH Mace</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Margulis, Eh" uniqKey="Margulis E">EH Margulis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mcauley, Jd" uniqKey="Mcauley J">JD McAuley</name>
</author>
<author>
<name sortKey="Jones, Mr" uniqKey="Jones M">MR Jones</name>
</author>
<author>
<name sortKey="Holub, S" uniqKey="Holub S">S Holub</name>
</author>
<author>
<name sortKey="Johnston, Hm" uniqKey="Johnston H">HM Johnston</name>
</author>
<author>
<name sortKey="Miller, Ns" uniqKey="Miller N">NS Miller</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mcvay, Jc" uniqKey="Mcvay J">JC McVay</name>
</author>
<author>
<name sortKey="Kane, Mj" uniqKey="Kane M">MJ Kane</name>
</author>
<author>
<name sortKey="Kwapil, Tr" uniqKey="Kwapil T">TR Kwapil</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mullensiefen, D" uniqKey="Mullensiefen D">D Müllensiefen</name>
</author>
<author>
<name sortKey="Fry, J" uniqKey="Fry J">J Fry</name>
</author>
<author>
<name sortKey="Jones, R" uniqKey="Jones R">R Jones</name>
</author>
<author>
<name sortKey="Jilka, S" uniqKey="Jilka S">S Jilka</name>
</author>
<author>
<name sortKey="Stewart, L" uniqKey="Stewart L">L Stewart</name>
</author>
<author>
<name sortKey="Williamson, V" uniqKey="Williamson V">V Williamson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="North, Ac" uniqKey="North A">AC North</name>
</author>
<author>
<name sortKey="Hargreaves, Dj" uniqKey="Hargreaves D">DJ Hargreaves</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rowlands, Av" uniqKey="Rowlands A">AV Rowlands</name>
</author>
<author>
<name sortKey="Schuna, Jm" uniqKey="Schuna J">JM Schuna</name>
</author>
<author>
<name sortKey="Stiles, Vh" uniqKey="Stiles V">VH Stiles</name>
</author>
<author>
<name sortKey="Tudor Locke, C" uniqKey="Tudor Locke C">C Tudor-Locke</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Saarikallio, S" uniqKey="Saarikallio S">S Saarikallio</name>
</author>
<author>
<name sortKey="Erkkil, J" uniqKey="Erkkil J">J Erkkilä</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schellenberg, Eg" uniqKey="Schellenberg E">EG Schellenberg</name>
</author>
<author>
<name sortKey="Von Scheve, C" uniqKey="Von Scheve C">C von Scheve</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schlagman, S" uniqKey="Schlagman S">S Schlagman</name>
</author>
<author>
<name sortKey="Kvavilashvili, L" uniqKey="Kvavilashvili L">L Kvavilashvili</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sloboda, Ja" uniqKey="Sloboda J">JA Sloboda</name>
</author>
<author>
<name sortKey="O Eill, Sa" uniqKey="O Eill S">SA O’Neill</name>
</author>
<author>
<name sortKey="Ivaldi, A" uniqKey="Ivaldi A">A Ivaldi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Smallwood, J" uniqKey="Smallwood J">J Smallwood</name>
</author>
<author>
<name sortKey="Schooler, Jw" uniqKey="Schooler J">JW Schooler</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Smallwood, J" uniqKey="Smallwood J">J Smallwood</name>
</author>
<author>
<name sortKey="Schooler, Jw" uniqKey="Schooler J">JW Schooler</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sowi Ski, J" uniqKey="Sowi Ski J">J Sowiński</name>
</author>
<author>
<name sortKey="Dalla Bella, S" uniqKey="Dalla Bella S">S Dalla Bella</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tarrant, M" uniqKey="Tarrant M">M Tarrant</name>
</author>
<author>
<name sortKey="North, Ac" uniqKey="North A">AC North</name>
</author>
<author>
<name sortKey="Hargreaves, Dj" uniqKey="Hargreaves D">DJ Hargreaves</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wammes, M" uniqKey="Wammes M">M Wammes</name>
</author>
<author>
<name sortKey="Baruss, I" uniqKey="Baruss I">I Barušs</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Webster, Gd" uniqKey="Webster G">GD Webster</name>
</author>
<author>
<name sortKey="Weir, Cd" uniqKey="Weir C">CD Weir</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Williamson, Vj" uniqKey="Williamson V">VJ Williamson</name>
</author>
<author>
<name sortKey="Jilka, Sr" uniqKey="Jilka S">SR Jilka</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Williamson, Vj" uniqKey="Williamson V">VJ Williamson</name>
</author>
<author>
<name sortKey="Jilka, Sr" uniqKey="Jilka S">SR Jilka</name>
</author>
<author>
<name sortKey="Fry, J" uniqKey="Fry J">J Fry</name>
</author>
<author>
<name sortKey="Finkel, S" uniqKey="Finkel S">S Finkel</name>
</author>
<author>
<name sortKey="Mullensiefen, D" uniqKey="Mullensiefen D">D Müllensiefen</name>
</author>
<author>
<name sortKey="Stewart, L" uniqKey="Stewart L">L Stewart</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Williamson, V" uniqKey="Williamson V">V Williamson</name>
</author>
<author>
<name sortKey="Liikkanen, L" uniqKey="Liikkanen L">L Liikkanen</name>
</author>
<author>
<name sortKey="Jakubowski, K" uniqKey="Jakubowski K">K Jakubowski</name>
</author>
<author>
<name sortKey="Stewart, L" uniqKey="Stewart L">L Stewart</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Williamson, Vj" uniqKey="Williamson V">VJ Williamson</name>
</author>
<author>
<name sortKey="Mullensiefen, D" uniqKey="Mullensiefen D">D Müllensiefen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zatorre, Rj" uniqKey="Zatorre R">RJ Zatorre</name>
</author>
<author>
<name sortKey="Halpern, Ar" uniqKey="Halpern A">AR Halpern</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zelaznik, Hn" uniqKey="Zelaznik H">HN Zelaznik</name>
</author>
<author>
<name sortKey="Rosenbaum, Da" uniqKey="Rosenbaum D">DA Rosenbaum</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zhang, S" uniqKey="Zhang S">S Zhang</name>
</author>
<author>
<name sortKey="Rowlands, Av" uniqKey="Rowlands A">AV Rowlands</name>
</author>
<author>
<name sortKey="Murray, P" uniqKey="Murray P">P Murray</name>
</author>
<author>
<name sortKey="Hurst, Tl" uniqKey="Hurst T">TL Hurst</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Mem Cognit</journal-id>
<journal-id journal-id-type="iso-abbrev">Mem Cognit</journal-id>
<journal-title-group>
<journal-title>Memory &amp; Cognition</journal-title>
</journal-title-group>
<issn pub-type="ppub">0090-502X</issn>
<issn pub-type="epub">1532-5946</issn>
<publisher>
<publisher-name>Springer US</publisher-name>
<publisher-loc>New York</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">26122757</article-id>
<article-id pub-id-type="pmc">4624826</article-id>
<article-id pub-id-type="publisher-id">531</article-id>
<article-id pub-id-type="doi">10.3758/s13421-015-0531-5</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>The speed of our mental soundtracks: Tracking the tempo of involuntary musical imagery in everyday life</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Jakubowski</surname>
<given-names>Kelly</given-names>
</name>
<address>
<phone>44-75-3003-5647</phone>
<email>k.jakubowski@gold.ac.uk</email>
</address>
<xref ref-type="aff" rid="Aff1"></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Farrugia</surname>
<given-names>Nicolas</given-names>
</name>
<xref ref-type="aff" rid="Aff1"></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Halpern</surname>
<given-names>Andrea R.</given-names>
</name>
<xref ref-type="aff" rid="Aff2"></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Sankarpandi</surname>
<given-names>Sathish K.</given-names>
</name>
<xref ref-type="aff" rid="Aff3"></xref>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Stewart</surname>
<given-names>Lauren</given-names>
</name>
<address>
<phone>44-20-7919-7873</phone>
<email>l.stewart@gold.ac.uk</email>
</address>
<xref ref-type="aff" rid="Aff1"></xref>
</contrib>
<aff id="Aff1">
<label></label>
Department of Psychology, Goldsmiths, University of London, New Cross Road, New Cross, London, SE14 6NW UK</aff>
<aff id="Aff2">
<label></label>
Department of Psychology, Bucknell University, Lewisburg, PA USA</aff>
<aff id="Aff3">
<label></label>
School of Nursing and Health Sciences, University of Dundee, Dundee, UK</aff>
</contrib-group>
<pub-date pub-type="epub">
<day>30</day>
<month>6</month>
<year>2015</year>
</pub-date>
<pub-date pub-type="pmc-release">
<day>30</day>
<month>6</month>
<year>2015</year>
</pub-date>
<pub-date pub-type="ppub">
<year>2015</year>
</pub-date>
<volume>43</volume>
<issue>8</issue>
<fpage>1229</fpage>
<lpage>1242</lpage>
<permissions>
<copyright-statement>© The Author(s) 2015</copyright-statement>
<license license-type="OpenAccess">
<license-p>
<bold>Open Access</bold>
This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.</license-p>
</license>
</permissions>
<abstract id="Abs1">
<p>The study of spontaneous and everyday cognitions is an area of rapidly growing interest. One of the most ubiquitous forms of spontaneous cognition is involuntary musical imagery (INMI), the involuntarily retrieved and repetitive mental replay of music. The present study introduced a novel method for capturing temporal features of INMI within a naturalistic setting. This method allowed for the investigation of two questions of interest to INMI researchers in a more objective way than previously possible, concerning (1) the precision of memory representations within INMI and (2) the interactions between INMI and concurrent affective state. Over the course of 4 days, INMI tempo was measured by asking participants to tap to the beat of their INMI with a wrist-worn accelerometer. Participants documented additional details regarding their INMI in a diary. Overall, the tempo of music within INMI was recalled from long-term memory in a highly veridical form, although with a regression to the mean for recalled tempo that parallels previous findings on voluntary musical imagery. A significant positive relationship was found between INMI tempo and subjective arousal, suggesting that INMI interacts with concurrent mood in a similar manner to perceived music. The results suggest several parallels between INMI and voluntary imagery, music perceptual processes, and other types of involuntary memories.</p>
</abstract>
<kwd-group xml:lang="en">
<title>Keywords</title>
<kwd>Music cognition</kwd>
<kwd>Imagery</kwd>
<kwd>Involuntary musical imagery</kwd>
<kwd>Involuntary memory</kwd>
<kwd>Spontaneous cognition</kwd>
<kwd>Tempo</kwd>
</kwd-group>
<custom-meta-group>
<custom-meta>
<meta-name>issue-copyright-statement</meta-name>
<meta-value>© Psychonomic Society, Inc. 2015</meta-value>
</custom-meta>
</custom-meta-group>
</article-meta>
</front>
<body>
<p>Empirical investigations of both non-volitional cognition and everyday thought processes have historically been neglected, due in part to the difficulty of harnessing these mental activities within a laboratory setting (Smallwood & Schooler,
<xref ref-type="bibr" rid="CR50">2006</xref>
,
<xref ref-type="bibr" rid="CR51">2015</xref>
). However, these mental processes comprise a substantial proportion of human cognition (McVay, Kane, & Kwapil,
<xref ref-type="bibr" rid="CR42">2009</xref>
) and provide an avenue for highly ecological research that can complement and extend traditional laboratory-based approaches. In recent years, the implementation of novel research designs has allowed researchers to begin to gain an understanding of spontaneous and naturalistic cognitions, including involuntary memories (e.g., Berntsen, Staugaard, & Sørensen,
<xref ref-type="bibr" rid="CR10">2013</xref>
), mind wandering (e.g., Killingsworth & Gilbert,
<xref ref-type="bibr" rid="CR31">2010</xref>
), and everyday thoughts within naturalistic settings (e.g., Hektner, Schmidt, & Csikszentmihalyi,
<xref ref-type="bibr" rid="CR25">2006</xref>
). The present study aims to further advance this area of research by introducing a new method for studying a type of everyday cognition related to music.</p>
<p>Involuntary musical imagery (INMI, or “earworms”) is the experience of a section of music coming into one’s mind involuntarily – without any intention to retrieve or recall the music – that immediately repeats at least once, without conscious effort to replay the music. Thus, INMI is characterized by two primary features: (1) it is recalled via associative and unplanned retrieval mechanisms, and (2) it is involuntarily repetitive in nature. These two characteristics serve to distinguish INMI from other related musical cognitions such as voluntary musical imagery, which is imagined music that is strategically retrieved (e.g., Zatorre & Halpern,
<xref ref-type="bibr" rid="CR60">2005</xref>
), musical “mind pops,” which comprise brief, single spontaneous appearances of a tune in the mind without repetition (e.g., Kvavilashvili & Anthony,
<xref ref-type="bibr" rid="CR32">2012</xref>
), and musical hallucinations, which are mental representations of musical sounds that are misattributed as originating from the external environment (e.g., Griffiths,
<xref ref-type="bibr" rid="CR21">2000</xref>
). The unplanned nature of retrieval from memory that is implicated in INMI also suggests some parallels between INMI and other types of involuntary memories. Indeed, INMI has been classified by some researchers as a type of involuntary semantic memory (Kvavilashvili & Mandler,
<xref ref-type="bibr" rid="CR33">2004</xref>
). However, INMI diverges from other involuntary memories in its high degree of repetitiveness, which is uncommon in most other involuntary memories. It has been suggested that this repetitiveness within INMI may stem from an exaggeration in the mind of the already highly repetitive nature of much of the music that exists in the Western world (Margulis,
<xref ref-type="bibr" rid="CR40">2014</xref>
).</p>
<p>INMI is a regularly occurring and widespread phenomenon in Western society. In a large-scale online survey, approximately 90 % of respondents reported that they experienced INMI at least once per week (Liikkanen,
<xref ref-type="bibr" rid="CR36">2012</xref>
). INMI has been reported in two studies as the most common type of involuntary semantic memory (Kvavilashvili & Mandler,
<xref ref-type="bibr" rid="CR33">2004</xref>
; Liikkanen,
<xref ref-type="bibr" rid="CR36">2012</xref>
), and reports of INMI experiences have been gathered from many countries across the globe (Liikkanen, Jakubowski, & Toivanen,
<xref ref-type="bibr" rid="CR37">in press</xref>
). As such, INMI presents valuable opportunities to investigate everyday, spontaneous cognition.</p>
<sec id="Sec1">
<title>Previous involuntary musical imagery (INMI) research</title>
<p>INMI generally comprises the repetitive looping of short fragments of music, rather than whole songs (Brown,
<xref ref-type="bibr" rid="CR11">2006</xref>
; Liikkanen,
<xref ref-type="bibr" rid="CR36">2012</xref>
), and is often subjectively reported to be an authentic mental replication of the musical content of the original tune (Brown,
<xref ref-type="bibr" rid="CR11">2006</xref>
; Williamson & Jilka,
<xref ref-type="bibr" rid="CR56">2013</xref>
). INMI episodes reported in one diary study ranged widely in duration, from 2 to 240 min, with a median duration of 36 min (Halpern & Bartlett,
<xref ref-type="bibr" rid="CR24">2011</xref>
). Another diary study reported a mean INMI episode duration of 27.25 min (Beaman & Williams,
<xref ref-type="bibr" rid="CR4">2010</xref>
). INMI can occur for a wide range of genres of music, including pop, rock, classical, children’s songs, TV jingles, and film music (Beaman & Williams,
<xref ref-type="bibr" rid="CR4">2010</xref>
; Halpern & Bartlett,
<xref ref-type="bibr" rid="CR24">2011</xref>
; Hyman et al.,
<xref ref-type="bibr" rid="CR27">2013</xref>
).</p>
<p>INMI often coincides with states of diffused attention, occurring frequently during housework, walking, or other routine tasks (Floridou & Müllensiefen,
<xref ref-type="bibr" rid="CR15">2015</xref>
; Hyman et al.,
<xref ref-type="bibr" rid="CR27">2013</xref>
; Williamson et al.,
<xref ref-type="bibr" rid="CR57">2012</xref>
), a feature shared with other types of involuntary memories (Berntsen,
<xref ref-type="bibr" rid="CR8">1998</xref>
; Kvavilashvili & Mandler,
<xref ref-type="bibr" rid="CR33">2004</xref>
). In terms of evaluations of the experience, INMI is more often rated as emotionally positive or neutral than negative (Beaman & Williams,
<xref ref-type="bibr" rid="CR4">2010</xref>
; Halpern & Bartlett,
<xref ref-type="bibr" rid="CR24">2011</xref>
; Liikkanen,
<xref ref-type="bibr" rid="CR36">2012</xref>
). When INMI episodes do become troublesome or worrying, many people engage in both active and passive coping behaviors in an attempt to eradicate the unwanted INMI (Beaman & Williams,
<xref ref-type="bibr" rid="CR4">2010</xref>
; Williamson, Liikkanen, Jakubowski, & Stewart,
<xref ref-type="bibr" rid="CR58">2014</xref>
). Studies on individual differences have revealed links between INMI propensity and/or duration of INMI episodes and openness to experience, neuroticism, transliminality, schizotypy, obsessive-compulsive traits, and musical engagement (Beaman & Williams,
<xref ref-type="bibr" rid="CR5">2013</xref>
; Beaty et al.,
<xref ref-type="bibr" rid="CR6">2013</xref>
; Floridou, Williamson, & Müllensiefen,
<xref ref-type="bibr" rid="CR16">2012</xref>
; Müllensiefen et al.,
<xref ref-type="bibr" rid="CR43">2014</xref>
; Wammes & Barušs,
<xref ref-type="bibr" rid="CR54">2009</xref>
).</p>
<p>One limitation of previous INMI research is that much of this evidence is based on subjective reports regarding the inner experience of music. For instance, participants in some studies have verbally reported that INMI often represents a highly authentic mental replication of a familiar song (Brown,
<xref ref-type="bibr" rid="CR11">2006</xref>
; Williamson & Jilka,
<xref ref-type="bibr" rid="CR56">2013</xref>
), but researchers have not investigated the degree of precision with which the imagery replicates the original music in terms of pitch, tempo, lyrics, etc. The present study implemented a new method that quantitatively measured one musical feature of INMI – tempo – as it occurred during daily life by asking participants to tap to the beat of their INMI while wearing a wrist-worn accelerometer. By obtaining tempo information for individual INMI episodes, the present study was able to gain detailed insights into several questions of interest to researchers of INMI in a more objective manner than has previously been possible. Specifically, the research questions investigated in the present study concerned (1) the precision of memory representations within INMI, and (2) the interactions between INMI and concurrent affective state.</p>
</sec>
<sec id="Sec2">
<title>Investigating the precision of INMI tempo recall</title>
<p>The precision of
<italic>deliberately recalled</italic>
musical memories has previously been investigated in laboratory-based studies. These studies have suggested that the pitch and tempo of familiar music are generally recalled (1) highly veridically, in comparison to an original, standard version of a song (Frieler et al.,
<xref ref-type="bibr" rid="CR17">2013</xref>
; Levitin,
<xref ref-type="bibr" rid="CR34">1994</xref>
; Jakubowski, Farrugia, & Stewart,
<xref ref-type="bibr" rid="CR28">2014</xref>
; Levitin & Cook,
<xref ref-type="bibr" rid="CR35">1996</xref>
) and (2) highly consistently, in multiple trials across single participants (Halpern,
<xref ref-type="bibr" rid="CR22">1988</xref>
,
<xref ref-type="bibr" rid="CR23">1989</xref>
). In a study of voluntary musical imagery for tempo, Halpern (
<xref ref-type="bibr" rid="CR22">1988</xref>
) reported a significant correlation between the tempo at which participants set familiar tunes in a perceived music condition and the tempo at which they imagined the same tunes in an imagery condition. However, she also found evidence for a “regression to the mean” for imagined tempo, such that relatively slow songs tended to be imagined faster than their preferred perceived tempo, and relatively fast songs tended to be imagined slower than their preferred perceived tempo.</p>
<p>In the present study, the veridicality of
<italic>involuntarily recalled</italic>
, everyday occurrences of musical imagery was investigated for songs that exist in canonical (standard, recorded) versions by comparing INMI tempo measurements to the tempo of the original songs. Veridical representations of musical tempo within INMI would suggest a parallel between the memory mechanisms implemented in deliberately recalled music and spontaneous musical imagery occurring within a naturalistic setting. Such a finding would also provide links to other types of involuntary memory, as involuntary autobiographical memories have often been found to be as specific and detailed as, or even more so than, voluntary autobiographical memories (Berntsen,
<xref ref-type="bibr" rid="CR8">1998</xref>
; Mace,
<xref ref-type="bibr" rid="CR39">2006</xref>
; Schlagman & Kvavilashvili,
<xref ref-type="bibr" rid="CR48">2008</xref>
). However, if temporal information were not preserved with high veridicality during INMI, this could suggest that other elements, such as affective state, might influence the stability of tempo within INMI. The present design also allowed for the investigation of two secondary questions regarding temporal veridicality: (1) the influence of having recently heard a song on the veridicality of tempo recall, and (2) whether evidence for a regression to the mean could be found for INMI, similar to that reported for voluntary imagery by Halpern (
<xref ref-type="bibr" rid="CR22">1988</xref>
).</p>
<p>The temporal consistency between multiple INMI episodes of the same song was explored in the present research by analyzing the tempi of songs that were repeatedly experienced as INMI within the same participant over the data collection period. Work by Byron and Fowles (
<xref ref-type="bibr" rid="CR12">2013</xref>
) has shown a quick exponential decay in the recurrence of the same song as INMI; thus, the number of instances of recurrent INMI songs was expected to be small. Nevertheless, this exploratory analysis contributed to our overall investigation into the temporal stability of the INMI experience.</p>
</sec>
<sec id="Sec3">
<title>Investigating the relationship between INMI and affective states</title>
<p>If spontaneously recalled musical memories are found to rely on similar memory mechanisms to deliberately retrieved music, might they also serve similar functions to purposeful music selection? One of the most common uses of music within Western society is for mood regulation (Juslin & Laukka,
<xref ref-type="bibr" rid="CR30">2004</xref>
; Saarikallio & Erkkilä,
<xref ref-type="bibr" rid="CR46">2007</xref>
; Sloboda, O’Neill, & Ivaldi,
<xref ref-type="bibr" rid="CR49">2001</xref>
; Tarrant, North, & Hargreaves,
<xref ref-type="bibr" rid="CR53">2000</xref>
), and a handful of existing studies provide support for the idea that imagined music might also serve as a mood regulatory mechanism in the absence of an external music source. Participants in qualitative research have reported associations between their current mood and the type of INMI they experience (Williamson et al.,
<xref ref-type="bibr" rid="CR57">2012</xref>
), and diary-based methods have revealed that INMI episodes are more frequent in more alert mood states (Bailes,
<xref ref-type="bibr" rid="CR3">2012</xref>
). Research on voluntary musical imagery has revealed parallels between the decoding of emotion in music perception and imagery (Lucas, Schubert, & Halpern,
<xref ref-type="bibr" rid="CR38">2010</xref>
), indicating that music can convey similar emotions whether it is imagined or heard aloud. Additionally, research from the involuntary autobiographical memory literature indicates that these types of memories are often more emotional than their deliberately recalled counterparts, suggesting that involuntary retrieval of memories might even enhance their emotional qualities (Berntsen & Hall,
<xref ref-type="bibr" rid="CR9">2004</xref>
). However, no previous evidence exists as to whether certain musical dimensions of the INMI experience might relate to specific mood constructs, in a similar fashion to the way in which different features, such as the tempo, musical mode, or texture, of a piece of music can elicit different emotional responses during music listening (Husain, Thompson, & Schellenberg,
<xref ref-type="bibr" rid="CR26">2002</xref>
; Webster & Weir,
<xref ref-type="bibr" rid="CR55">2005</xref>
). As such, the second main aim of the study was to use our newly developed measures for capturing the tempo of INMI in order to investigate how musical features of the INMI experience might relate to one’s concurrent affective state.</p>
<p>Hypotheses for this research question were based on previous findings regarding the relationships between features of perceived music and emotional response. Several previous studies have revealed a link between musical tempo and arousal. Listening to fast tempo music can increase subjective arousal (Husain et al.,
<xref ref-type="bibr" rid="CR26">2002</xref>
), fast tempo music is preferred in high arousal conditions such as exercise (Edworthy & Waring,
<xref ref-type="bibr" rid="CR13">2006</xref>
; North & Hargreaves,
<xref ref-type="bibr" rid="CR44">2000</xref>
), and physiological arousal can increase tempo judgments when participants are asked to indicate the tempo that “sounds right” for familiar, non-canonical songs (Jakubowski, Halpern, Grierson, & Stewart,
<xref ref-type="bibr" rid="CR29">2015</xref>
). As such, a positive relationship was predicted between subjective arousal and INMI tempo. Emotional valence appears to be less clearly related to musical tempo, but has been related to other features of music such as musical mode, i.e., major versus minor, such that the major mode is associated with positive emotional valence and the minor mode with negative valence (Gagnon & Peretz,
<xref ref-type="bibr" rid="CR18">2003</xref>
; Husain et al.,
<xref ref-type="bibr" rid="CR26">2002</xref>
; Webster & Weir,
<xref ref-type="bibr" rid="CR55">2005</xref>
). In accordance with this previous research, the musical mode for each reported INMI song was also determined, with the prediction that major mode INMI would co-occur with more positive emotions than minor mode INMI.</p>
</sec>
<sec id="Sec4">
<title>Summary of research questions</title>
<p>In summary, the present study employed a novel method to collect INMI tempo data during participants’ everyday activities over the course of 4 days. The acquired data were used to investigate two specific research questions. The first main question examined the precision of tempo recall within INMI, specifically in regard to veridicality and consistency. Evidence for veridical and consistent recall of the tempo of INMI would provide parallels to previous findings on deliberately recalled music (e.g., Halpern,
<xref ref-type="bibr" rid="CR22">1988</xref>
; Levitin & Cook,
<xref ref-type="bibr" rid="CR35">1996</xref>
). The second main question examined the relationships between musical features of INMI, specifically tempo and musical mode, and self-reported affective states, specifically in terms of arousal and valence. Evidence for relationships between INMI tempo and concurrent arousal and between INMI mode and emotional valence would provide parallels to previous findings on music listening (e.g., Edworthy & Waring,
<xref ref-type="bibr" rid="CR13">2006</xref>
; Husain et al.,
<xref ref-type="bibr" rid="CR26">2002</xref>
). The results of the study represent a first step towards an understanding of temporal aspects of INMI within daily life.</p>
</sec>
<sec id="Sec5" sec-type="materials|methods">
<title>Method</title>
<sec id="Sec6">
<title>Design</title>
<p>A naturalistic study was conducted in which participants (1) tapped to the beat of their INMI while wearing an accelerometer that recorded their movements and (2) recorded information about each INMI episode in a diary during their daily lives over a period of 4 days.</p>
</sec>
<sec id="Sec7">
<title>Participants</title>
<p>Participants were 17 volunteers (seven male), aged 20 to 34 years (
<italic>M</italic>
= 24.59,
<italic>SD</italic>
= 4.20). All participants were recruited on the basis that they reported experiencing earworms
<xref ref-type="fn" rid="Fn1">1</xref>
several times a day and were screened in advance in order to exclude any prospective participants who exhibited difficulties in tapping to the beat of musical imagery. The screening task involved tapping to the beat of familiar, voluntarily imagined songs in the laboratory. Any participant who was not an outlier
<xref ref-type="fn" rid="Fn2">2</xref>
on this task in terms of tapping variability was deemed eligible for inclusion in the study.</p>
<p>The sample comprised both musically trained and untrained participants. All participants received modest monetary compensation.</p>
</sec>
<sec id="Sec8">
<title>Ethics statement</title>
<p>The study protocol was approved by the ethics committee of Goldsmiths, University of London, UK. Written informed consent was obtained from all participants.</p>
</sec>
<sec id="Sec9">
<title>Materials</title>
<sec id="Sec10">
<title>Measuring INMI tempo</title>
<p>To record INMI tempo, a GeneActiv wrist-worn accelerometer was employed.
<xref ref-type="fn" rid="Fn3">3</xref>
This device resembles a wristwatch and is a noninvasive tool for measuring participants’ movement data throughout the day (Rowlands et al.,
<xref ref-type="bibr" rid="CR45">2014</xref>
; Zhang, Rowlands, Murray, & Hurst,
<xref ref-type="bibr" rid="CR62">2012</xref>
). Measurements for the present study were taken at the GeneActiv’s maximum sampling rate of 100 Hz. This maximum sampling rate imposed some limitations on the device’s ability to measure very fast tapping speeds. However, at the mean tempo of 100.9 beats per minute (bpm) reported for INMI episodes in the present study, the degree of uncertainty in the accelerometer measurement was only 1.6 bpm. Participants were asked to tap to the beat each time an earworm occurred while wearing the device.</p>
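The quoted uncertainty follows from simple error propagation: at 100 Hz each tap onset is resolved to the nearest 10 ms, and a tempo of 100.9 bpm corresponds to an inter-beat interval of roughly 0.59 s. A back-of-envelope check in plain Python (not from the paper; the published 1.6 bpm figure may reflect different rounding or averaging across taps):

    # Quantization limit on a tempo estimate from a 100-Hz accelerometer.
    fs = 100.0              # sampling rate (Hz) -> 10-ms timing resolution
    tempo = 100.9           # mean INMI tempo reported in the study (bpm)

    ibi = 60.0 / tempo      # inter-beat interval (~0.595 s)
    dt = 1.0 / fs           # one-sample error in a measured interval

    # Tempi implied by an interval mismeasured by +/- one sample.
    slow = 60.0 / (ibi + dt)    # ~99.2 bpm
    fast = 60.0 / (ibi - dt)    # ~102.6 bpm
    print(f"uncertainty ~ +/-{(fast - slow) / 2:.1f} bpm")   # ~1.7 bpm

Because each episode's tempo estimate is based on many inter-tap intervals rather than a single one, the effective uncertainty of the episode mean is smaller still.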
<p>To validate the GeneActiv device for measuring tapping data, a pilot study was conducted in which self-paced tapping (slow, medium, and fast tempi) was simultaneously measured by tapping on a laptop touchpad while wearing the accelerometer. Tap onset times were registered by the laptop touchpad using MAX 6.1,
<xref ref-type="fn" rid="Fn4">4</xref>
and were also extracted from the accelerometer data using the analysis procedure described below (see
<xref rid="Sec15" ref-type="sec">
<italic>Tapping Data Analysis</italic>
</xref>
). Both tap onset time series were processed to calculate the tempo of the series in bpm. All tempi calculated with the accelerometer data were within 1 bpm of the tempi measured using the touchpad.</p>
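In outline, the validation amounts to deriving a tempo from each device's onset series and checking agreement. A sketch with hypothetical onset times (the study's touchpad timestamps came from MAX 6.1; the numbers below are illustrative only):

    import numpy as np

    def tempo_bpm(onsets):
        """Tempo implied by a series of tap-onset times, in seconds."""
        return 60.0 / np.mean(np.diff(onsets))

    # Hypothetical paired onset series for one self-paced tapping trial.
    touchpad = np.array([0.000, 0.612, 1.198, 1.805, 2.411, 3.013])
    accel    = np.array([0.00, 0.61, 1.20, 1.80, 2.41, 3.01])  # 10-ms grid

    # The pilot criterion reported above: tempi agree to within 1 bpm.
    print(abs(tempo_bpm(touchpad) - tempo_bpm(accel)) < 1.0)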
</sec>
<sec id="Sec11">
<title>Self-report diary measures</title>
<p>A paper diary was given to participants for reporting on the occurrence and circumstances of each earworm experienced over the 4-day period. Each page comprised 11 questions pertaining to a single earworm episode; participants were asked to fill in this booklet once they had finished tapping to the beat of the earworm. Each page of the diary asked for the time and date of the episode, the time the diary questions were completed, the name, artist, and section of the earworm song, the last time the song was heard aloud, whether the episode occurred during a repetitive movement (e.g., walking/running), and information on internal or external events that might have triggered the earworm episode (see Appendix 1 for full set of questions). The categories for the question on how the earworm episodes were triggered were based on previous large-scale, qualitative research (Williamson et al.,
<xref ref-type="bibr" rid="CR57">2012</xref>
). The diary also comprised seven mood pairs used in previous musical imagery research by Bailes (
<xref ref-type="bibr" rid="CR1">2006</xref>
,
<xref ref-type="bibr" rid="CR2">2007</xref>
,
<xref ref-type="bibr" rid="CR3">2012</xref>
) that were adapted from a study of music in everyday life by Sloboda et al. (
<xref ref-type="bibr" rid="CR49">2001</xref>
). These seven mood pairs group into three factors: Positivity, Arousal, and Present Mindedness. Participants were asked to rate their mood on these seven scales in terms of the way they felt just before the earworm began.</p>
</sec>
</sec>
<sec id="Sec12">
<title>Procedure</title>
<p>Participants were asked to choose a 4-day block of time during which they felt they could complete the study most effectively. Each participant then met with the experimenter for approximately 15 min to receive the instructions and materials for the study. The experimenter provided a definition for the term “earworm,” as follows: “An earworm is a short section of music that comes into your mind without effort (it is involuntary; without any intention to retrieve or recall the music) and then repeats by itself (immediately repeated at least once, on a loop, without you consciously trying to replay the music).” Participants were instructed that whenever they experienced an earworm during the next 4 days, they were to tap along to the beat of the music as closely as possible to what they heard in their head while wearing the accelerometer device. They were asked to tap at least 20 times during each earworm episode. Examples of familiar songs (“Jingle Bells” and “Row, Row, Row Your Boat”) were provided to ensure that participants understood what was meant by the beat of the music (see Fig. 
<xref rid="Fig1" ref-type="fig">1</xref>
). The experimenter then demonstrated the tapping method, in which participants were asked to tap with their full forearm on their leg. The experimenter also showed the participants how to press the button on the accelerometer to serve as a marker of the end of each tapping episode. No button press was required at the start of the tapping episode so that participants could begin tapping as soon as they noticed an earworm, without impinging upon the spontaneous nature of the event. The experimenter asked each participant to test out both the tapping method and the button press in the laboratory. The experimenter then showed participants the paper diary and explained each question to ensure clarity.
<fig id="Fig1">
<label>Fig. 1</label>
<caption>
<p>Example text used to explain the meaning of a “beat” to participants. Bold and underlined syllables correspond to beats in the music</p>
</caption>
<graphic xlink:href="13421_2015_531_Fig1_HTML" id="MO1"></graphic>
</fig>
</p>
<p>Participants wore the accelerometer and carried the paper diary with them for a period of 4 days (96 hours). During this period, participants tapped to the beat of their earworms whenever possible and filled out the diary as soon as possible after the tapping period. They were debriefed as to the purposes of the experiment upon returning the study materials.</p>
</sec>
<sec id="Sec13">
<title>Analysis</title>
<sec id="Sec14">
<title>Diary data analysis</title>
<p>Handwritten diary data were entered into Microsoft Excel for further analysis in Excel and R. A total of 275 INMI episodes were reported in the diaries. Scores on each of the seven mood pairs were grouped into the three factors (Positivity, Arousal, and Present Mindedness) designated by the original authors of the mood scale (Sloboda et al.,
<xref ref-type="bibr" rid="CR49">2001</xref>
) and summed. Reverse-scored items were recoded as necessary before summing. The Positivity factor comprised the happy/sad and tense/relaxed mood pairs, Arousal comprised the alert/drowsy and energetic/tired pairs, and Present Mindedness comprised the interested/bored, involved/detached, and lonely/connected pairs.</p>
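As a concrete illustration of this scoring, the sketch below (Python rather than the Excel/R workflow used in the study) groups the seven pairs into their factors and sums them. The 1-7 scale and the choice of which pairs are reverse-scored are assumptions for illustration; the diary's exact response format is not reproduced here.

    # Factor scoring sketch; scale endpoints and reversed items are assumed.
    FACTORS = {
        "Positivity": ["happy/sad", "tense/relaxed"],
        "Arousal": ["alert/drowsy", "energetic/tired"],
        "Present Mindedness": ["interested/bored", "involved/detached",
                               "lonely/connected"],
    }
    REVERSED = {"tense/relaxed", "lonely/connected"}  # assumed direction
    SCALE_MAX = 7                                     # assumed 1-7 ratings

    def factor_scores(ratings):
        """ratings: dict mapping mood pair -> rating for one INMI episode."""
        scores = {}
        for factor, pairs in FACTORS.items():
            total = 0
            for pair in pairs:
                r = ratings[pair]
                if pair in REVERSED:
                    r = SCALE_MAX + 1 - r   # recode so high = more of factor
                total += r
            scores[factor] = total
        return scores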
</sec>
<sec id="Sec15">
<title>Tapping data analysis</title>
<p>To isolate individual tapping episodes, each participant’s movement data were viewed within the Data Analysis feature of the GeneActiv software. Each episode was located using the time and date reported in the diary booklet, with the button press as a marker of the episode endpoint. The start of a tapping episode was detected by examining the 2 min preceding the button press and locating the onset of a sequence of successive acceleration peaks corresponding to repetitive tapping. Once an episode was isolated, it was saved for further analysis. No discernable corresponding tapping sequence was found in the accelerometer data for ten of the episodes reported in the diaries (3.64 % of the reported episodes). This could be due to a variety of reasons, such as the participant forgetting to tap, writing down the incorrect time in the diary, or not tapping a discernable beat pattern.</p>
<p>Next, each INMI episode was analyzed using a tap detection algorithm in MATLAB (see Fig. 
<xref rid="Fig2" ref-type="fig">2</xref>
). The magnitude of the acceleration vector was computed as the square root of the sum of the three squared acceleration signals (x, y, and z). The resulting signal was smoothed using three passes of a running average filter in order to remove high frequency noise, and local maxima detection
<xref ref-type="fn" rid="Fn5">5</xref>
was performed on the smoothed signal. Detected maxima were considered tap onsets if their absolute height exceeded a threshold; this threshold was defined as a ratio of the highest maximum for the current tap sequence. The default threshold was set to 0.4, but was adjusted manually for each episode because tapping strength and patterns varied greatly between and even within participants.
<fig id="Fig2">
<label>Fig. 2</label>
<caption>
<p>Graphical examples (from top to bottom) of (1) accelerometer movement data (minus the first ten excluded taps; circles denote local maxima), (2) series of corresponding inter-tap intervals, and (3) three individual taps from graph 1 (enlarged for clarity)</p>
</caption>
<graphic xlink:href="13421_2015_531_Fig2_HTML" id="MO2"></graphic>
</fig>
</p>
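The steps above translate directly into a short peak-picking routine. The following is an illustrative Python sketch of the same pipeline, not the authors' MATLAB implementation; the smoothing window length is an assumption, since only the number of passes (three) and the default threshold ratio (0.4) are specified.

    import numpy as np

    def detect_taps(acc, fs=100.0, win=5, threshold_ratio=0.4):
        """acc: (N, 3) array of x/y/z acceleration sampled at fs Hz.
        Returns estimated tap-onset times in seconds."""
        # Magnitude of the acceleration vector.
        mag = np.sqrt((acc ** 2).sum(axis=1))
        # Three passes of a running-average filter to remove high-frequency
        # noise (window length `win` is assumed; the paper does not give it).
        kernel = np.ones(win) / win
        for _ in range(3):
            mag = np.convolve(mag, kernel, mode="same")
        # Local maxima: samples strictly higher than both neighbours.
        peaks = np.flatnonzero((mag[1:-1] > mag[:-2]) &
                               (mag[1:-1] > mag[2:])) + 1
        if peaks.size == 0:
            return np.array([])
        # Keep maxima whose height exceeds a ratio of the episode's highest
        # maximum (default 0.4; adjusted per episode in the study).
        peaks = peaks[mag[peaks] >= threshold_ratio * mag[peaks].max()]
        return peaks / fs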
<p>For each file, the resultant tap series was then processed using the following steps. The first ten taps were excluded from analysis, in line with previous tapping literature (e.g., Benoit et al.,
<xref ref-type="bibr" rid="CR7">2014</xref>
; Sowiński & Dalla Bella,
<xref ref-type="bibr" rid="CR52">2013</xref>
; Zelaznik & Rosenbaum,
<xref ref-type="bibr" rid="CR61">2010</xref>
), and all numerical measurements were calculated based on the remaining taps. If there were fewer than ten remaining taps after excluding the first ten taps, this was recorded as a missing value, as the tapping period was deemed too short to extract a reliable tempo estimate. Overall, 30 INMI episodes were excluded on this basis (10.91 % of the total data).</p>
<p>Next, the time series of inter-tap intervals (ITI) was calculated as the difference between all successive tap onsets. This ITI series was further processed to remove artifacts and outliers (similar to the procedure used in Benoit et al.,
<xref ref-type="bibr" rid="CR7">2014</xref>
). Artifacts occur when two taps are registered in brief succession, and can originate from rebounds (e.g., two fingers or two parts of the wrist/hand hitting the tapping surface in brief succession) or signal glitches. In this case, artifacts were defined as ITI values of less than 100 ms. Outliers were defined as ITI values greater than three times the interquartile range from the median value of the ITI series, and usually represented missing taps. Overall, the average percentage of outliers (outliers divided by total number of taps) across all usable tapping sequences was 2.6 %, and 55.7 % of the usable tapping sequences contained no outliers at all. Using the artifact- and outlier-free ITI series, an average ITI value, coefficient of variation (CV; a normalized measure of tapping variability defined as the standard deviation of the ITI series divided by the mean ITI), and tempo in beats per minute (bpm) were computed for each episode for further analysis.</p>
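<p>The exclusion and summary steps above amount to a short post-processing routine. The sketch below follows those rules directly; representing onsets in seconds and the ordering of the cleaning steps are assumptions of this illustration.</p>
<preformat>
import numpy as np

def iti_measures(onsets):
    """Tempo measures from a series of tap onset times (in seconds)."""
    onsets = np.asarray(onsets)[10:]      # exclude the first ten taps
    if onsets.size < 10:                  # too short for a reliable tempo
        return None
    iti = np.diff(onsets)                 # inter-tap intervals
    iti = iti[iti >= 0.100]               # remove artifacts (ITI < 100 ms)
    # Remove outliers: ITIs more than three interquartile ranges
    # from the median of the series.
    q1, q3 = np.percentile(iti, [25, 75])
    iti = iti[np.abs(iti - np.median(iti)) <= 3 * (q3 - q1)]
    mean_iti = iti.mean()
    return {"mean_iti": mean_iti,
            "cv": iti.std() / mean_iti,   # coefficient of variation
            "bpm": 60.0 / mean_iti}       # tempo in beats per minute
</preformat>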
<p>Finally, all remaining tapping episodes were visually inspected in a graphical format in MATLAB (see Fig. 
<xref rid="Fig2" ref-type="fig">2</xref>
). In this visual inspection stage, six episodes (2.18 % of the total data) were excluded because they comprised a noisy signal without clearly discernible tapping peaks, and one episode (0.36 % of the total data) was excluded because the participant halved the tempo in the middle of the tapping episode. Following these exclusion steps, 228 INMI episodes remained with usable tempo data (82.91 % of the total reported INMI episodes).</p>
</sec>
<sec id="Sec16">
<title>Mode data analysis</title>
<p>Two musicians were recruited to independently code the musical mode (i.e., major or minor) of each reported INMI episode. The coders followed a protocol resembling that of Schellenberg and von Scheve (
<xref ref-type="bibr" rid="CR47">2012</xref>
), who also hand-coded the mode of pop songs. The coders were required to find a recording of each INMI tune, listen specifically to the section of the song reported as INMI by the participant, and code the mode as major, minor, or ambiguous.
<xref ref-type="fn" rid="Fn6">6</xref>
One of the present authors then collated the independent ratings of the two coders and examined any discrepancies. For 25 episodes, the participant provided insufficient information to determine the mode of the INMI tune. For the remaining 250 INMI episodes, the mode of 203 episodes (81.2 % of the remaining data) was coded identically by the two coders. Episodes that were not coded identically were excluded from further mode-related analyses on the basis of being tonally ambiguous. Of the 203 episodes that were coded identically, 160 were in major keys and 43 were in minor keys.</p>
</sec>
</sec>
</sec>
<sec id="Sec17" sec-type="results">
<title>Results</title>
<p>Descriptive statistics related to the music experienced as INMI and the circumstances surrounding the experience will be reported first to provide context and opportunities for comparison to previous literature. Results pertaining to the two main research questions of the paper, regarding the precision of tempo recall within INMI and the relationships between INMI and affective state, will then be reported.</p>
<sec id="Sec18">
<title>The INMI experience: Descriptive statistics</title>
<p>Of the 275 INMI episodes reported in the diaries, the number of episodes reported per participant during the full 4-day period ranged from 7 to 32; the number of INMI episodes reported during a single day ranged from 0 to 10. The median number of episodes reported per participant was 16, or approximately four episodes per day (median episodes per day = 3.5). As reported in the
<italic>Tapping Data Analysis</italic>
section, a discernible corresponding tapping sequence was found in the accelerometer data for 265 of the 275 reported episodes. As the co-occurrence of a tapping sequence and a diary entry provides strong evidence that an INMI experience actually occurred at the time it was reported, the following descriptive statistics are reported for these 265 episodes.</p>
<p>In total, 182 unique songs were reported as INMI. For the vast majority of episodes, the song title and performer were reported, indicating that the songs were familiar to participants; however, for 11 episodes both the title and performer fields were left blank. Two participants reported experiencing self-composed music as INMI, for one and two episodes, respectively. Other reported songs comprised a mix of pop, classical, rock, rap, jazz, folk, musical theatre, Christmas, TV, and children’s music. Only one song (“Barbie Girl” by Aqua) was reported by two different participants; other repetitions of INMI songs occurred only within the same participant. A total of 42 songs were reported at least twice by the same participant, 15 songs were reported three or more times, and two songs were reported six times.</p>
<p>Participants also reported on how long it had been since they had heard the song experienced as INMI played aloud (e.g., on an iPod, on the radio, or in a live performance). For 16.2 % of episodes, the song experienced as INMI had been heard less than 1 hour ago, and for 23.4 % of episodes the song had been heard less than 3 hours ago. However, for 40.0 % of episodes, participants reported that they had not heard a recording or performance of the song experienced as INMI in over 1 week.</p>
<p>A total of 67 INMI episodes (25.3 % of episodes) were experienced during a repetitive movement. The majority of these (57 episodes) occurred while walking and three episodes occurred while typing. The remaining repetitive movements, each reported once, were: brushing teeth, climbing, cutting vegetables, cycling, dyeing hair, washing dishes, and washing hair.</p>
<p>Finally, participants were asked to report any reasons they thought a song might have occurred as INMI (see Fig. 
<xref rid="Fig3" ref-type="fig">3</xref>
). Recent exposure to a song was named as a likely trigger for 40.4 % of INMI episodes. Association with an environmental trigger, such as a person, word, or sound, was attributed as a potential cause in 15.1 % of episodes. For 27.6 % of episodes, participants reported, “I have no idea why this tune came into my head,” in the absence of any other trigger. Participants were also invited to record additional reasons an INMI episode might have occurred in response to an open question (if they ticked the box “Other”). “Other” reasons were reported for 16.6 % of episodes; however, a substantial proportion of these appeared to be instances where participants were providing additional details that still fit within existing categories. For instance, one participant wrote, “Aphex Twin is to release a new album on Sept. 22”, which could be classified in the category “I was thinking about a future event and related it to this song.” Additional recurring triggers reported in the “Other” category related to features of a melody (N = 3; e.g., “it's a nice melody”) and mood states (N = 10; e.g., “maybe because I feel a bit tired, sitting at my desk makes me relax (it's a slow song)”).
<fig id="Fig3">
<label>Fig. 3</label>
<caption>
<p>Percentages of involuntary musical imagery (INMI) episodes for which each trigger was reported. (Note: As multiple triggers could be reported for each episode, these percentages total to over 100 %)</p>
</caption>
<graphic xlink:href="13421_2015_531_Fig3_HTML" id="MO3"></graphic>
</fig>
</p>
</sec>
<sec id="Sec19">
<title>The tempo of INMI: Descriptive statistics</title>
<p>Overall, 228 INMI episodes had usable tempo data (see
<italic>Tapping Data Analysis</italic>
section for data exclusion criteria). The number of total taps in each usable sequence ranged from 20 to 121 taps, and the duration of the tapping sequences ranged from 7.6 to 92.4 s (
<italic>M</italic>
= 23.6,
<italic>SD</italic>
= 10.1). These episodes ranged in tempo from 42.0 bpm to 196.5 bpm (see Fig. 
<xref rid="Fig4" ref-type="fig">4</xref>
). The mean tempo across all INMI episodes was 100.9 bpm (
<italic>SD</italic>
= 29.9; median = 98.5). The mean CV (coefficient of variation) of tapping across all 228 episodes was 0.06 (
<italic>SD</italic>
= 0.02; range = 0.02–0.13).
<fig id="Fig4">
<label>Fig. 4</label>
<caption>
<p>Tempo distribution of all 228 involuntary musical imagery (INMI) episodes with usable tempo data</p>
</caption>
<graphic xlink:href="13421_2015_531_Fig4_HTML" id="MO4"></graphic>
</fig>
</p>
<sec id="Sec20">
<title>Memory for INMI tempo: Veridicality and consistency</title>
<p>The next aim of the study was to investigate the veridicality of tempo recall within INMI, in comparison to the original version of each INMI tune. Of the 228 episodes with usable tempo data, 132 comprised INMI experiences of songs that exist in a canonical version. We define canonical songs as those that exist in one standard, recorded version. Examples of non-canonical songs include most Christmas songs and classical music, for which recordings might exist but no “definitive version” is apparent from which to obtain tempo information. The tapped tempo of each of these 132 INMI episodes was compared to the original, recorded tempo of the song by examining (1) ratios of the tapped to recorded tempo, (2) absolute deviations (as percentages) of the tapped tempo from the recorded tempo, and (3) the correlation between tapped and recorded tempo across all episodes (see Fig. 
<xref rid="Fig5" ref-type="fig">5</xref>
). A ratio of 1 or an absolute deviation of 0 % for a particular episode would indicate that a participant tapped at the same tempo as the recorded version of the song.
<fig id="Fig5">
<label>Fig. 5</label>
<caption>
<p>Original, recorded tempo for each of the 132 songs that exist in canonical versions plotted against the tempo each song was tapped at when experienced as involuntary musical imagery (INMI)</p>
</caption>
<graphic xlink:href="13421_2015_531_Fig5_HTML" id="MO5"></graphic>
</fig>
</p>
<p>Some extreme ratios of the tapped to recorded tempi likely represented participants halving or doubling the tempo of a song, i.e., tapping at a different metrical subdivision. In accordance with previous research (Halpern,
<xref ref-type="bibr" rid="CR22">1988</xref>
; Jakubowski et al.,
<xref ref-type="bibr" rid="CR28">2014</xref>
), episodes where the ratio of tapped to recorded tempo was 1.9 or greater, or 0.6 or less, were omitted, given the likelihood of participants having doubled or halved the song tempo. This resulted in the exclusion of 17 episodes, leaving 115 episodes for further analysis.</p>
<p>The mean ratio of tapped to recorded tempo for these 115 episodes was 0.98 (
<italic>SD</italic>
= 0.15, median = 0.97) and the mean absolute deviation of the tapped tempo from the original tempo was 10.8 % (
<italic>SD</italic>
= 10.8 %; median = 7.9 %). A highly significant correlation was also found between the tapped and original tempi,
<italic>r</italic>
= 0.77,
<italic>p</italic>
< .001. Overall, 59.1 % of songs were recalled within 10 % of the original recorded tempo and 77.4 % of songs were recalled within 15 % of the original tempo.</p>
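<p>For concreteness, the three veridicality measures and the halving/doubling exclusion can be computed as follows. This Python sketch assumes paired arrays of tapped and recorded tempi in bpm; it is an illustration of the steps above, not the original analysis code.</p>
<preformat>
import numpy as np
from scipy.stats import pearsonr

def veridicality(tapped_bpm, recorded_bpm):
    """Veridicality measures for INMI tempo, following the steps above."""
    tapped = np.asarray(tapped_bpm, dtype=float)
    recorded = np.asarray(recorded_bpm, dtype=float)
    ratio = tapped / recorded
    # Omit likely halved/doubled episodes (ratios of 1.9 or greater,
    # or 0.6 or less).
    keep = (ratio > 0.6) & (ratio < 1.9)
    tapped, recorded, ratio = tapped[keep], recorded[keep], ratio[keep]
    abs_dev = 100 * np.abs(tapped - recorded) / recorded  # deviation in %
    r, p = pearsonr(tapped, recorded)
    return {"mean_ratio": ratio.mean(),
            "mean_abs_dev_pct": abs_dev.mean(),
            "pct_within_10": 100 * np.mean(abs_dev <= 10),
            "r": r, "p": p}
</preformat>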
<p>Next, it was investigated whether recency of hearing a song might influence INMI veridicality, such that songs heard aloud more recently might be recalled more accurately in terms of tempo. The data were split into two approximately equal subsets: songs heard within the past week (N = 64) and songs heard over 1 week ago (N = 51). A Wilcoxon rank-sum test was employed (due to non-normal distributions in the data) to compare these two subsets of the data in terms of their absolute deviations from the original tempo. The result of the test was non-significant,
<italic>W</italic>
= 1399,
<italic>p</italic>
= .19.</p>
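<p>The recency comparison corresponds to a standard two-sample rank test. The sketch below uses synthetic data purely for illustration; note that SciPy reports a normal-approximation z statistic, whereas the W value in the text matches the parameterization used by, e.g., R’s wilcox.test.</p>
<preformat>
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
# Hypothetical absolute tempo deviations (%): songs heard within the
# past week (N = 64) vs. songs heard over 1 week ago (N = 51).
dev_recent = rng.gamma(2.0, 5.0, size=64)
dev_older = rng.gamma(2.0, 5.0, size=51)

stat, p = ranksums(dev_recent, dev_older)
print(f"z = {stat:.2f}, p = {p:.3f}")
</preformat>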
<p>A final question regarding the veridicality of INMI related to previous research in which Halpern (
<xref ref-type="bibr" rid="CR22">1988</xref>
) reported a regression to the mean for the tempo of voluntarily imagined songs. The present data allowed for the first exploration into whether a regression to the mean might also be present for recalled tempo within INMI. An independent-samples
<italic>t</italic>
-test was performed to compare the original, recorded tempo between songs that were recalled slower than the recorded tempo within INMI (those songs with ratios of tapped tempo to recorded tempo of less than 1) and songs recalled faster than the recorded tempo (songs with ratios greater than 1). The mean recorded tempo for songs that were recalled slower than the original tempo was significantly faster than that of songs that were recalled faster than the original tempo,
<italic>t</italic>
(107) = 2.71,
<italic>p</italic>
= .01, thus suggesting regression to the mean within INMI.</p>
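<p>This test splits songs by whether they were recalled slower or faster than the recording and compares the recorded tempi of the two groups. A minimal sketch, assuming arrays of tempo ratios and recorded tempi:</p>
<preformat>
import numpy as np
from scipy.stats import ttest_ind

def regression_to_mean_test(ratio, recorded_bpm):
    """Independent-samples t-test on recorded tempo, grouped by
    whether a song was recalled slower (ratio < 1) or faster
    (ratio > 1) than the recording."""
    ratio = np.asarray(ratio)
    recorded = np.asarray(recorded_bpm)
    slower = recorded[ratio < 1]   # recalled slower than recorded
    faster = recorded[ratio > 1]   # recalled faster than recorded
    return ttest_ind(slower, faster)
</preformat>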
<p>Finally, the consistency of tempo recall for recurrences of the same tune as INMI was examined using the eight songs with usable tempo data that were experienced by the same participant at least three times. It should be noted that, unlike in the veridicality analyses, this sample of eight songs included both canonical and non-canonical songs. The mean tempo difference between the slowest and fastest version of a song experienced as INMI was 19.6 % (
<italic>SD</italic>
= 14.0 %, median = 14.6 %). When comparing the slowest and fastest rendition of each song, two of these eight songs differed in tempo by over 40 %. The remaining six songs differed in tempo by less than 20 %, and five songs differed by less than 15 % (see Table 
<xref rid="Tab1" ref-type="table">1</xref>
).
<table-wrap id="Tab1">
<label>Table 1</label>
<caption>
<p>Consistency of tempi for songs experienced as involuntary musical imagery (INMI) at least three times</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th>Song</th>
<th>Number of INMI episodes</th>
<th>Slowest tempo (bpm)</th>
<th>Fastest tempo (bpm)</th>
<th>Difference between slowest and fastest version (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td>A Sky Full of Stars</td>
<td>3</td>
<td>82.5</td>
<td>116.9</td>
<td>41.7</td>
</tr>
<tr>
<td>Miss You</td>
<td>3</td>
<td>98.5</td>
<td>112.7</td>
<td>14.4</td>
</tr>
<tr>
<td>Ponta de Areia</td>
<td>3</td>
<td>63.0</td>
<td>74.3</td>
<td>18.0</td>
</tr>
<tr>
<td>Spirited Away One Summer's Day</td>
<td>3</td>
<td>78.6</td>
<td>82.6</td>
<td>5.1</td>
</tr>
<tr>
<td>One</td>
<td>4</td>
<td>81.4</td>
<td>93.2</td>
<td>14.4</td>
</tr>
<tr>
<td>You're So Vain</td>
<td>5</td>
<td>111.2</td>
<td>119.6</td>
<td>7.5</td>
</tr>
<tr>
<td>For Unto Us a Child is Born</td>
<td>6</td>
<td>74.6</td>
<td>105.0</td>
<td>40.6</td>
</tr>
<tr>
<td>Non Voglio Cantare</td>
<td>6</td>
<td>117.4</td>
<td>134.8</td>
<td>14.8</td>
</tr>
</tbody>
</table>
</table-wrap>
</p>
</sec>
</sec>
<sec id="Sec21">
<title>Musical features of INMI and affective states</title>
<p>The second main research question investigated the relationship between participants’ affective states and specific musical features of their concurrent INMI. Specifically, (1) a positive relationship was predicted between the Arousal dimension of the mood scale and the tempo of INMI, and (2) the Positivity dimension of participants’ mood ratings was predicted to be higher during INMI in the major versus the minor mode.</p>
<p>The correlations between INMI tempo, INMI mode, Arousal, and Positivity are reported in Table 
<xref rid="Tab2" ref-type="table">2</xref>
. Point-biserial correlations were calculated for the INMI mode variable due to its dichotomous nature; all other reported correlations are Pearson’s correlations (a brief computational sketch follows Table 2).
<table-wrap id="Tab2">
<label>Table 2</label>
<caption>
<p>Correlations of musical features of involuntary musical imagery (INMI) and mood variables</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th></th>
<th>Tempo</th>
<th>Mode</th>
<th>Arousal</th>
<th>Positivity</th>
</tr>
</thead>
<tbody>
<tr>
<td>Tempo</td>
<td>1.00</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Mode</td>
<td>–.003</td>
<td>1.00</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Arousal</td>
<td>.14*</td>
<td>.10</td>
<td>1.00</td>
<td></td>
</tr>
<tr>
<td>Positivity</td>
<td>.15*</td>
<td>–.09</td>
<td>.07</td>
<td>1.00</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p>
<italic>Note</italic>
* signifies a significant correlation at the level of
<italic>p</italic>
< .05. Coding for the INMI mode variable is: 1=minor, 2=major</p>
</table-wrap-foot>
</table-wrap>
</p>
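<p>As noted above, the point-biserial correlation handles the dichotomous mode variable; numerically, it is equivalent to a Pearson correlation with a binary predictor. A brief sketch with synthetic data:</p>
<preformat>
import numpy as np
from scipy.stats import pointbiserialr

rng = np.random.default_rng(1)
mode = rng.integers(1, 3, size=203)        # 1 = minor, 2 = major (as in Table 2)
arousal = rng.normal(20.0, 5.0, size=203)  # hypothetical Arousal scores

r_pb, p = pointbiserialr(mode, arousal)
print(f"r_pb = {r_pb:.2f}, p = {p:.2f}")
</preformat>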
<p>A linear mixed-effects model was fitted with both Arousal and Positivity as predictors of INMI tempo. A mixed-effects model was employed in order to account for the individual variation among participants and the multiple observations recorded from each participant, by including “Participant” as a random effect in the model. In this model, Arousal was a significant positive predictor of INMI tempo and no significant relationship was found between Positivity and INMI tempo. The non-significant Positivity term was then removed and the model was refitted with only Arousal as a predictor of INMI tempo. Arousal was again a significant predictor and the reduced model provided a better fit, based on the Bayesian Information Criterion (BIC), than the full model with both mood variables included as predictors (see Table 
<xref rid="Tab3" ref-type="table">3</xref>
). A pseudo-
<italic>R</italic>
<sup>2</sup>
value of 0.34 was obtained for the effect of arousal on INMI tempo by computing the squared correlation between the INMI tempo values predicted from the mixed-effects model and the observed values of INMI tempo (a computational sketch of this model comparison follows Table 3).
<table-wrap id="Tab3">
<label>Table 3</label>
<caption>
<p>Linear mixed-effects models with mood variables as predictors of INMI tempo</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th></th>
<th>Coefficient</th>
<th>S.E.</th>
<th>
<italic>t</italic>
-value</th>
<th>
<italic>p</italic>
-value</th>
</tr>
</thead>
<tbody>
<tr>
<td colspan="5">Model 1: Arousal and Positivity as predictors of INMI tempo</td>
</tr>
<tr>
<td> Intercept</td>
<td>81.24</td>
<td>12.99</td>
<td>6.26</td>
<td>< .001</td>
</tr>
<tr>
<td> Arousal</td>
<td>1.87</td>
<td>0.67</td>
<td>2.78</td>
<td>.01*</td>
</tr>
<tr>
<td> Positivity</td>
<td>0.08</td>
<td>1.11</td>
<td>0.08</td>
<td>.94</td>
</tr>
<tr>
<td colspan="5">Model 2: Arousal as a predictor of INMI tempo</td>
</tr>
<tr>
<td> Intercept</td>
<td>82.06</td>
<td>7.23</td>
<td>11.35</td>
<td>< .001</td>
</tr>
<tr>
<td> Arousal</td>
<td>1.87</td>
<td>0.67</td>
<td>2.79</td>
<td>.01**</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p>* signifies a significant predictor at the level of
<italic>p</italic>
< .05. BIC = 2163.98</p>
<p>** signifies a significant predictor at the level of
<italic>p</italic>
< .05. BIC = 2160.62</p>
</table-wrap-foot>
</table-wrap>
</p>
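<p>The model comparison described above can be reproduced with standard mixed-model software. The sketch below uses Python’s statsmodels with synthetic data; the variable names are hypothetical, the models are fit by maximum likelihood (rather than REML) so that BICs are comparable across different fixed-effects structures, and BIC is computed by hand from the log-likelihood.</p>
<preformat>
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 228
data = pd.DataFrame({
    "participant": rng.integers(0, 17, size=n),
    "arousal": rng.normal(20.0, 5.0, size=n),
    "positivity": rng.normal(15.0, 3.0, size=n),
})
data["tempo"] = 82 + 1.9 * data["arousal"] + rng.normal(0, 25, size=n)

# Random intercept for participant accounts for repeated observations.
full = smf.mixedlm("tempo ~ arousal + positivity", data,
                   groups=data["participant"]).fit(reml=False)
reduced = smf.mixedlm("tempo ~ arousal", data,
                      groups=data["participant"]).fit(reml=False)

def bic(res):
    k = res.params.size  # estimated parameters (fixed effects + variance)
    return -2 * res.llf + k * np.log(res.nobs)

print(f"BIC full = {bic(full):.2f}, BIC reduced = {bic(reduced):.2f}")
# Pseudo-R^2: squared correlation between predicted and observed tempo.
pseudo_r2 = np.corrcoef(reduced.fittedvalues, data["tempo"])[0, 1] ** 2
print(f"pseudo-R^2 = {pseudo_r2:.2f}")
</preformat>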
<p>A second mixed-effects analysis was conducted with Arousal and Positivity as predictors of INMI mode. A binomial mixed-effects model was fit due to the binary nature of the INMI mode variable. Neither of the mood variables was a significant predictor of INMI mode (see Table 
<xref rid="Tab4" ref-type="table">4</xref>
; a computational sketch follows the table).
<table-wrap id="Tab4">
<label>Table 4</label>
<caption>
<p>Binomial mixed-effects model with mood variables as predictors of INMI mode</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th></th>
<th>Coefficient</th>
<th>S.E.</th>
<th>
<italic>z</italic>
-value</th>
<th>
<italic>p</italic>
-value</th>
</tr>
</thead>
<tbody>
<tr>
<td>Intercept</td>
<td>1.47</td>
<td>1.43</td>
<td>1.03</td>
<td>.30</td>
</tr>
<tr>
<td>Arousal</td>
<td>0.12</td>
<td>0.07</td>
<td>1.67</td>
<td>.10</td>
</tr>
<tr>
<td>Positivity</td>
<td>–0.10</td>
<td>0.13</td>
<td>–0.76</td>
<td>.45</td>
</tr>
</tbody>
</table>
</table-wrap>
</p>
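<p>For the binary mode outcome, a logistic (binomial) mixed model with a participant random intercept is required. In Python’s statsmodels this is available as a variational-Bayes approximation, sketched below with synthetic data; a frequentist equivalent would be, e.g., lme4’s glmer in R. The variable names are hypothetical, and this is an illustration rather than the fitting procedure used for Table 4.</p>
<preformat>
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

rng = np.random.default_rng(3)
n = 203
data = pd.DataFrame({
    "participant": rng.integers(0, 17, size=n).astype(str),
    "arousal": rng.normal(20.0, 5.0, size=n),
    "positivity": rng.normal(15.0, 3.0, size=n),
    "major": rng.integers(0, 2, size=n),  # 0 = minor, 1 = major
})

# Binomial mixed model with a participant random intercept,
# fit by variational Bayes.
model = BinomialBayesMixedGLM.from_formula(
    "major ~ arousal + positivity",
    {"participant": "0 + C(participant)"}, data)
result = model.fit_vb()
print(result.summary())
</preformat>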
</sec>
</sec>
<sec id="Sec22" sec-type="discussion">
<title>Discussion</title>
<p>The present study has contributed a number of novel results, demonstrating that the combination of naturalistic diary methods with a quantitative measurement device – in this case an accelerometer – can add a new dimension to research on ephemeral phenomena such as INMI. These data represent, to our knowledge, the first attempt to establish an objective marker of the occurrence and tempo of INMI during everyday life. Despite the lesser degree of experimental control over the research environment as compared to a laboratory setting, over 80 % of the acquired tapping data was usable for analysis. A wide range of INMI tempi, from approximately 40 to 200 bpm, provided a rich source of data suggesting a wide variety of personal inner music experiences.</p>
<sec id="Sec23">
<title>The precision of INMI tempo recall</title>
<p>INMI for music that exists in a canonical version was generally experienced at a tempo very close to the veridical tempo of the song. In previous research on absolute memory for musical tempo, Levitin and Cook (
<xref ref-type="bibr" rid="CR35">1996</xref>
) asked participants to sing self-selected, familiar pop songs and reported that 72 % of trials were within 8 % of the original, recorded tempo. Jakubowski et al. (
<xref ref-type="bibr" rid="CR28">2014</xref>
) asked participants to deliberately imagine and tap to the beat of familiar pop songs and reported a mean absolute deviation from the original recorded tempo of 17.3 %. In the present study, the mean absolute deviation from the original tempo for INMI episodes was 10.8 %, and 77.4 % of songs were recalled within 15 % of the original tempo. These figures suggest that tempo representations within INMI may be as veridical as, or even more veridical than, those within musical imagery that is deliberately imagined in a laboratory context. Both forms of imagery appear to be less temporally veridical than songs produced in a sung recall paradigm (Levitin & Cook,
<xref ref-type="bibr" rid="CR35">1996</xref>
). However, as Levitin and Cook asked participants to sing only two self-selected songs that they knew very well, familiarity or overlearning may have played a role in the higher level of veridical recall observed in their study. Veridicality of tempo recall within the present study may also have been influenced negatively if participants had heard other versions of the songs they reported as INMI. However, as participants were asked to report the performer of the version of the tune they were experiencing as INMI, cover versions should have been reported as such.</p>
<p>Overall, the finding of high temporal veridicality within INMI is particularly striking given that (1) the songs reported as INMI were recalled spontaneously with no instruction for veridical recall
<xref ref-type="fn" rid="Fn7">7</xref>
and (2) the INMI occurred within the context of the external distractions of everyday life. The high veridicality of tempo within INMI suggests a parallel between involuntarily and voluntarily recalled musical memories. This finding also suggests a parallel to involuntary autobiographical memories, which have a tendency to be even
<italic>more</italic>
specific and detailed than voluntary autobiographical memories (Berntsen,
<xref ref-type="bibr" rid="CR8">1998</xref>
; Mace,
<xref ref-type="bibr" rid="CR39">2006</xref>
; Schlagman & Kvavilashvili,
<xref ref-type="bibr" rid="CR48">2008</xref>
). Future studies that directly compare involuntarily and voluntarily generated musical memories within the same participants could shed further light on whether INMI may also be more veridical or vivid in some cases than voluntary musical imagery.</p>
<p>In 40 % of INMI episodes in the present study, participants reported that they had not heard the song that was experienced as INMI in over 1 week. However, INMI for these songs was not experienced at a less veridical tempo than INMI for songs that had been heard more recently. This finding suggests that INMI tunes recalled from long-term memory are temporally precise, and that the overall finding of high veridicality for INMI tempo is not explained solely by those episodes for which an INMI song was heard minutes ago on the radio and might still be held in short-term memory.</p>
<p>Evidence was also found for a regression to the mean for INMI tempo, such that faster songs tended to be recalled slower than their original tempi and slower songs tended to be recalled faster than their original tempi. This parallels previous findings on tempo for voluntary musical imagery (Halpern,
<xref ref-type="bibr" rid="CR22">1988</xref>
). Further research should be conducted to explore the mechanisms underlying this regression toward a mid-range tempo in both spontaneous and deliberate musical imagery, and whether this tendency is related to factors such as one’s natural, spontaneous tapping rate or preferred perceptual tempo (e.g., McAuley et al.,
<xref ref-type="bibr" rid="CR41">2006</xref>
).</p>
<p>An exploratory analysis was conducted to examine the temporal consistency of the eight INMI tunes with usable tempo data that were reported at least three times. The majority of these INMI tunes differed in tempo between the slowest and fastest rendition by less than 20 %; however, two songs varied in tempo by approximately 40 %. For one of these less consistent songs (“For Unto Us a Child is Born”), the participant who reported this song stated in the post-experiment debrief session that he was a pianist who often practiced musical pieces at different tempi when learning them, and that this seemed to influence his subsequent INMI tempi. This is just one example of the many factors that might affect the consistency of INMI tempi but could not be accounted for in the present study. In future research, it may be fruitful to employ a design that aims specifically to examine the issue of temporal consistency. Such a design might involve priming participants with a recording of a song for which only one version exists and asking participants to record the tempo of all subsequent INMI episodes related to that particular song. Controlling the exposure phase of the INMI tune would consequently provide more control over the version of the song that comes to participants’ minds as INMI.</p>
<p>One potential limitation of the present research design is that, as the study was completed outside of the experimenter’s supervision, participants could not be prevented from voluntarily manipulating the tempo of their INMI. However, the present study aimed to combat this issue in several ways: (1) by providing a clear definition of “earworms” to participants that emphasized their involuntary recall as a key feature, (2) by not requiring any sort of button press or marker
<italic>before</italic>
the tapping period, so as to make it easy to start tapping as soon as possible when an INMI episode began, (3) by not revealing the purposes of the study until after it was completed, and (4) by instructing participants specifically to “please tap the beat of the tune as closely as possible as to what you hear in your head.” This last point is similar to the instructions utilized by Ericsson and Simon (
<xref ref-type="bibr" rid="CR14">1993</xref>
) in the “think-aloud” method, which aims to capture participants’ current thoughts from within working memory, rather than their interpretation or justification of these thoughts.</p>
</sec>
<sec id="Sec24">
<title>The relationship between INMI and affective states</title>
<p>Examining the relationships between musical features of INMI and concurrent affective states allowed the present research to begin to unravel the complex question of how endogenous bodily and mental states may interact with spontaneous cognitions. The present results revealed a modest yet significant positive relationship between subjective arousal and INMI tempo. This parallels and strengthens findings on the relationship between musical tempo and arousal in the domain of music listening (Edworthy & Waring,
<xref ref-type="bibr" rid="CR13">2006</xref>
; Husain et al.,
<xref ref-type="bibr" rid="CR26">2002</xref>
; North & Hargreaves,
<xref ref-type="bibr" rid="CR44">2000</xref>
) by demonstrating that even music that is experienced only as imagery (i.e., not heard aloud)
<italic>and</italic>
spontaneously generated displays this significant tempo-arousal relationship. It is not possible to deduce the direction of causality from these results. However, previous studies of perceived music suggest a bidirectional relationship, such that tempo can influence one’s arousal level (e.g., Husain et al.,
<xref ref-type="bibr" rid="CR26">2002</xref>
) and arousal can influence one’s tempo preferences (North & Hargreaves,
<xref ref-type="bibr" rid="CR44">2000</xref>
) or the tempo at which a piece of music sounds “right” (Jakubowski et al.,
<xref ref-type="bibr" rid="CR29">2015</xref>
). As such, a similar bidirectional relationship might exist between musical tempo recalled with INMI and concurrent arousal.</p>
<p>No significant relationship was found between INMI mode and positivity of mood. This was unexpected, given previous evidence of an association between emotional valence and musical mode (Gagnon & Peretz,
<xref ref-type="bibr" rid="CR18">2003</xref>
; Husain et al.,
<xref ref-type="bibr" rid="CR26">2002</xref>
; Webster & Weir,
<xref ref-type="bibr" rid="CR55">2005</xref>
). One potential explanation is that musical mode may be a less prominent or distinctive feature of INMI in comparison to other musical features of the imagery. INMI tempo might be a more salient feature of the imagery than musical mode because INMI tempo is more likely to influence motor responses or, conversely, to be influenced by movements that were occurring before the INMI episode began, thus making it more observable to one’s conscious awareness. This idea is supported by the evidence of a close relationship between musical beat perception and motor areas of the brain (Grahn & Brett,
<xref ref-type="bibr" rid="CR19">2007</xref>
; Grahn & Rowe,
<xref ref-type="bibr" rid="CR20">2009</xref>
), as well as by the finding that approximately one quarter of INMI occurred during a repetitive movement in the present study. The study instructions, in which participants were asked to move to the beat of the music by tapping, may have also increased the salience of INMI tempo over other musical features.</p>
<p>The relationship between INMI tempo and subjective arousal provides initial evidence that INMI might be functionally linked to mood regulation in a similar manner to perceived music. That is, in the absence of an iPod or other music-generating device, spontaneous imagery for music may be able to fill in as a mood regulatory mechanism. Further research should be conducted to examine other musical dimensions of INMI (e.g., lyrical content, timbre, loudness) in order to gain a more complete understanding of the possible relationships between different features of the music experienced as INMI and concurrent mood.</p>
</sec>
<sec id="Sec25">
<title>The INMI experience: Descriptive findings</title>
<p>Finally, several descriptive findings from the diary data corroborated and extended previous research. The wide variety of songs reported (182 in total) and the lack of overlap in INMI tunes between participants are in line with several previous studies, suggesting that almost any song can become INMI and that the INMI experience is highly personal and idiosyncratic (Beaman & Williams,
<xref ref-type="bibr" rid="CR4">2010</xref>
; Williamson & Müllensiefen,
<xref ref-type="bibr" rid="CR59">2012</xref>
). The phenomenon has also been found to be short-lived (Byron & Fowles,
<xref ref-type="bibr" rid="CR12">2013</xref>
), as affirmed by the fact that relatively few songs were reported three or more times as INMI by the same participant.</p>
<p>The present study also served as one of the first diary-based investigations into the triggers of the INMI experience. Recent exposure to music was the most commonly reported INMI trigger, which is in line with a retrospective questionnaire-based method employed by Williamson et al. (
<xref ref-type="bibr" rid="CR57">2012</xref>
). However, it should be noted that, as the present study also included a question about how recently a participant had heard a song aloud, participants might have been somewhat primed towards reporting recent exposure to a song as an INMI trigger. The study also revealed that conscious awareness of an external or internal trigger for the experience was absent in about one quarter of INMI episodes, those in which a participant reported, “I have no idea why this tune came into my head.” Additionally, ten reports of an INMI episode being triggered in relation to a mood state were collected. In subsequent larger-scale studies, an investigation could be conducted into whether these specifically mood-triggered INMI display a stronger relationship between mood ratings and the musical features of INMI than INMI activated by other sources.</p>
<p>Finally, data on concurrent movements during INMI indicated that approximately one quarter of INMI episodes occurred during a repetitive movement, such as walking. These data provide some impetus for future accelerometer-based research: by comparing acceleration patterns before and during INMI, one could investigate how concurrent movement and INMI tempo might influence one another, e.g., whether INMI tempo changes to match a movement or whether a movement becomes more regular when it is made in time with INMI. Such research could provide valuable insights into the interactions between musical imagery and the sensorimotor system.</p>
</sec>
</sec>
<sec id="Sec26" sec-type="conclusion">
<title>Conclusion</title>
<p>The present paper has introduced a novel methodology that opens new avenues for research into dynamic aspects of spontaneous and self-generated thoughts. INMI is an example of a spontaneous cognition with several highly measurable features. The present study specifically investigated tempo-related aspects of the INMI experience within the course of everyday life.</p>
<p>The results of the study demonstrated that INMI is often a highly veridical experience in terms of tempo, even when a song experienced as INMI has not recently been heard aloud. The results also revealed a significant positive relationship between subjective arousal and INMI tempo, suggesting a first link between spontaneous musical imagery and mood that parallels findings in the perceived music domain. Research into temporal aspects of INMI and involuntary memories in general opens an array of important questions that will help further our understanding of endogenous thought processes.</p>
</sec>
</body>
<back>
<app-group>
<app id="App1">
<sec id="Sec27">
<title>Appendix 1: Earworm diary questions</title>
<p>
<graphic position="anchor" xlink:href="13421_2015_531_Figa_HTML.gif" id="MO6"></graphic>
</p>
</sec>
</app>
</app-group>
<fn-group>
<fn id="Fn1">
<label>1</label>
<p>Although the term
<italic>involuntary musical imagery</italic>
(
<italic>INMI</italic>
) is used throughout this paper as a more complete description of the phenomenon, the term
<italic>earworm</italic>
was used in all instructions given to participants, as this represents a more familiar and colloquial term. Hence, the term
<italic>earworm</italic>
will be used in this paper only when describing the instructions given to participants.</p>
</fn>
<fn id="Fn2">
<label>2</label>
<p>Outliers were defined as data points more than 1.5 interquartile ranges below the first quartile or above the third quartile.</p>
</fn>
<fn id="Fn3">
<label>3</label>
<p>
<ext-link ext-link-type="uri" xlink:href="http://www.geneactiv.org/">http://www.geneactiv.org/</ext-link>
</p>
</fn>
<fn id="Fn4">
<label>4</label>
<p>Tap onset times were recorded by detecting a change in the status of the touchpad using the output of the fingerpinger object for MAX (available at
<ext-link ext-link-type="uri" xlink:href="http://www.anyma.ch/2009/research/multitouch-external-for-maxmsp/">http://www.anyma.ch/2009/research/multitouch-external-for-maxmsp/</ext-link>
).</p>
</fn>
<fn id="Fn5">
<label>5</label>
<p>Both of these steps were implemented using the lmax function, available at
<ext-link ext-link-type="uri" xlink:href="http://www.mathworks.com/matlabcentral/fileexchange/3170-local-min-max-nearest-neighbour/content/lmax.m">http://www.mathworks.com/matlabcentral/fileexchange/3170-local-min-max-nearest-neighbour/content/lmax.m</ext-link>
.</p>
</fn>
<fn id="Fn6">
<label>6</label>
<p>In the case of songs with no definitive version, such as classical music and Christmas songs, the coders were requested to find one standard recording, as it was not presumed that mode would change between different recorded versions of the song.</p>
</fn>
<fn id="Fn7">
<label>7</label>
<p>This is in contrast to both laboratory studies reported above (Jakubowski et al.,
<xref ref-type="bibr" rid="CR29">2014</xref>
; Levitin & Cook,
<xref ref-type="bibr" rid="CR35">1996</xref>
), both of which asked participants to recall each song as closely as possible to the original version.</p>
</fn>
</fn-group>
<ack>
<p>A special thanks to Suzanne Capps and Jessica McKenzie for their assistance in coding the musical mode data, and thanks to Daniel Müllensiefen for advice on the statistical analysis. A.H. was partially supported by a Leverhulme Visiting Professorship during a sabbatical residency. This study was funded by a grant from the Leverhulme Trust, reference RPG-297, awarded to author L.S.</p>
</ack>
<ref-list id="Bib1">
<title>References</title>
<ref id="CR1">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bailes</surname>
<given-names>FA</given-names>
</name>
</person-group>
<article-title>The use of experience-sampling methods to monitor musical imagery in everyday life</article-title>
<source>Musicae Scientiae</source>
<year>2006</year>
<volume>10</volume>
<issue>2</issue>
<fpage>173</fpage>
<lpage>190</lpage>
<pub-id pub-id-type="doi">10.1177/102986490601000202</pub-id>
</element-citation>
</ref>
<ref id="CR2">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bailes</surname>
<given-names>F</given-names>
</name>
</person-group>
<article-title>The prevalence and nature of imagined music in the everyday lives of music students</article-title>
<source>Psychology of Music</source>
<year>2007</year>
<volume>35</volume>
<issue>4</issue>
<fpage>555</fpage>
<lpage>570</lpage>
<pub-id pub-id-type="doi">10.1177/0305735607077834</pub-id>
</element-citation>
</ref>
<ref id="CR3">
<mixed-citation publication-type="other">Bailes, F. (2012). Arousal, valence, and the involuntary musical image. In: Cambouropoulos, E., Tsougras, C., Mavromatis, K., Pastiadis, K., editors.
<italic>Proceedings of ICMPC-ESCOM 12</italic>
(Thessaloniki, Greece), 86.</mixed-citation>
</ref>
<ref id="CR4">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Beaman</surname>
<given-names>CP</given-names>
</name>
<name>
<surname>Williams</surname>
<given-names>TI</given-names>
</name>
</person-group>
<article-title>Earworms (stuck song syndrome): Towards a natural history of intrusive thoughts</article-title>
<source>British Journal of Psychology</source>
<year>2010</year>
<volume>101</volume>
<issue>4</issue>
<fpage>637</fpage>
<lpage>653</lpage>
<pub-id pub-id-type="doi">10.1348/000712609X479636</pub-id>
<pub-id pub-id-type="pmid">19948084</pub-id>
</element-citation>
</ref>
<ref id="CR5">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Beaman</surname>
<given-names>CP</given-names>
</name>
<name>
<surname>Williams</surname>
<given-names>TI</given-names>
</name>
</person-group>
<article-title>Individual differences in mental control predict involuntary musical imagery</article-title>
<source>Musicae Scientiae</source>
<year>2013</year>
<volume>17</volume>
<issue>4</issue>
<fpage>398</fpage>
<lpage>409</lpage>
<pub-id pub-id-type="doi">10.1177/1029864913492530</pub-id>
</element-citation>
</ref>
<ref id="CR6">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Beaty</surname>
<given-names>RE</given-names>
</name>
<name>
<surname>Burgin</surname>
<given-names>CJ</given-names>
</name>
<name>
<surname>Nusbaum</surname>
<given-names>EC</given-names>
</name>
<name>
<surname>Kwapil</surname>
<given-names>TR</given-names>
</name>
<name>
<surname>Hodges</surname>
<given-names>DA</given-names>
</name>
<name>
<surname>Silvia</surname>
<given-names>PJ</given-names>
</name>
</person-group>
<article-title>Music to the inner ears: Exploring individual differences in musical imagery</article-title>
<source>Consciousness & Cognition</source>
<year>2013</year>
<volume>22</volume>
<issue>4</issue>
<fpage>1163</fpage>
<lpage>1173</lpage>
<pub-id pub-id-type="doi">10.1016/j.concog.2013.07.006</pub-id>
<pub-id pub-id-type="pmid">24021845</pub-id>
</element-citation>
</ref>
<ref id="CR7">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Benoit</surname>
<given-names>CE</given-names>
</name>
<name>
<surname>Dalla Bella</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Farrugia</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Obrig</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Mainka</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Kotz</surname>
<given-names>SA</given-names>
</name>
</person-group>
<article-title>Musically cued gait-training improves both perceptual and motor timing in Parkinson’s disease</article-title>
<source>Frontiers in Human Neuroscience</source>
<year>2014</year>
<volume>8</volume>
<fpage>494</fpage>
<pub-id pub-id-type="doi">10.3389/fnhum.2014.00494</pub-id>
<pub-id pub-id-type="pmid">25071522</pub-id>
</element-citation>
</ref>
<ref id="CR8">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Berntsen</surname>
<given-names>D</given-names>
</name>
</person-group>
<article-title>Voluntary and involuntary access to autobiographical memory</article-title>
<source>Memory</source>
<year>1998</year>
<volume>6</volume>
<fpage>113</fpage>
<lpage>141</lpage>
<pub-id pub-id-type="doi">10.1080/741942071</pub-id>
<pub-id pub-id-type="pmid">9640425</pub-id>
</element-citation>
</ref>
<ref id="CR9">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Berntsen</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Hall</surname>
<given-names>NM</given-names>
</name>
</person-group>
<article-title>The episodic nature of involuntary autobiographical memories</article-title>
<source>Memory & Cognition</source>
<year>2004</year>
<volume>32</volume>
<issue>5</issue>
<fpage>789</fpage>
<lpage>803</lpage>
<pub-id pub-id-type="doi">10.3758/BF03195869</pub-id>
<pub-id pub-id-type="pmid">15552356</pub-id>
</element-citation>
</ref>
<ref id="CR10">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Berntsen</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Staugaard</surname>
<given-names>SR</given-names>
</name>
<name>
<surname>Sørensen</surname>
<given-names>LM</given-names>
</name>
</person-group>
<article-title>Why am I remembering this now? Predicting the occurrence of involuntary (spontaneous) episodic memories</article-title>
<source>Journal of Experimental Psychology: General</source>
<year>2013</year>
<volume>142</volume>
<issue>2</issue>
<fpage>426</fpage>
<lpage>444</lpage>
<pub-id pub-id-type="doi">10.1037/a0029128</pub-id>
<pub-id pub-id-type="pmid">22746701</pub-id>
</element-citation>
</ref>
<ref id="CR11">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Brown</surname>
<given-names>S</given-names>
</name>
</person-group>
<article-title>The perpetual music track: The phenomenon of constant musical imagery</article-title>
<source>Journal of Consciousness Studies</source>
<year>2006</year>
<volume>13</volume>
<issue>6</issue>
<fpage>43</fpage>
<lpage>62</lpage>
</element-citation>
</ref>
<ref id="CR12">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Byron</surname>
<given-names>TP</given-names>
</name>
<name>
<surname>Fowles</surname>
<given-names>LC</given-names>
</name>
</person-group>
<article-title>Repetition and recency increases involuntary musical imagery of previously unfamiliar songs</article-title>
<source>Psychology of Music</source>
<year>2013</year>
</element-citation>
</ref>
<ref id="CR13">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Edworthy</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Waring</surname>
<given-names>H</given-names>
</name>
</person-group>
<article-title>The effects of music tempo and loudness level on treadmill exercise</article-title>
<source>Ergonomics</source>
<year>2006</year>
<volume>49</volume>
<issue>15</issue>
<fpage>1597</fpage>
<lpage>1610</lpage>
<pub-id pub-id-type="doi">10.1080/00140130600899104</pub-id>
<pub-id pub-id-type="pmid">17090506</pub-id>
</element-citation>
</ref>
<ref id="CR14">
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Ericsson</surname>
<given-names>KA</given-names>
</name>
<name>
<surname>Simon</surname>
<given-names>HA</given-names>
</name>
</person-group>
<source>Protocol analysis: Verbal reports as data</source>
<year>1993</year>
<publisher-loc>Cambridge, MA</publisher-loc>
<publisher-name>MIT Press</publisher-name>
</element-citation>
</ref>
<ref id="CR15">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Floridou</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Müllensiefen</surname>
<given-names>D</given-names>
</name>
</person-group>
<article-title>Environmental and mental conditions predicting the experience of involuntary musical imagery: An experience sampling method study</article-title>
<source>Consciousness and Cognition</source>
<year>2015</year>
<volume>33</volume>
<fpage>472</fpage>
<lpage>486</lpage>
<pub-id pub-id-type="doi">10.1016/j.concog.2015.02.012</pub-id>
<pub-id pub-id-type="pmid">25800098</pub-id>
</element-citation>
</ref>
<ref id="CR16">
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Floridou</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Williamson</surname>
<given-names>VJ</given-names>
</name>
<name>
<surname>Müllensiefen</surname>
<given-names>D</given-names>
</name>
</person-group>
<person-group person-group-type="editor">
<name>
<surname>Cambouropoulos</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Tsougras</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Mavromatis</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Pastiadis</surname>
<given-names>K</given-names>
</name>
</person-group>
<article-title>Contracting earworms: The roles of personality and musicality</article-title>
<source>Proceedings of ICMPC-ESCOM 12</source>
<year>2012</year>
<publisher-loc>Thessaloniki</publisher-loc>
<publisher-name>Greece</publisher-name>
<fpage>302</fpage>
<lpage>310</lpage>
</element-citation>
</ref>
<ref id="CR17">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Frieler</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Fischinger</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Schlemmer</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Lothwesen</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Jakubowski</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Müllensiefen</surname>
<given-names>D</given-names>
</name>
</person-group>
<article-title>Absolute memory for pitch: A comparative replication of Levitin’s 1994 study in six European labs</article-title>
<source>Musicae Scientiae</source>
<year>2013</year>
<volume>17</volume>
<issue>3</issue>
<fpage>334</fpage>
<lpage>349</lpage>
<pub-id pub-id-type="doi">10.1177/1029864913493802</pub-id>
</element-citation>
</ref>
<ref id="CR18">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gagnon</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Peretz</surname>
<given-names>I</given-names>
</name>
</person-group>
<article-title>Mode and tempo relative contributions to “happy-sad” judgements in equitone melodies</article-title>
<source>Cognition & Emotion</source>
<year>2003</year>
<volume>17</volume>
<issue>1</issue>
<fpage>25</fpage>
<lpage>40</lpage>
<pub-id pub-id-type="doi">10.1080/02699930302279</pub-id>
</element-citation>
</ref>
<ref id="CR19">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Grahn</surname>
<given-names>JA</given-names>
</name>
<name>
<surname>Brett</surname>
<given-names>M</given-names>
</name>
</person-group>
<article-title>Rhythm and beat perception in motor areas of the brain</article-title>
<source>Journal of Cognitive Neuroscience</source>
<year>2007</year>
<volume>19</volume>
<issue>5</issue>
<fpage>893</fpage>
<lpage>906</lpage>
<pub-id pub-id-type="doi">10.1162/jocn.2007.19.5.893</pub-id>
<pub-id pub-id-type="pmid">17488212</pub-id>
</element-citation>
</ref>
<ref id="CR20">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Grahn</surname>
<given-names>JA</given-names>
</name>
<name>
<surname>Rowe</surname>
<given-names>JB</given-names>
</name>
</person-group>
<article-title>Feeling the beat: Premotor and striatal interactions in musicians and non-musicians during beat perception</article-title>
<source>Journal of Neuroscience</source>
<year>2009</year>
<volume>29</volume>
<issue>23</issue>
<fpage>7540</fpage>
<lpage>7548</lpage>
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.2018-08.2009</pub-id>
<pub-id pub-id-type="pmid">19515922</pub-id>
</element-citation>
</ref>
<ref id="CR21">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Griffiths</surname>
<given-names>TD</given-names>
</name>
</person-group>
<article-title>Musical hallucinosis in acquired deafness: Phenomenology and brain substrate</article-title>
<source>Brain</source>
<year>2000</year>
<volume>123</volume>
<issue>10</issue>
<fpage>2065</fpage>
<lpage>2076</lpage>
<pub-id pub-id-type="doi">10.1093/brain/123.10.2065</pub-id>
<pub-id pub-id-type="pmid">11004124</pub-id>
</element-citation>
</ref>
<ref id="CR22">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Halpern</surname>
<given-names>AR</given-names>
</name>
</person-group>
<article-title>Perceived and imaged tempos of familiar songs</article-title>
<source>Music Perception</source>
<year>1988</year>
<volume>6</volume>
<fpage>193</fpage>
<lpage>202</lpage>
<pub-id pub-id-type="doi">10.2307/40285425</pub-id>
</element-citation>
</ref>
<ref id="CR23">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Halpern</surname>
<given-names>AR</given-names>
</name>
</person-group>
<article-title>Memory for the absolute pitch of familiar songs</article-title>
<source>Memory & Cognition</source>
<year>1989</year>
<volume>17</volume>
<issue>5</issue>
<fpage>572</fpage>
<lpage>581</lpage>
<pub-id pub-id-type="doi">10.3758/BF03197080</pub-id>
<pub-id pub-id-type="pmid">2796742</pub-id>
</element-citation>
</ref>
<ref id="CR24">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Halpern</surname>
<given-names>AR</given-names>
</name>
<name>
<surname>Bartlett</surname>
<given-names>JC</given-names>
</name>
</person-group>
<article-title>The persistence of musical memories: A descriptive study of earworms</article-title>
<source>Music Perception</source>
<year>2011</year>
<volume>28</volume>
<fpage>425</fpage>
<lpage>432</lpage>
<pub-id pub-id-type="doi">10.1525/mp.2011.28.4.425</pub-id>
</element-citation>
</ref>
<ref id="CR25">
<mixed-citation publication-type="other">Hektner, J.M., Schmidt, J.A., Csikszentmihalyi, M. (Eds.). (2006).
<italic>Experience Sampling Method: Measuring the Quality of Everyday Life</italic>
. Sage.</mixed-citation>
</ref>
<ref id="CR26">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Husain</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Thompson</surname>
<given-names>WF</given-names>
</name>
<name>
<surname>Schellenberg</surname>
<given-names>EG</given-names>
</name>
</person-group>
<article-title>Effects of musical tempo and mode on arousal, mood, and spatial abilities</article-title>
<source>Music Perception</source>
<year>2002</year>
<volume>20</volume>
<fpage>149</fpage>
<lpage>169</lpage>
<pub-id pub-id-type="doi">10.1525/mp.2002.20.2.151</pub-id>
</element-citation>
</ref>
<ref id="CR27">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hyman</surname>
<given-names>IE</given-names>
<suffix>Jr</suffix>
</name>
<name>
<surname>Burland</surname>
<given-names>NK</given-names>
</name>
<name>
<surname>Duskin</surname>
<given-names>HM</given-names>
</name>
<name>
<surname>Cook</surname>
<given-names>MC</given-names>
</name>
<name>
<surname>Roy</surname>
<given-names>CM</given-names>
</name>
<name>
<surname>McGrath</surname>
<given-names>JC</given-names>
</name>
<name>
<surname>Roundhill</surname>
<given-names>RF</given-names>
</name>
</person-group>
<article-title>Going Gaga: Investigating, creating, and manipulating the song stuck in my head</article-title>
<source>Applied Cognitive Psychology</source>
<year>2013</year>
<volume>27</volume>
<issue>2</issue>
<fpage>204</fpage>
<lpage>215</lpage>
<pub-id pub-id-type="doi">10.1002/acp.2897</pub-id>
</element-citation>
</ref>
<ref id="CR28">
<mixed-citation publication-type="other">Jakubowski, K., Farrugia, N., & Stewart, L. (2014). Capturing the speed of music in our heads: Developing methods for measuring the tempo of musical imagery. In: Jakubowski, K., Farrugia, N., Floridou, G.A., & Gagen, J., editors.
<italic>Proceedings of the 7th International Conference of Students of Systematic Musicology.</italic>
(London, UK)</mixed-citation>
</ref>
<ref id="CR29">
<mixed-citation publication-type="other">Jakubowski, K., Halpern, A. R., Grierson, M., & Stewart, L. (2015). The effect of exercise-induced arousal on preferred tempo for familiar melodies.
<italic>Psychonomic Bulletin & Review, 22</italic>
(2), 559–565. doi:10.3758/s13423-014-0687-1</mixed-citation>
</ref>
<ref id="CR30">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Juslin</surname>
<given-names>PN</given-names>
</name>
<name>
<surname>Laukka</surname>
<given-names>P</given-names>
</name>
</person-group>
<article-title>Expression, perception, and induction of musical emotions: A review and a questionnaire study of everyday listening</article-title>
<source>Journal of New Music Research</source>
<year>2004</year>
<volume>33</volume>
<issue>3</issue>
<fpage>217</fpage>
<lpage>238</lpage>
<pub-id pub-id-type="doi">10.1080/0929821042000317813</pub-id>
</element-citation>
</ref>
<ref id="CR31">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Killingsworth</surname>
<given-names>MA</given-names>
</name>
<name>
<surname>Gilbert</surname>
<given-names>DT</given-names>
</name>
</person-group>
<article-title>A wandering mind is an unhappy mind</article-title>
<source>Science</source>
<year>2010</year>
<volume>330</volume>
<fpage>932</fpage>
<pub-id pub-id-type="doi">10.1126/science.1192439</pub-id>
<pub-id pub-id-type="pmid">21071660</pub-id>
</element-citation>
</ref>
<ref id="CR32">
<mixed-citation publication-type="other">Kvavilashvili, L., & Anthony, S. (2012). When do Christmas songs pop into your mind? Testing a long-term priming hypothesis. Poster presented at the Annual Meeting of Psychonomic Society, Minneapolis, Minnesota, US</mixed-citation>
</ref>
<ref id="CR33">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kvavilashvili</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Mandler</surname>
<given-names>G</given-names>
</name>
</person-group>
<article-title>Out of one’s mind: A study of involuntary semantic memories</article-title>
<source>Cognitive Psychology</source>
<year>2004</year>
<volume>48</volume>
<fpage>47</fpage>
<lpage>94</lpage>
<pub-id pub-id-type="doi">10.1016/S0010-0285(03)00115-4</pub-id>
<pub-id pub-id-type="pmid">14654036</pub-id>
</element-citation>
</ref>
<ref id="CR34">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Levitin</surname>
<given-names>DJ</given-names>
</name>
</person-group>
<article-title>Absolute memory for musical pitch: Evidence from the production of learned melodies</article-title>
<source>Perception & Psychophysics</source>
<year>1994</year>
<volume>56</volume>
<issue>4</issue>
<fpage>414</fpage>
<lpage>423</lpage>
<pub-id pub-id-type="doi">10.3758/BF03206733</pub-id>
<pub-id pub-id-type="pmid">7984397</pub-id>
</element-citation>
</ref>
<ref id="CR35">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Levitin</surname>
<given-names>DJ</given-names>
</name>
<name>
<surname>Cook</surname>
<given-names>PR</given-names>
</name>
</person-group>
<article-title>Memory for musical tempo: Additional evidence that auditory memory is absolute</article-title>
<source>Perception & Psychophysics</source>
<year>1996</year>
<volume>58</volume>
<issue>6</issue>
<fpage>927</fpage>
<lpage>935</lpage>
<pub-id pub-id-type="doi">10.3758/BF03205494</pub-id>
<pub-id pub-id-type="pmid">8768187</pub-id>
</element-citation>
</ref>
<ref id="CR36">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Liikkanen</surname>
<given-names>LA</given-names>
</name>
</person-group>
<article-title>Musical activities predispose to involuntary musical imagery</article-title>
<source>Psychology of Music</source>
<year>2012</year>
<volume>40</volume>
<issue>2</issue>
<fpage>236</fpage>
<lpage>256</lpage>
<pub-id pub-id-type="doi">10.1177/0305735611406578</pub-id>
</element-citation>
</ref>
<ref id="CR37">
<mixed-citation publication-type="other">Liikkanen, L.A., Jakubowski, K., & Toivanen, J. (in press). Catching earworms on Twitter: Using big data to study involuntary musical imagery.
<italic>Music Perception.</italic>
</mixed-citation>
</ref>
<ref id="CR38">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lucas</surname>
<given-names>BL</given-names>
</name>
<name>
<surname>Schubert</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Halpern</surname>
<given-names>AR</given-names>
</name>
</person-group>
<article-title>Perception of emotion in sounded and imagined music</article-title>
<source>Music Perception</source>
<year>2010</year>
<volume>27</volume>
<issue>5</issue>
<fpage>399</fpage>
<lpage>412</lpage>
<pub-id pub-id-type="doi">10.1525/mp.2010.27.5.399</pub-id>
</element-citation>
</ref>
<ref id="CR39">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mace</surname>
<given-names>JH</given-names>
</name>
</person-group>
<article-title>Episodic remembering creates access to involuntary conscious memory: Demonstrating involuntary recall on a voluntary recall task</article-title>
<source>Memory</source>
<year>2006</year>
<volume>14</volume>
<issue>8</issue>
<fpage>917</fpage>
<lpage>924</lpage>
<pub-id pub-id-type="doi">10.1080/09658210600759766</pub-id>
<pub-id pub-id-type="pmid">17077027</pub-id>
</element-citation>
</ref>
<ref id="CR40">
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Margulis</surname>
<given-names>EH</given-names>
</name>
</person-group>
<source>On repeat: How music plays the mind</source>
<year>2014</year>
<publisher-loc>New York</publisher-loc>
<publisher-name>Oxford University Press</publisher-name>
</element-citation>
</ref>
<ref id="CR41">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>McAuley</surname>
<given-names>JD</given-names>
</name>
<name>
<surname>Jones</surname>
<given-names>MR</given-names>
</name>
<name>
<surname>Holub</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Johnston</surname>
<given-names>HM</given-names>
</name>
<name>
<surname>Miller</surname>
<given-names>NS</given-names>
</name>
</person-group>
<article-title>The time of our lives: Life span development of timing and event tracking</article-title>
<source>Journal of Experimental Psychology: General</source>
<year>2006</year>
<volume>135</volume>
<fpage>348</fpage>
<lpage>367</lpage>
<pub-id pub-id-type="doi">10.1037/0096-3445.135.3.348</pub-id>
<pub-id pub-id-type="pmid">16846269</pub-id>
</element-citation>
</ref>
<ref id="CR42">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>McVay</surname>
<given-names>JC</given-names>
</name>
<name>
<surname>Kane</surname>
<given-names>MJ</given-names>
</name>
<name>
<surname>Kwapil</surname>
<given-names>TR</given-names>
</name>
</person-group>
<article-title>Tracking the train of thought from the laboratory into everyday life: An experience-sampling study of mind wandering across controlled and ecological contexts</article-title>
<source>Psychonomic Bulletin & Review</source>
<year>2009</year>
<volume>16</volume>
<fpage>857</fpage>
<lpage>863</lpage>
<pub-id pub-id-type="doi">10.3758/PBR.16.5.857</pub-id>
</element-citation>
</ref>
<ref id="CR43">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Müllensiefen</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Fry</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Jones</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Jilka</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Stewart</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Williamson</surname>
<given-names>V</given-names>
</name>
</person-group>
<article-title>Individual differences predict patterns in spontaneous involuntary musical imagery</article-title>
<source>Music Perception</source>
<year>2014</year>
<volume>31</volume>
<issue>4</issue>
<fpage>323</fpage>
<lpage>338</lpage>
<pub-id pub-id-type="doi">10.1525/mp.2014.31.4.323</pub-id>
</element-citation>
</ref>
<ref id="CR44">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>North</surname>
<given-names>AC</given-names>
</name>
<name>
<surname>Hargreaves</surname>
<given-names>DJ</given-names>
</name>
</person-group>
<article-title>Musical preferences during and after relaxation and exercise</article-title>
<source>The American Journal of Psychology</source>
<year>2000</year>
<volume>113</volume>
<issue>1</issue>
<fpage>43</fpage>
<lpage>67</lpage>
<pub-id pub-id-type="doi">10.2307/1423460</pub-id>
<pub-id pub-id-type="pmid">10742843</pub-id>
</element-citation>
</ref>
<ref id="CR45">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rowlands</surname>
<given-names>AV</given-names>
</name>
<name>
<surname>Schuna</surname>
<given-names>JM</given-names>
</name>
<name>
<surname>Stiles</surname>
<given-names>VH</given-names>
</name>
<name>
<surname>Tudor-Locke</surname>
<given-names>C</given-names>
</name>
</person-group>
<article-title>Cadence, peak vertical acceleration, and peak loading rate during ambulatory activities: Implications for activity prescription for bone health</article-title>
<source>Journal of Physical Activity and Health</source>
<year>2014</year>
<volume>11</volume>
<fpage>1291</fpage>
<lpage>1294</lpage>
<pub-id pub-id-type="doi">10.1123/jpah.2012-0402</pub-id>
<pub-id pub-id-type="pmid">24184713</pub-id>
</element-citation>
</ref>
<ref id="CR46">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Saarikallio</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Erkkilä</surname>
<given-names>J</given-names>
</name>
</person-group>
<article-title>The role of music in adolescents’ mood regulation</article-title>
<source>Psychology of Music</source>
<year>2007</year>
<volume>35</volume>
<issue>1</issue>
<fpage>88</fpage>
<lpage>109</lpage>
<pub-id pub-id-type="doi">10.1177/0305735607068889</pub-id>
</element-citation>
</ref>
<ref id="CR47">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Schellenberg</surname>
<given-names>EG</given-names>
</name>
<name>
<surname>von Scheve</surname>
<given-names>C</given-names>
</name>
</person-group>
<article-title>Emotional cues in American popular music: Five decades of the Top 40</article-title>
<source>Psychology of Aesthetics, Creativity, and the Arts</source>
<year>2012</year>
<volume>6</volume>
<fpage>196</fpage>
<lpage>203</lpage>
<pub-id pub-id-type="doi">10.1037/a0028024</pub-id>
</element-citation>
</ref>
<ref id="CR48">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Schlagman</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Kvavilashvili</surname>
<given-names>L</given-names>
</name>
</person-group>
<article-title>Involuntary autobiographical memories in and outside the laboratory: How different are they from voluntary autobiographical memories?</article-title>
<source>Memory & Cognition</source>
<year>2008</year>
<volume>36</volume>
<issue>5</issue>
<fpage>920</fpage>
<lpage>932</lpage>
<pub-id pub-id-type="doi">10.3758/MC.36.5.920</pub-id>
<pub-id pub-id-type="pmid">18630199</pub-id>
</element-citation>
</ref>
<ref id="CR49">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sloboda</surname>
<given-names>JA</given-names>
</name>
<name>
<surname>O’Neill</surname>
<given-names>SA</given-names>
</name>
<name>
<surname>Ivaldi</surname>
<given-names>A</given-names>
</name>
</person-group>
<article-title>Functions of music in everyday life: An exploratory study using the Experience Sampling Method</article-title>
<source>Musicae Scientiae</source>
<year>2001</year>
<volume>5</volume>
<issue>1</issue>
<fpage>9</fpage>
<lpage>32</lpage>
</element-citation>
</ref>
<ref id="CR50">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Smallwood</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Schooler</surname>
<given-names>JW</given-names>
</name>
</person-group>
<article-title>The restless mind</article-title>
<source>Psychological Bulletin</source>
<year>2006</year>
<volume>132</volume>
<fpage>946</fpage>
<lpage>958</lpage>
<pub-id pub-id-type="doi">10.1037/0033-2909.132.6.946</pub-id>
<pub-id pub-id-type="pmid">17073528</pub-id>
</element-citation>
</ref>
<ref id="CR51">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Smallwood</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Schooler</surname>
<given-names>JW</given-names>
</name>
</person-group>
<article-title>The science of mind wandering: Empirically navigating the stream of consciousness</article-title>
<source>Annual Review of Psychology</source>
<year>2015</year>
<volume>66</volume>
<fpage>487</fpage>
<lpage>518</lpage>
<pub-id pub-id-type="doi">10.1146/annurev-psych-010814-015331</pub-id>
<pub-id pub-id-type="pmid">25293689</pub-id>
</element-citation>
</ref>
<ref id="CR52">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sowiński</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Dalla Bella</surname>
<given-names>S</given-names>
</name>
</person-group>
<article-title>Poor synchronization to the beat may result from deficient auditory-motor mapping</article-title>
<source>Neuropsychologia</source>
<year>2013</year>
<volume>51</volume>
<issue>10</issue>
<fpage>1952</fpage>
<lpage>1963</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2013.06.027</pub-id>
<pub-id pub-id-type="pmid">23838002</pub-id>
</element-citation>
</ref>
<ref id="CR53">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tarrant</surname>
<given-names>M</given-names>
</name>
<name>
<surname>North</surname>
<given-names>AC</given-names>
</name>
<name>
<surname>Hargreaves</surname>
<given-names>DJ</given-names>
</name>
</person-group>
<article-title>English and American adolescents' reasons for listening to music</article-title>
<source>Psychology of Music</source>
<year>2000</year>
<volume>28</volume>
<issue>2</issue>
<fpage>166</fpage>
<lpage>173</lpage>
<pub-id pub-id-type="doi">10.1177/0305735600282005</pub-id>
</element-citation>
</ref>
<ref id="CR54">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wammes</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Barušs</surname>
<given-names>I</given-names>
</name>
</person-group>
<article-title>Characteristics of spontaneous musical imagery</article-title>
<source>Journal of Consciousness Studies</source>
<year>2009</year>
<volume>16</volume>
<issue>1</issue>
<fpage>37</fpage>
<lpage>61</lpage>
</element-citation>
</ref>
<ref id="CR55">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Webster</surname>
<given-names>GD</given-names>
</name>
<name>
<surname>Weir</surname>
<given-names>CD</given-names>
</name>
</person-group>
<article-title>Emotional responses to music: Interactive effects of mode, texture, and tempo</article-title>
<source>Motivation and Emotion</source>
<year>2005</year>
<volume>29</volume>
<issue>1</issue>
<fpage>19</fpage>
<lpage>39</lpage>
<pub-id pub-id-type="doi">10.1007/s11031-005-4414-0</pub-id>
</element-citation>
</ref>
<ref id="CR56">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Williamson</surname>
<given-names>VJ</given-names>
</name>
<name>
<surname>Jilka</surname>
<given-names>SR</given-names>
</name>
</person-group>
<article-title>Experiencing earworms: An interview study of involuntary musical imagery</article-title>
<source>Psychology of Music</source>
<year>2013</year>
<volume>42</volume>
<issue>5</issue>
<fpage>653</fpage>
<lpage>670</lpage>
<pub-id pub-id-type="doi">10.1177/0305735613483848</pub-id>
</element-citation>
</ref>
<ref id="CR57">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Williamson</surname>
<given-names>VJ</given-names>
</name>
<name>
<surname>Jilka</surname>
<given-names>SR</given-names>
</name>
<name>
<surname>Fry</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Finkel</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Müllensiefen</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Stewart</surname>
<given-names>L</given-names>
</name>
</person-group>
<article-title>How do earworms start? Classifying the everyday circumstances of involuntary musical imagery</article-title>
<source>Psychology of Music</source>
<year>2012</year>
<volume>40</volume>
<issue>3</issue>
<fpage>259</fpage>
<lpage>284</lpage>
<pub-id pub-id-type="doi">10.1177/0305735611418553</pub-id>
</element-citation>
</ref>
<ref id="CR58">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Williamson</surname>
<given-names>V</given-names>
</name>
<name>
<surname>Liikkanen</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Jakubowski</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Stewart</surname>
<given-names>L</given-names>
</name>
</person-group>
<article-title>Sticky tunes: How do people react to involuntary musical imagery?</article-title>
<source>PLoS ONE</source>
<year>2014</year>
<volume>9</volume>
<issue>1</issue>
<elocation-id>e86170</elocation-id>
<pub-id pub-id-type="doi">10.1371/journal.pone.0086170</pub-id>
<pub-id pub-id-type="pmid">24497938</pub-id>
</element-citation>
</ref>
<ref id="CR59">
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Williamson</surname>
<given-names>VJ</given-names>
</name>
<name>
<surname>Müllensiefen</surname>
<given-names>D</given-names>
</name>
</person-group>
<person-group person-group-type="editor">
<name>
<surname>Cambouropoulos</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Tsougras</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Mavromatis</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Pastiadis</surname>
<given-names>K</given-names>
</name>
</person-group>
<article-title>Earworms from three angles</article-title>
<source>Proceedings of ICMPC-ESCOM 12</source>
<year>2012</year>
<publisher-loc>Thessaloniki, Greece</publisher-loc>
<fpage>1124</fpage>
<lpage>1133</lpage>
</element-citation>
</ref>
<ref id="CR60">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zatorre</surname>
<given-names>RJ</given-names>
</name>
<name>
<surname>Halpern</surname>
<given-names>AR</given-names>
</name>
</person-group>
<article-title>Mental concerts: Musical imagery and auditory cortex</article-title>
<source>Neuron</source>
<year>2005</year>
<volume>47</volume>
<fpage>9</fpage>
<lpage>12</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuron.2005.06.013</pub-id>
<pub-id pub-id-type="pmid">15996544</pub-id>
</element-citation>
</ref>
<ref id="CR61">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zelaznik</surname>
<given-names>HN</given-names>
</name>
<name>
<surname>Rosenbaum</surname>
<given-names>DA</given-names>
</name>
</person-group>
<article-title>Timing processes are correlated when tasks share a salient event</article-title>
<source>Journal of Experimental Psychology: Human Perception & Performance</source>
<year>2010</year>
<volume>36</volume>
<fpage>1565</fpage>
<lpage>1575</lpage>
<pub-id pub-id-type="pmid">20731516</pub-id>
</element-citation>
</ref>
<ref id="CR62">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhang</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Rowlands</surname>
<given-names>AV</given-names>
</name>
<name>
<surname>Murray</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Hurst</surname>
<given-names>TL</given-names>
</name>
</person-group>
<article-title>Physical activity classification using the GENEA wrist-worn accelerometer</article-title>
<source>Medicine & Science in Sports & Exercise</source>
<year>2012</year>
<volume>44</volume>
<issue>4</issue>
<fpage>742</fpage>
<lpage>748</lpage>
<pub-id pub-id-type="doi">10.1249/MSS.0b013e31823bf95c</pub-id>
<pub-id pub-id-type="pmid">21988935</pub-id>
</element-citation>
</ref>
</ref-list>
</back>
</pmc>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Wicri/Sarre/explor/MusicSarreV3/Data/Pmc/Corpus
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000047 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Corpus/biblio.hfd -nk 000047 | SxmlIndent | more
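
The record body is plain JATS-style XML, so ordinary Unix text tools can also query it directly. As a minimal sketch (independent of Dilib, and assuming the XML record above has been saved locally as record.xml, a hypothetical filename), the following command prints every reference DOI from the ref-list, one per line:

# record.xml is a hypothetical local copy of the XML record shown above
sed -n 's:.*<pub-id pub-id-type="doi">\([^<]*\)</pub-id>.*:\1:p' record.xml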

To add a link to this page within the Wicri network

{{Explor lien
   |wiki=    Wicri/Sarre
   |area=    MusicSarreV3
   |flux=    Pmc
   |étape=   Corpus
   |type=    RBID
   |clé=     PMC:4624826
   |texte=   The speed of our mental soundtracks: Tracking the tempo of involuntary musical imagery in everyday life
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Corpus/RBID.i   -Sk "pubmed:26122757" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Corpus/biblio.hfd   \
       | NlmPubMed2Wicri -a MusicSarreV3 

Wicri

This area was generated with Dilib version V0.6.33.
Data generation: Sun Jul 15 18:16:09 2018. Site generation: Tue Mar 5 19:21:25 2024