Dance Therapy and Parkinson's Disease


Quantifying Auditory Temporal Stability in a Large Database of Recorded Music

Internal identifier: 000017 (Pmc/Corpus); previous: 000016; next: 000018


Authors: Robert J. Ellis; Zhiyan Duan; Ye Wang

Source:

RBID: PMC:4254286

Abstract

“Moving to the beat” is both one of the most basic and one of the most profound means by which humans (and a few other species) interact with music. Computer algorithms that detect the precise temporal location of beats (i.e., pulses of musical “energy”) in recorded music have important practical applications, such as the creation of playlists with a particular tempo for rehabilitation (e.g., rhythmic gait training), exercise (e.g., jogging), or entertainment (e.g., continuous dance mixes). Although several such algorithms return simple point estimates of an audio file’s temporal structure (e.g., “average tempo”, “time signature”), none has sought to quantify the temporal stability of a series of detected beats. Such a method, a “Balanced Evaluation of Auditory Temporal Stability” (BEATS), is proposed here, and is illustrated using the Million Song Dataset (a collection of audio features and music metadata for nearly one million audio files). A publicly accessible web interface is also presented, which combines the thresholdable statistics of BEATS with queryable metadata terms, fostering potential avenues of research and facilitating the creation of highly personalized music playlists for clinical or recreational applications.


URL: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4254286
DOI: 10.1371/journal.pone.0110452
PubMed: 25469636
PubMed Central: 4254286

Links to Exploration step

PMC:4254286

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Quantifying Auditory Temporal Stability in a Large Database of Recorded Music</title>
<author>
<name sortKey="Ellis, Robert J" sort="Ellis, Robert J" uniqKey="Ellis R" first="Robert J." last="Ellis">Robert J. Ellis</name>
<affiliation>
<nlm:aff id="aff1"></nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Duan, Zhiyan" sort="Duan, Zhiyan" uniqKey="Duan Z" first="Zhiyan" last="Duan">Zhiyan Duan</name>
<affiliation>
<nlm:aff id="aff1"></nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Wang, Ye" sort="Wang, Ye" uniqKey="Wang Y" first="Ye" last="Wang">Ye Wang</name>
<affiliation>
<nlm:aff id="aff1"></nlm:aff>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">25469636</idno>
<idno type="pmc">4254286</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4254286</idno>
<idno type="RBID">PMC:4254286</idno>
<idno type="doi">10.1371/journal.pone.0110452</idno>
<date when="2014">2014</date>
<idno type="wicri:Area/Pmc/Corpus">000017</idno>
<idno type="wicri:explorRef" wicri:stream="Pmc" wicri:step="Corpus" wicri:corpus="PMC">000017</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Quantifying Auditory Temporal Stability in a Large Database of Recorded Music</title>
<author>
<name sortKey="Ellis, Robert J" sort="Ellis, Robert J" uniqKey="Ellis R" first="Robert J." last="Ellis">Robert J. Ellis</name>
<affiliation>
<nlm:aff id="aff1"></nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Duan, Zhiyan" sort="Duan, Zhiyan" uniqKey="Duan Z" first="Zhiyan" last="Duan">Zhiyan Duan</name>
<affiliation>
<nlm:aff id="aff1"></nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Wang, Ye" sort="Wang, Ye" uniqKey="Wang Y" first="Ye" last="Wang">Ye Wang</name>
<affiliation>
<nlm:aff id="aff1"></nlm:aff>
</affiliation>
</author>
</analytic>
<series>
<title level="j">PLoS ONE</title>
<idno type="eISSN">1932-6203</idno>
<imprint>
<date when="2014">2014</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>“Moving to the beat” is both one of the most basic and one of the most profound means by which humans (and a few other species) interact with music. Computer algorithms that detect the precise temporal location of beats (i.e., pulses of musical “energy”) in recorded music have important practical applications, such as the creation of playlists with a particular tempo for rehabilitation (e.g., rhythmic gait training), exercise (e.g., jogging), or entertainment (e.g., continuous dance mixes). Although several such algorithms return simple point estimates of an audio file’s temporal structure (e.g., “average tempo”, “time signature”), none has sought to quantify the temporal
<italic>stability</italic>
of a series of detected beats. Such a method, a “Balanced Evaluation of Auditory Temporal Stability” (BEATS), is proposed here, and is illustrated using the Million Song Dataset (a collection of audio features and music metadata for nearly one million audio files). A publicly accessible web interface is also presented, which combines the thresholdable statistics of BEATS with queryable metadata terms, fostering potential avenues of research and facilitating the creation of highly personalized music playlists for clinical or recreational applications.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Merker, Bh" uniqKey="Merker B">BH Merker</name>
</author>
<author>
<name sortKey="Madison, Gs" uniqKey="Madison G">GS Madison</name>
</author>
<author>
<name sortKey="Eckerdal, P" uniqKey="Eckerdal P">P Eckerdal</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Winkler, I" uniqKey="Winkler I">I Winkler</name>
</author>
<author>
<name sortKey="Haden, Gp" uniqKey="Haden G">GP Háden</name>
</author>
<author>
<name sortKey="Ladinig, O" uniqKey="Ladinig O">O Ladinig</name>
</author>
<author>
<name sortKey="Sziller, I" uniqKey="Sziller I">I Sziller</name>
</author>
<author>
<name sortKey="Honing, H" uniqKey="Honing H">H Honing</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zentner, M" uniqKey="Zentner M">M Zentner</name>
</author>
<author>
<name sortKey="Eerola, T" uniqKey="Eerola T">T Eerola</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ellis, Rj" uniqKey="Ellis R">RJ Ellis</name>
</author>
<author>
<name sortKey="Jones, Mr" uniqKey="Jones M">MR Jones</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Honing, H" uniqKey="Honing H">H Honing</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Janata, P" uniqKey="Janata P">P Janata</name>
</author>
<author>
<name sortKey="Tomic, St" uniqKey="Tomic S">ST Tomic</name>
</author>
<author>
<name sortKey="Haberman, Jm" uniqKey="Haberman J">JM Haberman</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Repp, Bh" uniqKey="Repp B">BH Repp</name>
</author>
<author>
<name sortKey="Su, Y H" uniqKey="Su Y">Y-H Su</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Karageorghis, Ci" uniqKey="Karageorghis C">CI Karageorghis</name>
</author>
<author>
<name sortKey="Priest, D L" uniqKey="Priest D">D-L Priest</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Karageorghis, Ci" uniqKey="Karageorghis C">CI Karageorghis</name>
</author>
<author>
<name sortKey="Priest, D L" uniqKey="Priest D">D-L Priest</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Karageorghis, Ci" uniqKey="Karageorghis C">CI Karageorghis</name>
</author>
<author>
<name sortKey="Terry, Pc" uniqKey="Terry P">PC Terry</name>
</author>
<author>
<name sortKey="Lane, Am" uniqKey="Lane A">AM Lane</name>
</author>
<author>
<name sortKey="Bishop, Dt" uniqKey="Bishop D">DT Bishop</name>
</author>
<author>
<name sortKey="Priest, D" uniqKey="Priest D">D Priest</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Barnes, R" uniqKey="Barnes R">R Barnes</name>
</author>
<author>
<name sortKey="Jones, Mr" uniqKey="Jones M">MR Jones</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jones, Mr" uniqKey="Jones M">MR Jones</name>
</author>
<author>
<name sortKey="Boltz, M" uniqKey="Boltz M">M Boltz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Large, Ew" uniqKey="Large E">EW Large</name>
</author>
<author>
<name sortKey="Jones, Mr" uniqKey="Jones M">MR Jones</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Salimpoor, Vn" uniqKey="Salimpoor V">VN Salimpoor</name>
</author>
<author>
<name sortKey="Benovoy, M" uniqKey="Benovoy M">M Benovoy</name>
</author>
<author>
<name sortKey="Longo, G" uniqKey="Longo G">G Longo</name>
</author>
<author>
<name sortKey="Cooperstock, Jr" uniqKey="Cooperstock J">JR Cooperstock</name>
</author>
<author>
<name sortKey="Zatorre, Rj" uniqKey="Zatorre R">RJ Zatorre</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Thompson, Wf" uniqKey="Thompson W">WF Thompson</name>
</author>
<author>
<name sortKey="Schellenberg, Eg" uniqKey="Schellenberg E">EG Schellenberg</name>
</author>
<author>
<name sortKey="Husain, G" uniqKey="Husain G">G Husain</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Copeland, Bl" uniqKey="Copeland B">BL Copeland</name>
</author>
<author>
<name sortKey="Franks, Bd" uniqKey="Franks B">BD Franks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brownley, Ka" uniqKey="Brownley K">KA Brownley</name>
</author>
<author>
<name sortKey="Mcmurray, Rg" uniqKey="Mcmurray R">RG McMurray</name>
</author>
<author>
<name sortKey="Hackney, Ac" uniqKey="Hackney A">AC Hackney</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Johnson, G" uniqKey="Johnson G">G Johnson</name>
</author>
<author>
<name sortKey="Otto, D" uniqKey="Otto D">D Otto</name>
</author>
<author>
<name sortKey="Clair, Aa" uniqKey="Clair A">AA Clair</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sneden Riley, J" uniqKey="Sneden Riley J">J Sneden-Riley</name>
</author>
<author>
<name sortKey="Waters, L" uniqKey="Waters L">L Waters</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Guzman Garcia, A" uniqKey="Guzman Garcia A">A Guzmán-García</name>
</author>
<author>
<name sortKey="Hughes, Jc" uniqKey="Hughes J">JC Hughes</name>
</author>
<author>
<name sortKey="James, Ia" uniqKey="James I">IA James</name>
</author>
<author>
<name sortKey="Rochester, L" uniqKey="Rochester L">L Rochester</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Verghese, J" uniqKey="Verghese J">J Verghese</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Morris, Me" uniqKey="Morris M">ME Morris</name>
</author>
<author>
<name sortKey="Iansek, R" uniqKey="Iansek R">R Iansek</name>
</author>
<author>
<name sortKey="Matyas, Ta" uniqKey="Matyas T">TA Matyas</name>
</author>
<author>
<name sortKey="Summers, Jj" uniqKey="Summers J">JJ Summers</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Thaut, Mh" uniqKey="Thaut M">MH Thaut</name>
</author>
<author>
<name sortKey="Mcintosh, Gc" uniqKey="Mcintosh G">GC McIntosh</name>
</author>
<author>
<name sortKey="Rice, Rr" uniqKey="Rice R">RR Rice</name>
</author>
<author>
<name sortKey="Miller, Ra" uniqKey="Miller R">RA Miller</name>
</author>
<author>
<name sortKey="Rathbun, J" uniqKey="Rathbun J">J Rathbun</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="De Bruin, N" uniqKey="De Bruin N">N De Bruin</name>
</author>
<author>
<name sortKey="Doan, Jb" uniqKey="Doan J">JB Doan</name>
</author>
<author>
<name sortKey="Turnbull, G" uniqKey="Turnbull G">G Turnbull</name>
</author>
<author>
<name sortKey="Suchowersky, O" uniqKey="Suchowersky O">O Suchowersky</name>
</author>
<author>
<name sortKey="Bonfield, S" uniqKey="Bonfield S">S Bonfield</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pacchetti, C" uniqKey="Pacchetti C">C Pacchetti</name>
</author>
<author>
<name sortKey="Mancini, F" uniqKey="Mancini F">F Mancini</name>
</author>
<author>
<name sortKey="Aglieri, R" uniqKey="Aglieri R">R Aglieri</name>
</author>
<author>
<name sortKey="Fundar, C" uniqKey="Fundar C">C Fundarò</name>
</author>
<author>
<name sortKey="Martignoni, E" uniqKey="Martignoni E">E Martignoni</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lim, I" uniqKey="Lim I">I Lim</name>
</author>
<author>
<name sortKey="Van Wegen, E" uniqKey="Van Wegen E">E Van Wegen</name>
</author>
<author>
<name sortKey="De Goede, C" uniqKey="De Goede C">C De Goede</name>
</author>
<author>
<name sortKey="Deutekom, M" uniqKey="Deutekom M">M Deutekom</name>
</author>
<author>
<name sortKey="Nieuwboer, A" uniqKey="Nieuwboer A">A Nieuwboer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rubinstein, Tc" uniqKey="Rubinstein T">TC Rubinstein</name>
</author>
<author>
<name sortKey="Giladi, N" uniqKey="Giladi N">N Giladi</name>
</author>
<author>
<name sortKey="Hausdorff, Jm" uniqKey="Hausdorff J">JM Hausdorff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="De Dreu, Mj" uniqKey="De Dreu M">MJ De Dreu</name>
</author>
<author>
<name sortKey="Van Der Wilk, Asd" uniqKey="Van Der Wilk A">ASD van der Wilk</name>
</author>
<author>
<name sortKey="Poppe, E" uniqKey="Poppe E">E Poppe</name>
</author>
<author>
<name sortKey="Kwakkel, G" uniqKey="Kwakkel G">G Kwakkel</name>
</author>
<author>
<name sortKey="Van Wegen, Eeh" uniqKey="Van Wegen E">EEH van Wegen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Spaulding, Sj" uniqKey="Spaulding S">SJ Spaulding</name>
</author>
<author>
<name sortKey="Barber, B" uniqKey="Barber B">B Barber</name>
</author>
<author>
<name sortKey="Colby, M" uniqKey="Colby M">M Colby</name>
</author>
<author>
<name sortKey="Cormack, B" uniqKey="Cormack B">B Cormack</name>
</author>
<author>
<name sortKey="Mick, T" uniqKey="Mick T">T Mick</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hausdorff, J" uniqKey="Hausdorff J">J Hausdorff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hausdorff, Jm" uniqKey="Hausdorff J">JM Hausdorff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hausdorff, Jm" uniqKey="Hausdorff J">JM Hausdorff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schaafsma, Jd" uniqKey="Schaafsma J">JD Schaafsma</name>
</author>
<author>
<name sortKey="Giladi, N" uniqKey="Giladi N">N Giladi</name>
</author>
<author>
<name sortKey="Balash, Y" uniqKey="Balash Y">Y Balash</name>
</author>
<author>
<name sortKey="Bartels, Al" uniqKey="Bartels A">AL Bartels</name>
</author>
<author>
<name sortKey="Gurevich, T" uniqKey="Gurevich T">T Gurevich</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hausdorff, Jm" uniqKey="Hausdorff J">JM Hausdorff</name>
</author>
<author>
<name sortKey="Rios, Da" uniqKey="Rios D">DA Rios</name>
</author>
<author>
<name sortKey="Edelberg, Hk" uniqKey="Edelberg H">HK Edelberg</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Davis, Jc" uniqKey="Davis J">JC Davis</name>
</author>
<author>
<name sortKey="Robertson, Mc" uniqKey="Robertson M">MC Robertson</name>
</author>
<author>
<name sortKey="Ashe, Mc" uniqKey="Ashe M">MC Ashe</name>
</author>
<author>
<name sortKey="Liu Ambrose, T" uniqKey="Liu Ambrose T">T Liu-Ambrose</name>
</author>
<author>
<name sortKey="Khan, Km" uniqKey="Khan K">KM Khan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bloem, Br" uniqKey="Bloem B">BR Bloem</name>
</author>
<author>
<name sortKey="Hausdorff, Jm" uniqKey="Hausdorff J">JM Hausdorff</name>
</author>
<author>
<name sortKey="Visser, Je" uniqKey="Visser J">JE Visser</name>
</author>
<author>
<name sortKey="Giladi, N" uniqKey="Giladi N">N Giladi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Delval, A" uniqKey="Delval A">A Delval</name>
</author>
<author>
<name sortKey="Krystkowiak, P" uniqKey="Krystkowiak P">P Krystkowiak</name>
</author>
<author>
<name sortKey="Delliaux, M" uniqKey="Delliaux M">M Delliaux</name>
</author>
<author>
<name sortKey="Blatt, J L" uniqKey="Blatt J">J-L Blatt</name>
</author>
<author>
<name sortKey="Derambure, P" uniqKey="Derambure P">P Derambure</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Thaut, Mh" uniqKey="Thaut M">MH Thaut</name>
</author>
<author>
<name sortKey="Miltner, R" uniqKey="Miltner R">R Miltner</name>
</author>
<author>
<name sortKey="Lange, Hw" uniqKey="Lange H">HW Lange</name>
</author>
<author>
<name sortKey="Hurt, Cp" uniqKey="Hurt C">CP Hurt</name>
</author>
<author>
<name sortKey="Hoemberg, V" uniqKey="Hoemberg V">V Hoemberg</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Thaut, Mh" uniqKey="Thaut M">MH Thaut</name>
</author>
<author>
<name sortKey="Mcintosh, Gc" uniqKey="Mcintosh G">GC McIntosh</name>
</author>
<author>
<name sortKey="Prassas, Sg" uniqKey="Prassas S">SG Prassas</name>
</author>
<author>
<name sortKey="Rice, Rr" uniqKey="Rice R">RR Rice</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Thaut, Mh" uniqKey="Thaut M">MH Thaut</name>
</author>
<author>
<name sortKey="Leins, Ak" uniqKey="Leins A">AK Leins</name>
</author>
<author>
<name sortKey="Rice, Rr" uniqKey="Rice R">RR Rice</name>
</author>
<author>
<name sortKey="Argstatter, H" uniqKey="Argstatter H">H Argstatter</name>
</author>
<author>
<name sortKey="Kenyon, Gp" uniqKey="Kenyon G">GP Kenyon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="De L Etoile, Sk" uniqKey="De L Etoile S">SK De l’ Etoile</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hurt, Cp" uniqKey="Hurt C">CP Hurt</name>
</author>
<author>
<name sortKey="Rice, Rr" uniqKey="Rice R">RR Rice</name>
</author>
<author>
<name sortKey="Mcintosh, Gc" uniqKey="Mcintosh G">GC McIntosh</name>
</author>
<author>
<name sortKey="Thaut, Mh" uniqKey="Thaut M">MH Thaut</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wittwer, Je" uniqKey="Wittwer J">JE Wittwer</name>
</author>
<author>
<name sortKey="Webster, Ke" uniqKey="Webster K">KE Webster</name>
</author>
<author>
<name sortKey="Hill, K" uniqKey="Hill K">K Hill</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ehrle, N" uniqKey="Ehrle N">N Ehrlé</name>
</author>
<author>
<name sortKey="Samson, S" uniqKey="Samson S">S Samson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Friberg, A" uniqKey="Friberg A">A Friberg</name>
</author>
<author>
<name sortKey="Sundberg, J" uniqKey="Sundberg J">J Sundberg</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Grondin, S" uniqKey="Grondin S">S Grondin</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Getty, Dj" uniqKey="Getty D">DJ Getty</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jones, Mr" uniqKey="Jones M">MR Jones</name>
</author>
<author>
<name sortKey="Yee, W" uniqKey="Yee W">W Yee</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schulze, H H" uniqKey="Schulze H">H-H Schulze</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Drake, C" uniqKey="Drake C">C Drake</name>
</author>
<author>
<name sortKey="Botte, Mc" uniqKey="Botte M">MC Botte</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schulze, Hh" uniqKey="Schulze H">HH Schulze</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mcauley, Jd" uniqKey="Mcauley J">JD McAuley</name>
</author>
<author>
<name sortKey="Miller, Ns" uniqKey="Miller N">NS Miller</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Miller, Ns" uniqKey="Miller N">NS Miller</name>
</author>
<author>
<name sortKey="Mcauley, Jd" uniqKey="Mcauley J">JD McAuley</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Grondin, S" uniqKey="Grondin S">S Grondin</name>
</author>
<author>
<name sortKey="Laforest, M" uniqKey="Laforest M">M Laforest</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sorkin, Rd" uniqKey="Sorkin R">RD Sorkin</name>
</author>
<author>
<name sortKey="Boggs, Gj" uniqKey="Boggs G">GJ Boggs</name>
</author>
<author>
<name sortKey="Brady, Sl" uniqKey="Brady S">SL Brady</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Thaut, Mh" uniqKey="Thaut M">MH Thaut</name>
</author>
<author>
<name sortKey="Tian, B" uniqKey="Tian B">B Tian</name>
</author>
<author>
<name sortKey="Azimi Sadjadi, Mr" uniqKey="Azimi Sadjadi M">MR Azimi-Sadjadi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cope, Te" uniqKey="Cope T">TE Cope</name>
</author>
<author>
<name sortKey="Grube, M" uniqKey="Grube M">M Grube</name>
</author>
<author>
<name sortKey="Griffiths, Td" uniqKey="Griffiths T">TD Griffiths</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pouliot, M" uniqKey="Pouliot M">M Pouliot</name>
</author>
<author>
<name sortKey="Grondin, S" uniqKey="Grondin S">S Grondin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schulze, H H" uniqKey="Schulze H">H-H Schulze</name>
</author>
<author>
<name sortKey="Cordes, A" uniqKey="Cordes A">A Cordes</name>
</author>
<author>
<name sortKey="Vorberg, D" uniqKey="Vorberg D">D Vorberg</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bigand, E" uniqKey="Bigand E">E Bigand</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Casey, Ma" uniqKey="Casey M">MA Casey</name>
</author>
<author>
<name sortKey="Veltkamp, R" uniqKey="Veltkamp R">R Veltkamp</name>
</author>
<author>
<name sortKey="Goto, M" uniqKey="Goto M">M Goto</name>
</author>
<author>
<name sortKey="Leman, M" uniqKey="Leman M">M Leman</name>
</author>
<author>
<name sortKey="Rhodes, C" uniqKey="Rhodes C">C Rhodes</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gouyon, F" uniqKey="Gouyon F">F Gouyon</name>
</author>
<author>
<name sortKey="Klapuri, A" uniqKey="Klapuri A">A Klapuri</name>
</author>
<author>
<name sortKey="Dixon, S" uniqKey="Dixon S">S Dixon</name>
</author>
<author>
<name sortKey="Alonso, M" uniqKey="Alonso M">M Alonso</name>
</author>
<author>
<name sortKey="Tzanetakis, G" uniqKey="Tzanetakis G">G Tzanetakis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Klapuri, Ap" uniqKey="Klapuri A">AP Klapuri</name>
</author>
<author>
<name sortKey="Eronen, Aj" uniqKey="Eronen A">AJ Eronen</name>
</author>
<author>
<name sortKey="Astola, Jt" uniqKey="Astola J">JT Astola</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mckinney, Mf" uniqKey="Mckinney M">MF McKinney</name>
</author>
<author>
<name sortKey="Moelants, D" uniqKey="Moelants D">D Moelants</name>
</author>
<author>
<name sortKey="Davies, Mep" uniqKey="Davies M">MEP Davies</name>
</author>
<author>
<name sortKey="Klapuri, A" uniqKey="Klapuri A">A Klapuri</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bertin Mahieux, T" uniqKey="Bertin Mahieux T">T Bertin-Mahieux</name>
</author>
<author>
<name sortKey="Ellis, Dp" uniqKey="Ellis D">DP Ellis</name>
</author>
<author>
<name sortKey="Whitman, B" uniqKey="Whitman B">B Whitman</name>
</author>
<author>
<name sortKey="Lamere, P" uniqKey="Lamere P">P Lamere</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kaminskas, M" uniqKey="Kaminskas M">M Kaminskas</name>
</author>
<author>
<name sortKey="Ricci, F" uniqKey="Ricci F">F Ricci</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Botev, Zi" uniqKey="Botev Z">ZI Botev</name>
</author>
<author>
<name sortKey="Grotowski, Jf" uniqKey="Grotowski J">JF Grotowski</name>
</author>
<author>
<name sortKey="Kroese, Dp" uniqKey="Kroese D">DP Kroese</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ellis, Dp" uniqKey="Ellis D">DP Ellis</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wang, A" uniqKey="Wang A">A Wang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jang, Js" uniqKey="Jang J">JS Jang</name>
</author>
<author>
<name sortKey="Lee, Hr" uniqKey="Lee H">HR Lee</name>
</author>
<author>
<name sortKey="Yeh, Ch" uniqKey="Yeh C">CH Yeh</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tomic, St" uniqKey="Tomic S">ST Tomic</name>
</author>
<author>
<name sortKey="Janata, P" uniqKey="Janata P">P Janata</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mckinney, Mf" uniqKey="Mckinney M">MF McKinney</name>
</author>
<author>
<name sortKey="Moelants, D" uniqKey="Moelants D">D Moelants</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Grondin, S" uniqKey="Grondin S">S Grondin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Patel, Ad" uniqKey="Patel A">AD Patel</name>
</author>
<author>
<name sortKey="Iversen, Jr" uniqKey="Iversen J">JR Iversen</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">PLoS One</journal-id>
<journal-id journal-id-type="iso-abbrev">PLoS ONE</journal-id>
<journal-id journal-id-type="publisher-id">plos</journal-id>
<journal-id journal-id-type="pmc">plosone</journal-id>
<journal-title-group>
<journal-title>PLoS ONE</journal-title>
</journal-title-group>
<issn pub-type="epub">1932-6203</issn>
<publisher>
<publisher-name>Public Library of Science</publisher-name>
<publisher-loc>San Francisco, USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">25469636</article-id>
<article-id pub-id-type="pmc">4254286</article-id>
<article-id pub-id-type="publisher-id">PONE-D-14-32051</article-id>
<article-id pub-id-type="doi">10.1371/journal.pone.0110452</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Research Article</subject>
</subj-group>
<subj-group subj-group-type="Discipline-v2">
<subject>Biology and Life Sciences</subject>
<subj-group>
<subject>Neuroscience</subject>
<subj-group>
<subject>Cognitive Science</subject>
<subj-group>
<subject>Cognitive Psychology</subject>
<subj-group>
<subject>Music Cognition</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group>
<subject>Psychology</subject>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v2">
<subject>Computer and Information Sciences</subject>
<subj-group>
<subject>Data Visualization</subject>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v2">
<subject>Medicine and Health Sciences</subject>
<subj-group>
<subject>Complementary and Alternative Medicine</subject>
<subj-group>
<subject>Music Therapy</subject>
</subj-group>
</subj-group>
<subj-group>
<subject>Neurology</subject>
<subj-group>
<subject>Neurodegenerative Diseases</subject>
<subj-group>
<subject>Movement Disorders</subject>
<subj-group>
<subject>Parkinson Disease</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v2">
<subject>Research and Analysis Methods</subject>
<subj-group>
<subject>Database and Informatics Methods</subject>
<subj-group>
<subject>Information Retrieval</subject>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v2">
<subject>Social Sciences</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Quantifying Auditory Temporal Stability in a Large Database of Recorded Music</article-title>
<alt-title alt-title-type="running-head">Quantifying Auditory Temporal Stability</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Ellis</surname>
<given-names>Robert J.</given-names>
</name>
<xref ref-type="aff" rid="aff1"></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Duan</surname>
<given-names>Zhiyan</given-names>
</name>
<xref ref-type="aff" rid="aff1"></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Wang</surname>
<given-names>Ye</given-names>
</name>
<xref ref-type="aff" rid="aff1"></xref>
<xref ref-type="corresp" rid="cor1">
<sup>*</sup>
</xref>
</contrib>
</contrib-group>
<aff id="aff1">
<addr-line>School of Computing, National University of Singapore, Singapore, Singapore</addr-line>
</aff>
<contrib-group>
<contrib contrib-type="editor">
<name>
<surname>Robin</surname>
<given-names>Donald A.</given-names>
</name>
<role>Editor</role>
<xref ref-type="aff" rid="edit1"></xref>
</contrib>
</contrib-group>
<aff id="edit1">
<addr-line>University of Texas Health Science Center at San Antonio, Research Imaging Institute, United States of America</addr-line>
</aff>
<author-notes>
<corresp id="cor1">* E-mail:
<email>wangye@comp.nus.edu.sg</email>
</corresp>
<fn fn-type="conflict">
<p>
<bold>Competing Interests: </bold>
The authors have declared that no competing interests exist.</p>
</fn>
<fn fn-type="con">
<p>Conceived and designed the experiments: RJE YW. Analyzed the data: RJE ZD. Wrote the paper: RJE ZD YW.</p>
</fn>
</author-notes>
<pub-date pub-type="collection">
<year>2014</year>
</pub-date>
<pub-date pub-type="epub">
<day>3</day>
<month>12</month>
<year>2014</year>
</pub-date>
<volume>9</volume>
<issue>12</issue>
<elocation-id>e110452</elocation-id>
<history>
<date date-type="received">
<day>17</day>
<month>7</month>
<year>2014</year>
</date>
<date date-type="accepted">
<day>10</day>
<month>9</month>
<year>2014</year>
</date>
</history>
<permissions>
<copyright-year>2014</copyright-year>
<copyright-holder>Ellis et al</copyright-holder>
<license>
<license-p>This is an open-access article distributed under the terms of the
<ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution License</ext-link>
, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.</license-p>
</license>
</permissions>
<abstract>
<p>“Moving to the beat” is both one of the most basic and one of the most profound means by which humans (and a few other species) interact with music. Computer algorithms that detect the precise temporal location of beats (i.e., pulses of musical “energy”) in recorded music have important practical applications, such as the creation of playlists with a particular tempo for rehabilitation (e.g., rhythmic gait training), exercise (e.g., jogging), or entertainment (e.g., continuous dance mixes). Although several such algorithms return simple point estimates of an audio file’s temporal structure (e.g., “average tempo”, “time signature”), none has sought to quantify the temporal
<italic>stability</italic>
of a series of detected beats. Such a method, a “Balanced Evaluation of Auditory Temporal Stability” (BEATS), is proposed here, and is illustrated using the Million Song Dataset (a collection of audio features and music metadata for nearly one million audio files). A publicly accessible web interface is also presented, which combines the thresholdable statistics of BEATS with queryable metadata terms, fostering potential avenues of research and facilitating the creation of highly personalized music playlists for clinical or recreational applications.</p>
</abstract>
<funding-group>
<funding-statement>This research was supported by the National Research Foundation (NRF;
<ext-link ext-link-type="uri" xlink:href="http://www.nrf.gov.sg/">http://www.nrf.gov.sg/</ext-link>
) and managed through the multi-agency Interactive & Digital Media Programme Office (IDMPO;
<ext-link ext-link-type="uri" xlink:href="http://www.idm.sg/">http://www.idm.sg/</ext-link>
) hosted by the Media Development Authority of Singapore (MDA;
<ext-link ext-link-type="uri" xlink:href="http://www.mda.gov.sg/">http://www.mda.gov.sg/</ext-link>
) under the Centre of Social Media Innovations for Communities (COSMIC;
<ext-link ext-link-type="uri" xlink:href="http://cosmic.nus.edu.sg/">http://cosmic.nus.edu.sg/</ext-link>
). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.</funding-statement>
</funding-group>
<counts>
<page-count count="24"></page-count>
</counts>
<custom-meta-group>
<custom-meta id="data-availability">
<meta-name>Data Availability</meta-name>
<meta-value>The authors confirm that all data underlying the findings are fully available without restriction. All raw data were obtained from the Million Song Dataset (
<ext-link ext-link-type="uri" xlink:href="http://labrosa.ee.columbia.edu/millionsong/">http://labrosa.ee.columbia.edu/millionsong/</ext-link>
). All code described in the present paper is available at (
<ext-link ext-link-type="uri" xlink:href="http://code.smcnus.org/">http://code.smcnus.org/</ext-link>
).</meta-value>
</custom-meta>
</custom-meta-group>
</article-meta>
<notes>
<title>Data Availability</title>
<p>The authors confirm that all data underlying the findings are fully available without restriction. All raw data were obtained from the Million Song Dataset (
<ext-link ext-link-type="uri" xlink:href="http://labrosa.ee.columbia.edu/millionsong/">http://labrosa.ee.columbia.edu/millionsong/</ext-link>
). All code described in the present paper is available at (
<ext-link ext-link-type="uri" xlink:href="http://code.smcnus.org/">http://code.smcnus.org/</ext-link>
).</p>
</notes>
</front>
<body>
<sec id="s1">
<title>Introduction</title>
<p>With the proliferation of back-end warehouses of music metadata (e.g., AllMusic, Gracenote, Last.fm, MusicBrainz, The Echo Nest
<xref rid="pone.0110452-Wikipedia1" ref-type="bibr">[1]</xref>
), front-end online music stores (e.g., Amazon MP3, Google Play Music, iTunes, 7digital, Xbox Music
<xref rid="pone.0110452-Wikipedia2" ref-type="bibr">[2]</xref>
), and streaming music services (e.g., Deezer, MySpace Music, Napster, Rdio, Rhapsody, Spotify
<xref rid="pone.0110452-Wikipedia3" ref-type="bibr">[3]</xref>
) come heretofore unparalleled opportunities to change the way music can be personalized for and delivered to target users with varying needs.</p>
<p>One need, shared by both rehabilitation professionals and exercise enthusiasts, is the ability to create music playlists which facilitate the synchronization of complex motor actions (e.g., walking) with an auditory beat. Auditory-motor synchronization has been deemed a human cultural universal
<xref rid="pone.0110452-Nettl1" ref-type="bibr">[4]</xref>
and a “diagnostic trait of our species”
<xref rid="pone.0110452-Merker1" ref-type="bibr">[5]</xref>
. Even infants show perceptual sensitivity to
<xref rid="pone.0110452-Winkler1" ref-type="bibr">[6]</xref>
and coordinated motor engagement with
<xref rid="pone.0110452-Zentner1" ref-type="bibr">[7]</xref>
musical rhythms. The phenomenon of auditory entrainment (the dynamic altering of an “internal” periodic process or action generated by an organism in the presence of a periodic acoustic stimulus) remains an active topic for the field of music cognition
<xref rid="pone.0110452-Ellis1" ref-type="bibr">[8]</xref>
<xref rid="pone.0110452-Repp1" ref-type="bibr">[14]</xref>
.</p>
<p>Auditory-motor synchronization has received particular interest in the context of preventive and rehabilitative physical exercise, with a number of advantages for participants (for recent summaries, see
<xref rid="pone.0110452-Karageorghis1" ref-type="bibr">[15]</xref>
<xref rid="pone.0110452-Karageorghis3" ref-type="bibr">[17]</xref>
): cognitively, by focusing attention (cf.
<xref rid="pone.0110452-Barnes1" ref-type="bibr">[18]</xref>
<xref rid="pone.0110452-Large2" ref-type="bibr">[20]</xref>
); motivationally, by increasing arousal (cf.
<xref rid="pone.0110452-Salimpoor1" ref-type="bibr">[21]</xref>
,
<xref rid="pone.0110452-Thompson1" ref-type="bibr">[22]</xref>
), endurance during a session (e.g.,
<xref rid="pone.0110452-Copeland1" ref-type="bibr">[23]</xref>
,
<xref rid="pone.0110452-Brownley1" ref-type="bibr">[24]</xref>
), and adherence across sessions (e.g.,
<xref rid="pone.0110452-Johnson1" ref-type="bibr">[25]</xref>
,
<xref rid="pone.0110452-SnedenRiley1" ref-type="bibr">[26]</xref>
); and socially, by enabling multiple individuals to participate and interact in a coordinated manner, as in partnered or group dancing (e.g.,
<xref rid="pone.0110452-GuzmnGarca1" ref-type="bibr">[27]</xref>
,
<xref rid="pone.0110452-Kattenstroth1" ref-type="bibr">[28]</xref>
).</p>
<p>A particularly successful application of auditory-motor synchronization paradigms has been for patients with Parkinson’s disease (PD), where it is referred to as “Rhythmic Auditory Stimulation” or “Rhythmic Auditory Cueing” (RAC). Although the facilitative effects of an external auditory cue on parkinsonian gait had been noted anecdotally since the 1940s (e.g.,
<xref rid="pone.0110452-Martin1" ref-type="bibr">[30]</xref>
,
<xref rid="pone.0110452-VonWilzenben1" ref-type="bibr">[31]</xref>
), experimental work in the 1990s (e.g.,
<xref rid="pone.0110452-Morris1" ref-type="bibr">[32]</xref>
,
<xref rid="pone.0110452-Thaut1" ref-type="bibr">[33]</xref>
) and subsequent multi-week clinical trials (e.g.,
<xref rid="pone.0110452-DeBruin1" ref-type="bibr">[34]</xref>
,
<xref rid="pone.0110452-Pacchetti1" ref-type="bibr">[35]</xref>
), systematic reviews
<xref rid="pone.0110452-Lim1" ref-type="bibr">[36]</xref>
,
<xref rid="pone.0110452-Rubinstein1" ref-type="bibr">[37]</xref>
, meta-analyses
<xref rid="pone.0110452-DeDreu1" ref-type="bibr">[38]</xref>
,
<xref rid="pone.0110452-Spaulding1" ref-type="bibr">[39]</xref>
, and evidence-based “best practice” treatment recommendations
<xref rid="pone.0110452-Keus1" ref-type="bibr">[40]</xref>
have all pointed towards RAC as a reliable and effective means of improving several features of gait: increasing cadence, stride length, and velocity (as reviewed in
<xref rid="pone.0110452-DeDreu1" ref-type="bibr">[38]</xref>
,
<xref rid="pone.0110452-Spaulding1" ref-type="bibr">[39]</xref>
); and decreasing gait
<italic>variability</italic>
(i.e., moment-to-moment fluctuations in step timing or step length; for comprehensive reviews, see
<xref rid="pone.0110452-Hausdorff1" ref-type="bibr">[41]</xref>
<xref rid="pone.0110452-Hausdorff3" ref-type="bibr">[43]</xref>
). A reduction in gait variability is of particular importance, as it is linked both retrospectively
<xref rid="pone.0110452-Schaafsma1" ref-type="bibr">[44]</xref>
and prospectively
<xref rid="pone.0110452-Hausdorff4" ref-type="bibr">[45]</xref>
with a reduced likelihood of falling, a costly event both financially (e.g.,
<xref rid="pone.0110452-Davis1" ref-type="bibr">[46]</xref>
) and psychologically (e.g.,
<xref rid="pone.0110452-Bloem1" ref-type="bibr">[47]</xref>
). Although less well-explored, RAC-mediated improvements in gait have also been noted for other neurological conditions, including Huntington’s disease
<xref rid="pone.0110452-Delval1" ref-type="bibr">[48]</xref>
,
<xref rid="pone.0110452-Thaut2" ref-type="bibr">[49]</xref>
, stroke
<xref rid="pone.0110452-Thaut3" ref-type="bibr">[50]</xref>
,
<xref rid="pone.0110452-Thaut4" ref-type="bibr">[51]</xref>
, spinal cord injury
<xref rid="pone.0110452-DelEtoile1" ref-type="bibr">[52]</xref>
, and traumatic brain injury
<xref rid="pone.0110452-Hurt1" ref-type="bibr">[53]</xref>
. (For a systematic review of this evidence, see
<xref rid="pone.0110452-Wittwer1" ref-type="bibr">[54]</xref>
.)</p>
<sec id="s1a">
<title>1. Physical Isochrony versus Perceptual Stability</title>
<p>A basic requirement for the music used in auditory-motor rehabilitation paradigms is that it possess a stable
<italic>tempo</italic>
(i.e., the rate at which beats or pulses are perceived to occur), thereby facilitating motor synchronization to the beat. This requirement is typically satisfied through the use of a digital metronome, either in isolation or superimposed on top of computer-generated music (e.g.,
<xref rid="pone.0110452-Thaut4" ref-type="bibr">[51]</xref>
), ensuring a precisely isochronous inter-beat interval (IBeI). However, a slightly more relaxed requirement could be proposed: that the sequence of IBeIs in the music stimulus need not be physically
<italic>isochronous</italic>
, but rather, be perceptually
<italic>stable</italic>
.</p>
<p>Systematic investigations of just-noticeable differences (JNDs) or other perceptual discrimination thresholds of anisochrony in auditory temporal sequences date back several decades (for reviews, see
<xref rid="pone.0110452-McAuley1" ref-type="bibr">[13]</xref>
,
<xref rid="pone.0110452-Repp1" ref-type="bibr">[14]</xref>
,
<xref rid="pone.0110452-Ehrl1" ref-type="bibr">[55]</xref>
<xref rid="pone.0110452-Grondin1" ref-type="bibr">[57]</xref>
). A wide range of stimuli has been explored:</p>
<p>(1) isolated time intervals (e.g.,
<xref rid="pone.0110452-Woodrow1" ref-type="bibr">[58]</xref>
,
<xref rid="pone.0110452-Getty1" ref-type="bibr">[59]</xref>
); (2) a single temporal perturbation within an isochronous (e.g.,
<xref rid="pone.0110452-Ehrl1" ref-type="bibr">[55]</xref>
,
<xref rid="pone.0110452-Friberg1" ref-type="bibr">[56]</xref>
,
<xref rid="pone.0110452-Jones3" ref-type="bibr">[60]</xref>
,
<xref rid="pone.0110452-Schulze1" ref-type="bibr">[61]</xref>
) or anisochronous (e.g.,
<xref rid="pone.0110452-Large2" ref-type="bibr">[20]</xref>
,
<xref rid="pone.0110452-Drake1" ref-type="bibr">[62]</xref>
) context; (3) a single tempo change between a pair of monotonic isochronous sequences (e.g.,
<xref rid="pone.0110452-Drake1" ref-type="bibr">[62]</xref>
<xref rid="pone.0110452-Miller1" ref-type="bibr">[65]</xref>
) or excerpts of computer-performed, quantized music
<xref rid="pone.0110452-Grondin2" ref-type="bibr">[66]</xref>
; (4) a pair of sequences, one isochronous and the other with Gaussian temporal “jitter”
<xref rid="pone.0110452-Sorkin1" ref-type="bibr">[67]</xref>
; (5) continuously cosine-modulated temporal intervals
<xref rid="pone.0110452-Thaut5" ref-type="bibr">[68]</xref>
; and (6) continuously accelerating or decelerating sequences (e.g.,
<xref rid="pone.0110452-Cope1" ref-type="bibr">[69]</xref>
<xref rid="pone.0110452-Schulze3" ref-type="bibr">[71]</xref>
). In general, JNDs for anisochrony decrease as the number of repetitions of a fixed temporal interval increases, and are higher overall within sequences in which temporal instability is present.</p>
<p>Although these conditions are well-controlled experimentally, they do not necessarily generalize to
<italic>performed</italic>
music. That is, absent a digitally produced rhythm track, IBeIs in performed music would be expected to exhibit some degree of “natural” variability in tempo (or, perhaps less pejoratively, “flexibility” in tempo). However, an important question that follows from this assumption (namely, “How much physical variability in an IBeI sequence results in the perceptual instability of tempo?”) has not been clearly asked, or answered. By contrast, studies seeking to quantify listeners’ perceptions of tonal stability (e.g.,
<xref rid="pone.0110452-Krumhansl1" ref-type="bibr">[72]</xref>
,
<xref rid="pone.0110452-Krumhansl2" ref-type="bibr">[73]</xref>
), or overall “musical stability” (e.g.,
<xref rid="pone.0110452-Bigand1" ref-type="bibr">[74]</xref>
) are more frequent.</p>
</sec>
<sec id="s1b">
<title>2. Beat Tracking and Tempo Extraction Algorithms</title>
<p>Accurately estimating the tempo of recorded music is an important topic within the field of music information retrieval (e.g.,
<xref rid="pone.0110452-Casey1" ref-type="bibr">[75]</xref>
<xref rid="pone.0110452-Ra1" ref-type="bibr">[77]</xref>
), and numerous algorithms have been developed to accomplish this (for summaries, see
<xref rid="pone.0110452-Gouyon1" ref-type="bibr">[78]</xref>
<xref rid="pone.0110452-Zapata1" ref-type="bibr">[81]</xref>
). Two broad categories of algorithms can be defined.
<italic>Beat tracking</italic>
algorithms return a time series of detected IBeIs along with a point estimate of “average” tempo in beats per minute (bpm).
<italic>Tempo extraction</italic>
algorithms return only the latter.</p>
<p>An important goal for beat tracking algorithms is to identify the temporal locations of each beat accurately (i.e., with respect to listeners’ “ground truth” perceptions) in the face of changes, drifts, fluctuations, or expressive variations in tempo within an audio file. The ability of a beat tracking algorithm to accurately
<italic>identify</italic>
the precise location of each beat in the face of a fluctuating temporal surface, however, is independent from its ability to meaningfully
<italic>quantify</italic>
how much temporal instability is actually present in the series of detected beats. Similarly, the ability of a tempo extraction algorithm to provide a point estimate (e.g., “tempo = 90 bpm”) that agrees with human perception (e.g., the average inter-tap interval when listeners were instructed to tap to the beat) reveals nothing about whether that estimate is stable across the entire audio file; and if not, over what time indices of the file that estimate
<italic>is</italic>
stable. (The accuracy of any point estimate is of course dependent upon the manner in which it was computed, as will be illustrated in Section 4 of the
<xref ref-type="sec" rid="s2">Methods</xref>
.)</p>
<p>To our knowledge, no current software algorithm, front-end interface, or back-end metadata service provider has offered any statistic explicitly designed to quantify the amount of
<italic>beat-to-beat temporal instability</italic>
within an IBeI series.</p>
<p>To address this issue, we expand upon our previous conference paper
<xref rid="pone.0110452-Cai1" ref-type="bibr">[82]</xref>
and present a novel analysis tool: a “Balanced Evaluation of Auditory Temporal Stability” (BEATS). BEATS itself does not perform beat tracking, but instead takes beat and barline (i.e., downbeat) onsets estimated by an independent beat tracking algorithm as input. For its initial release, BEATS has been optimized to the data structure of the “Million Song Dataset”
<xref rid="pone.0110452-BertinMahieux1" ref-type="bibr">[83]</xref>
(MSD;
<ext-link ext-link-type="uri" xlink:href="http://labrosa.ee.columbia.edu/millionsong/">http://labrosa.ee.columbia.edu/millionsong/</ext-link>
), a publicly available collection of computed acoustic features (e.g., individual beat and barline onsets; average tempo; estimated time signature) and music metadata (e.g., artist, album, and genre information) associated with nearly one million audio files processed using the proprietary “Analyze” algorithm
<xref rid="pone.0110452-Jehan1" ref-type="bibr">[84]</xref>
developed by The Echo Nest (
<ext-link ext-link-type="uri" xlink:href="http://www.echonest.com">www.echonest.com</ext-link>
). Compatibility with this data structure brings immediate scalability, as the full Echo Nest library contains over 35 million analyzed audio files.</p>
<p>For each analyzed audio file, BEATS computes nine
<italic>Summary Statistics</italic>
that quantify some characteristic of the inter-beat or inter-bar interval data. These statistics can in turn serve as input to search engines for which tempo is a key query feature (e.g.,
<xref rid="pone.0110452-Casey1" ref-type="bibr">[75]</xref>
,
<xref rid="pone.0110452-Kaminskas1" ref-type="bibr">[85]</xref>
<xref rid="pone.0110452-Yi1" ref-type="bibr">[87]</xref>
).</p>
<p>By providing a more comprehensive quantitative analysis of both tempo
<italic>and</italic>
tempo stability, and incorporating those statistics as filterable features within an online resource (“iBEATS”, described in Section 3 of the Results), BEATS becomes a further step towards a solution that provides users with access to music that has been tailored to their (or their patients’) recreation or rehabilitation needs.</p>
</sec>
</sec>
<sec sec-type="methods" id="s2">
<title>Methods</title>
<sec id="s2a">
<title>1. Platform</title>
<p>BEATS is implemented in Matlab (version ≥7.8), supplemented by a few publicly available functions associated with the Million Song Dataset
<xref rid="pone.0110452-Ellis2" ref-type="bibr">[88]</xref>
and Matlab Central (
<ext-link ext-link-type="uri" xlink:href="http://www.mathworks.com/matlabcentral">http://www.mathworks.com/matlabcentral</ext-link>
).</p>
</sec>
<sec id="s2b">
<title>2. Raw Data</title>
<p>For each metadata file, BEATS pulls four Echo Nest fields:
<monospace>beats_start</monospace>
and
<monospace>bars_start</monospace>
(the estimated onsets of successive beats and barlines, respectively); and
<monospace>tempo</monospace>
and
<monospace>time_signature</monospace>
(point estimates directly provided by Echo Nest). Next,
<monospace>beats_start</monospace>
and
<monospace>bars_start</monospace>
are transformed into an inter-beat interval and an inter-bar interval series, respectively, by taking the first-order difference of each timestamp vector.</p>
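<p>As a concrete illustration of this step, the sketch below pulls the four fields from an MSD per-track HDF5 file and differences the onset vectors. It assumes the MSD’s companion helper module (hdf5_getters.py, distributed on the MSD website); the track filename is a placeholder.</p>
<preformat>
# Minimal Python sketch of the Raw Data step (assumes hdf5_getters.py
# from the Million Song Dataset website is importable; the track file
# name below is a placeholder).
import numpy as np
import hdf5_getters as G

h5 = G.open_h5_file_read('TRABCDE12345678901.h5')  # placeholder MSD track
try:
    beats_start = G.get_beats_start(h5)      # beat onsets (seconds)
    bars_start  = G.get_bars_start(h5)       # barline onsets (seconds)
    en_tempo    = G.get_tempo(h5)            # Echo Nest tempo (bpm)
    time_sig    = G.get_time_signature(h5)   # Echo Nest beats per bar
finally:
    h5.close()

# First-order differences turn onset timestamps into interval series.
ibei = np.diff(beats_start)   # inter-beat interval series
ibai = np.diff(bars_start)    # inter-bar interval series
</preformat>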
</sec>
<sec id="s2c">
<title>3. Initialization Thresholds</title>
<p>BEATS requires the user to specify three Initialization Thresholds (a minimal encoding is sketched after the list):</p>
<list list-type="order">
<list-item>
<p>“Local Stability Threshold”, θ
<sub>Local</sub>
: a percentage value (default = 5.0%) used to define the upper bound of what is deemed temporally stable at the level of individual and successive IBeIs (detailed below).</p>
</list-item>
<list-item>
<p>“Run Duration Threshold”, θ
<sub>Run</sub>
: the minimum duration (default = 10 s) of a set of adjacent IBeIs (i.e., a “Run”) that all fall below θ
<sub>Local</sub>
.</p>
</list-item>
<list-item>
<p>“Gap Duration Threshold”, θ
<sub>Gap</sub>
: the maximum duration (default = 2.5 s) between the last element of Run
<italic>
<sub>j</sub>
</italic>
and the first element of Run
<italic>
<sub>j</sub>
</italic>
<sub>+1</sub>
.</p>
</list-item>
</list>
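<p>A minimal encoding of these three thresholds and their defaults (the names are illustrative, not taken from the BEATS source):</p>
<preformat>
# Illustrative container for the three Initialization Thresholds,
# using the default values stated above.
from dataclasses import dataclass

@dataclass
class BeatsThresholds:
    theta_local: float = 5.0   # Local Stability Threshold (%)
    theta_run: float = 10.0    # Run Duration Threshold (s)
    theta_gap: float = 2.5     # Gap Duration Threshold (s)
</preformat>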
</sec>
<sec id="s2d">
<title>4. Internal Calculations</title>
<p>The first statistic calculated by BEATS is an estimate of an IBeI series’ central tendency, or
<italic>location</italic>
, λ. Common measures of λ include the mean, median, and mode. However, obtaining an optimal value for λ can be more complicated than simply taking the mean, median, or mode of a series. Consider the hypothetical 80-element IBeI series
<bold>S</bold>
shown in
<xref ref-type="fig" rid="pone-0110452-g001">
<bold>Figure 1A</bold>
</xref>
, which exhibits two tempo changes (at the 21st and 41st elements). Visual inspection of the Matlab-derived mean, median, and mode reveals that all are clearly inadequate measures of the “true” central tendency of
<bold>S</bold>
(i.e., ≈ 1.0).</p>
<fig id="pone-0110452-g001" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0110452.g001</object-id>
<label>Figure 1</label>
<caption>
<title>Illustrating different central tendency statistics.</title>
<p>(A) A hypothetical IBeI series comprised of three distinct tempo sections: 20 IBeIs with a mean of 0.5 s (i.e., 120 bpm), followed by 20 IBeIs with a mean of 0.75 s (80 bpm), followed by 40 IBeIs with a mean of 1.00 s (60 bpm). The mean, median, and mode of the data fail to provide an adequate measure of central tendency. (B) Kernel density estimation (KDE) of the distribution of IBeI values in
<xref ref-type="fig" rid="pone-0110452-g001">Figure 1A</xref>
, using various bandwidth values. The most accurate measure of central tendency was obtained using adaptive Gaussian KDE
<xref rid="pone.0110452-Botev1" ref-type="bibr">[90]</xref>
,
<xref rid="pone.0110452-Botev2" ref-type="bibr">[91]</xref>
.</p>
</caption>
<graphic xlink:href="pone.0110452.g001"></graphic>
</fig>
<p>One widely used method of obtaining a more accurate value for the central tendency of a dataset (specifically, the mode) has been the use of kernel density estimation (KDE) techniques, first proposed in the 1960s
<xref rid="pone.0110452-Parzen1" ref-type="bibr">[89]</xref>.
<xref ref-type="fig" rid="pone-0110452-g001">
<bold>Figure 1B</bold>
</xref>
plots the estimated probability density of the distribution of values in
<bold>S</bold>
, using various values for the kernel
<italic>bandwidth</italic>
(i.e., the smoothing parameter). The mode of
<bold>S</bold>
is defined simply: the
<italic>x</italic>
-axis value at which the highest probability density (
<italic>y</italic>
-axis) occurs. As can be appreciated from
<xref ref-type="fig" rid="pone-0110452-g001">Figure 1B</xref>
, the bandwidth plays a strong role in the resultant mode: too narrow, and the estimated mode will simply track the single most frequent raw value; too wide, and the density estimate will “smooth over” distinct features (in this case, time-varying features) within the data set, such as the presence of multiple modes.</p>
<p>To circumvent this problem, and thus provide a more “representative” value for λ, BEATS makes use of a recent implementation of adaptive (variable-bandwidth) Gaussian KDE
<xref rid="pone.0110452-Botev1" ref-type="bibr">[90]</xref>
,
<xref rid="pone.0110452-Botev2" ref-type="bibr">[91]</xref>
, which optimizes the bandwidth so as to return a valid density estimate even in the presence of multiple modes. Using this approach (shown as the blue density estimate in
<xref ref-type="fig" rid="pone-0110452-g001">Figure 1B</xref>
), λ is calculated as 1.0002: a far more representative value.</p>
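<p>The sketch below reproduces the logic of Figure 1 on a synthetic series. It substitutes scipy’s fixed-bandwidth gaussian_kde for the adaptive Botev estimator [90], [91] used by BEATS, purely to show the mode-via-KDE idea; the jitter level is an assumption.</p>
<preformat>
# Synthetic IBeI series in the spirit of Figure 1A: 20 intervals near
# 0.5 s, 20 near 0.75 s, 40 near 1.0 s (small Gaussian jitter assumed).
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
s = np.concatenate([rng.normal(0.50, 0.01, 20),
                    rng.normal(0.75, 0.01, 20),
                    rng.normal(1.00, 0.01, 40)])

print(np.mean(s), np.median(s))    # ~0.81 and ~0.87: both misleading

# Mode via KDE: evaluate the density on a grid and take the argmax.
grid    = np.linspace(s.min(), s.max(), 2048)
density = gaussian_kde(s)(grid)
lam     = grid[np.argmax(density)]
print(lam)                         # ~1.0: a representative lambda
</preformat>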
<p>Having determined λ, the longest “Stable Segment” within the IBeI series is then identified. The first step in this process is to identify the locations of “stable” IBeIs, where stability is operationalized in two ways: stability of each IBeI relative to λ, and stability between successive IBeIs. The first type of stability is quantified via a “percentage deviation from λ” (PDL) transformation:
<disp-formula id="pone.0110452.e001">
<graphic xlink:href="pone.0110452.e001.jpg" position="anchor" orientation="portrait"></graphic>
<label>(1)</label>
</disp-formula>
</p>
<p>The second type of stability is quantified via a “successive percentage change” (SPC) transformation between IBeIs
<italic>i</italic>
and
<italic>i</italic>
+1:
<disp-formula id="pone.0110452.e002">
<graphic xlink:href="pone.0110452.e002.jpg" position="anchor" orientation="portrait"></graphic>
<label>(2)</label>
</disp-formula>
</p>
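<p>Because Eqs. 1 and 2 are rendered as images, the following MATLAB sketch restates them as inferred from their prose definitions (a signed percentage deviation of each IBeI from λ, and a signed percentage change between successive IBeIs); the published formulas remain authoritative.</p>
<preformat>
% Sketch of Eqs. (1)-(2), inferred from their prose definitions.
lambda = 0.50;                           % central tendency via KDE (s)
S      = [0.50 0.51 0.49 0.50 0.52];     % hypothetical IBeI series (s)
S_PDL  = 100 * (S - lambda) / lambda;    % Eq. (1): deviation from lambda
S_SPC  = 100 * diff(S) ./ S(1:end-1);    % Eq. (2): successive change
</preformat>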
<p>(Both
<bold>S</bold>
<sub>PDL</sub>
and
<bold>S</bold>
<sub>SPC</sub>
are expressed as relative percentages so as to facilitate comparisons across IBeI sequences in different tempo ranges.) These two equations are used in sequence to identify the location of temporally stable IBeIs. First, an
<italic>initial</italic>
determination of stability is made for each IBeI:
<disp-formula id="pone.0110452.e003">
<graphic xlink:href="pone.0110452.e003.jpg" position="anchor" orientation="portrait"></graphic>
<label>(3)</label>
</disp-formula>
where “1” indicates a stable IBeI relative to λ. Next, for all pairs of elements {
<italic>i</italic>
,
<italic>i</italic>
+1} for which
<bold>S</bold>
<sub>Stable,
<italic>i</italic>
</sub>
has a value of {1, 1},
<bold>S</bold>
<sub>Stable,
<italic>i</italic>
+1</sub>
is then
<italic>revised</italic>
:</p>
<p>
<disp-formula id="pone.0110452.e004">
<graphic xlink:href="pone.0110452.e004.jpg" position="anchor" orientation="portrait"></graphic>
<label>(4)</label>
</disp-formula>
A “Run” (i.e., an unbroken string of 1 values) within
<bold>S</bold>
<sub>Stable</sub>
thus indicates temporal stability both relative to λ and between successive IBeIs; a “Gap” (i.e., a string of one or more 0 values) indicates temporal instability. The Stable Segment is defined as the longest consecutive sequence of adjacent Runs-plus-Gaps (e.g., {Run
<italic>
<sub>j</sub>
</italic>
, Gap
<italic>
<sub>j</sub>
</italic>
, Run
<italic>
<sub>j</sub>
</italic>
<sub>+1</sub>
}), where each Run has a duration ≥ θ
<sub>Run</sub>
and each Gap a duration ≤ θ
<sub>Gap</sub>
.</p>
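<p>Continuing the sketch above, the labeling and segmentation steps might be coded as follows. The revision rule of Eq. 4 is assumed here to clear the stability flag of the second member of a {1, 1} pair whenever the intervening SPC exceeds the Local Stability Threshold; the published formulas remain authoritative.</p>
<preformat>
% Sketch of the stability labeling (Eqs. 3-4) and Run extraction,
% reusing S_PDL and S_SPC from the previous sketch.
theta_Local = 5.0;                       % percent (illustrative)
stable = abs(S_PDL) <= theta_Local;      % Eq. (3): initial labeling
for i = 1:numel(S_SPC)                   % Eq. (4): revise {1,1} pairs
    if stable(i) && stable(i+1) && abs(S_SPC(i)) > theta_Local
        stable(i+1) = false;             % assumed revision rule
    end
end
d        = diff([0 stable 0]);           % run-length boundaries
runStart = find(d == 1);                 % first IBeI of each Run
runEnd   = find(d == -1) - 1;            % last IBeI of each Run
% Run/Gap durations (from the beat timestamps) would then be checked
% against theta_Run and theta_Gap to select the Stable Segment.
</preformat>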
</sec>
<sec id="s2e">
<title>E. Summary Statistics</title>
<p>For each file, BEATS computes nine Summary Statistics for the Stable Segment (referenced throughout the text as “A” through “I”).</p>
<list list-type="alpha-upper">
<list-item>
<p>“Stable Duration”: the duration (in seconds) between the first and last timestamps of the Stable Segment.</p>
</list-item>
<list-item>
<p>“Stable Percentage”: the Stable Duration as a percentage of the duration between the first and last timestamps of the IBeI series.</p>
</list-item>
<list-item>
<p>“Run Percentage”: the percentage of the Stable Duration comprised of Runs. For example, if a Stable Segment was comprised of two Runs (each 30 s in duration) separated by a single Gap (2 s in duration), then the Run Percentage is 96.8%.</p>
</list-item>
<list-item>
<p>“Estimated Tempo”: the central tendency (λ) of the entire IBeI series, converted to beats per minute (e.g., a λ of 1.0001 s yields an Estimated Tempo of 59.994 bpm).</p>
</list-item>
<list-item>
<p>“Estimated Tempo Mismatch” (ETM): the signed percentage error of the tempo estimated by BEATS (
<inline-formula>
<inline-graphic xlink:href="pone.0110452.e005.jpg"></inline-graphic>
</inline-formula>
, defined above) relative to the tempo estimate calculated by Echo Nest (
<inline-formula>
<inline-graphic xlink:href="pone.0110452.e006.jpg"></inline-graphic>
</inline-formula>
; i.e., the
<monospace>tempo</monospace>
statistic queried from the MSD):
<disp-formula id="pone.0110452.e007">
<graphic xlink:href="pone.0110452.e007.jpg" position="anchor" orientation="portrait"></graphic>
<label>(5)</label>
</disp-formula>
</p>
</list-item>
<list-item>
<p>“Estimated Meter”: a more precise operationalization of meter than the usual integer value (e.g., “4 beats-per-bar”). Specifically, for a Stable Segment with a bar timestamp series {
<italic>r
<sub>i</sub>
</italic>
,
<italic>r
<sub>i+1</sub>
</italic>
,
<italic>…</italic>
} and beat timestamp series {
<italic>b
<sub>j</sub>
</italic>
,
<italic>b
<sub>j+1</sub>
</italic>
,
<italic>…</italic>
}, let
<italic>n
<sub>i</sub>
</italic>
be the number of beat timestamps for which
<italic>r
<sub>i</sub>
≤ b
<sub>j</sub>
< r
<sub>i+1</sub>
</italic>
. Estimated Meter is then taken as the mean of all
<italic>n
<sub>i</sub>
</italic>
. Only in the case when all
<italic>n
<sub>i</sub>
</italic>
have the same value will a true integer be returned (e.g.,
<inline-formula>
<inline-graphic xlink:href="pone.0110452.e008.jpg"></inline-graphic>
</inline-formula>
), providing an easy way to identify audio files that have an unstable meter within the Stable Segment. (A code sketch of Summary Statistics D through F follows this list.)</p>
</list-item>
<list-item>
<p>“Maximum of Percentage Deviations from λ” (PDL
<sub>max</sub>
): The absolute value of the largest PDL (
<xref ref-type="disp-formula" rid="pone.0110452.e001">Eq. 1</xref>
) across all Runs.</p>
</list-item>
<list-item>
<p>“Maximum of Successive Percentage Changes” (SPC
<sub>max</sub>
): The absolute value of the largest SPC (
<xref ref-type="disp-formula" rid="pone.0110452.e002">Eq. 2</xref>
) across all Runs. Although θ
<sub>Local</sub>
sets the maximum tolerated amount of instability in PDL and SPC
<italic>a priori</italic>
, the largest
<italic>observed</italic>
PDL and SPC may in fact be smaller.</p>
</list-item>
<list-item>
<p>“Maximum of Percentage Tempo Drift” (PTD
<sub>max</sub>
): the largest observed “short-term drift” in tempo across all Runs, expressed as a percentage, and calculated as follows. First, within each Run, a series of 10-s windows is defined, with each successive window overlapping half of the previous window. Second, within each window, the best-fitting slope (i.e., linear tempo drift) through the IBeIs is found using least-squares linear regression (Matlab’s
<monospace>polyfit</monospace>
) (highlighted in red in the two example IBeI series shown in
<xref ref-type="fig" rid="pone-0110452-g002">Figure 2</xref>
). Third, for each calculated regression slope, the
<italic>y</italic>
-axis endpoints within window
<italic>w</italic>
are found, and expressed as percentage change (i.e., a “percentage of tempo drift”, PTD). In
<xref ref-type="fig" rid="pone-0110452-g002">Figure 2A</xref>
, for example, the best-fit slope in the 0 to 10 s window rises from
<italic>y</italic>
 = .4997 to
<italic>y</italic>
 = .5029 (yielding PTD = 0.65%), whereas the best-fit slope in the 10 to 20 s window falls from
<italic>y</italic>
 = .5064 to
<italic>y</italic>
 = .4897 (yielding PTD = −3.30%). Finally, PTD
<sub>max</sub>
is taken as the largest absolute value of all PTDs across all Runs. For the IBeI series in
<xref ref-type="fig" rid="pone-0110452-g002">Figure 2A</xref>
, PTD
<sub>max</sub>
 = 3.30%. (A code sketch of this computation follows Figure 2.)</p>
</list-item>
</list>
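<p>As noted in item F above, the following MATLAB sketch makes Summary Statistics D through F concrete. All inputs are hypothetical, and the ETM formula is inferred from its prose definition (Eq. 5 is rendered as an image).</p>
<preformat>
% Sketch of Summary Statistics D-F with hypothetical inputs.
lambda        = 1.0001;              % central tendency (s)
tempoBEATS    = 60 / lambda;         % D: Estimated Tempo = 59.994 bpm
tempoEchoNest = 60.5;                % MSD "tempo" value (hypothetical)
% E: ETM, inferred from its prose definition (signed percentage error)
ETM = 100 * (tempoBEATS - tempoEchoNest) / tempoEchoNest;
% F: Estimated Meter = mean of the per-bar beat counts n_i
r = 0:2:8;                           % bar timestamps (s), hypothetical
b = 0:0.5:7.5;                       % beat timestamps (s), hypothetical
n = histcounts(b, r);                % n_i: beats in [r_i, r_i+1)
estimatedMeter = mean(n);            % exactly 4.0 for a stable meter
</preformat>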
<fig id="pone-0110452-g002" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0110452.g002</object-id>
<label>Figure 2</label>
<caption>
<title>Illustrating the relationship between three measures of temporal instability.</title>
<p>Two permutations of the same set of IBeIs are presented; both have identical central tendency and PDL
<sub>max</sub>
statistics. The IBeI series in (A) exhibits temporal dependency, with gradual transitions from IBeI to IBeI. The IBeI series in (B) exhibits a more stochastic pattern of IBeI transitions. These differences in temporal structure are reflected in the SPC
<sub>max</sub>
and PTD
<sub>max</sub>
statistics.</p>
</caption>
<graphic xlink:href="pone.0110452.g002"></graphic>
</fig>
<p>Importantly, PDL
<sub>max</sub>
, SPC
<sub>max</sub>
, and PTD
<sub>max</sub>
quantify partially independent aspects of temporal instability. The IBeI series in
<xref ref-type="fig" rid="pone-0110452-g002">Figure 2B</xref>
is in fact simply a random reshuffling of the IBeI series in
<xref ref-type="fig" rid="pone-0110452-g002">Figure 2A</xref>
, meaning that the two have identical means ( = 0.50), standard deviations ( = 0.005), and PDL
<sub>max</sub>
( = 2.69%) statistics. Their SPC
<sub>max</sub>
and PTD
<sub>max</sub>
statistics, however, are markedly different (by factors of 4 and 3, respectively). Quantifying these three aspects of temporal instability provides a richer description of each IBeI sequence, as well as of how IBeI sequences differ from one another.</p>
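<p>The PTD computation described in item I can be sketched as follows; the exact window bookkeeping used by BEATS is an assumption, and the Run’s IBeI series is synthetic.</p>
<preformat>
% Sketch of PTD_max within a single Run: 10-s windows with 50% overlap,
% a least-squares line through (time, IBeI) via polyfit, and the fitted
% endpoints expressed as a percentage change.
S_run = 0.5 + 0.003*randn(1,60);     % one Run's IBeIs (s), synthetic
t     = cumsum(S_run);               % beat times (s)
win   = 10;  hop = 5;                % window length and hop (s)
ptd   = [];
for w0 = 0:hop:(t(end) - win)
    in = t >= w0 & t < w0 + win;     % beats inside this window
    p  = polyfit(t(in), S_run(in), 1);         % linear IBeI drift
    y0 = polyval(p, w0);  y1 = polyval(p, w0 + win);
    ptd(end+1) = 100 * (y1 - y0) / y0;         % percent tempo drift
end
PTD_max = max(abs(ptd));
</preformat>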
</sec>
<sec id="s2f">
<title>F. Implementation</title>
<p>To illustrate its various features, BEATS was run on the full Million Song Dataset using Initialization Thresholds of θ
<sub>Local</sub>
 = 5.0%, θ
<sub>Run</sub>
 = 10 s, and θ
<sub>Gap</sub>
 = 2.5 s. (The values of these thresholds, especially θ
<sub>Local</sub>
, should be considered
<italic>illustrative</italic>
rather than
<italic>prescriptive</italic>
; more will be said about this point in Section 1 of the Discussion.)</p>
</sec>
</sec>
<sec id="s3">
<title>Results</title>
<sec id="s3a">
<title>1. Individual Examples</title>
<p>
<xref ref-type="fig" rid="pone-0110452-g003">
<bold>Figure 3</bold>
</xref>
presents four individual MSD audio files that visually highlight one or more of the Summary Statistics. (All files had an Estimated Meter = 
<inline-formula>
<inline-graphic xlink:href="pone.0110452.e009.jpg"></inline-graphic>
</inline-formula>
.) Recordings of each audio file are available for listening via a Spotify URL.</p>
<fig id="pone-0110452-g003" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0110452.g003</object-id>
<label>Figure 3</label>
<caption>
<title>Four examples from the MSD illustrating the calculated Summary Statistics.</title>
<p>IBeIs (
<italic>y</italic>
-axis) are plotted as a function of real time (
<italic>x</italic>
-axis). The central tendency (λ) of each IBeI distribution is obtained via adaptive KDE (right subpanel), plotted in blue. Slopes used to calculate PTD
<sub>max</sub>
statistics are highlighted in red. The final Stable Segment (bridged across Gaps) is highlighted in green circles. Spotify URLs can be suffixed to
<ext-link ext-link-type="uri" xlink:href="https://play.spotify.com/track/">https://play.spotify.com/track/</ext-link>
for listening.</p>
</caption>
<graphic xlink:href="pone.0110452.g003"></graphic>
</fig>
<p>In
<xref ref-type="fig" rid="pone-0110452-g003">Figure 3A</xref>
, the entire audio file consists of a repeating (looped) four-beat percussion riff. The IBeI series is highly regular, with nearly all successive IBeI differences being less than 2 ms. This audio file represents an “ideal” case: near-perfect isochrony from the first beat to the last, yielding very low values for the three Summary Statistics that quantify IBeI variability (PDL
<sub>max</sub>
, SPC
<sub>max</sub>
, and PTD
<sub>max</sub>
), as well as excellent agreement between BEATS’ Estimated Tempo and Echo Nest’s
<monospace>tempo</monospace>
estimate (a difference of less than one-tenth of 1%).</p>
<p>In
<xref ref-type="fig" rid="pone-0110452-g003">Figure 3B</xref>
, the audio file begins with a complex rhythm, to which a simple drum-and-cymbal rhythm (at approximately 150 bpm) at a higher frequency (pitch) and intensity (loudness) is added at the 13-s mark. This simple rhythm is removed at the 110-s mark, reintroduced at the 116-s mark, and remains in place until the end of the file at 199 s. It is this simple rhythm that drives the output of the Analyze beat detection algorithm. As such, the 94-s Stable Segment (identified by BEATS) is the longer of the two segments at that same tempo (the other being roughly 83 s). Within the Stable Segment, most IBeIs differ by only a few ms (similar to
<xref ref-type="fig" rid="pone-0110452-g003">Figure 3A</xref>
), yielding low values for the IBeI variability statistics. However, although the estimates of tempo by BEATS and Echo Nest again show excellent agreement, using the
<italic>entire</italic>
audio file in a motor synchronization paradigm (rather than just the Stable Segment) may prove challenging for some patients.</p>
<p>In
<xref ref-type="fig" rid="pone-0110452-g003">Figure 3C</xref>
, the Stable Segment is comprised of four distinct Runs bridged across three Gaps (at roughly 40 s, 77 s, and 160 s) that emerge as a consequence of unexpected syncopations in the voice (Gaps 1 and 2) or electric bass (Gap 3). PDL
<sub>max</sub>
and SPC
<sub>max</sub>
both have higher values than in the previous two examples, which might be expected as this audio file was recorded in a studio with session musicians (as opposed to synthesized on a computer, like the excerpts highlighted in
<xref ref-type="fig" rid="pone-0110452-g002">Figures 2A and 2B</xref>
)
<xref rid="pone.0110452-AllMusic1" ref-type="bibr">[92]</xref>
.</p>
<p>In
<xref ref-type="fig" rid="pone-0110452-g003">Figure 3D</xref>
, the
<italic>accelerando</italic>
for which the piece is famous is clearly visible in the IBeI plot; such an acoustic feature would, in theory, make for poor temporal stability. BEATS, however, was able to identify a 61-s Stable Segment in which the tempo accelerated in increments of less than 5% (as quantified by the “Maximum of Percentage Tempo Drift” statistic, PTD
<sub>max</sub>
).</p>
<p>Another feature of this IBeI series is notable. Although the
<italic>perceptual</italic>
tempo of the audio file continues to accelerate throughout its second half, the detected IBeI series (which had been tracking the quarter-note pulse) dramatically shifts from 0.42 s (at the 113-s mark) to 0.74 s (by the 116-s mark). Listening to the recording itself reveals a prominent change in timbre and intensity with the introduction of the chorus (and its strong accents on
<italic>alternating</italic>
quarter notes) at this point in the musical score (i.e., bar 49 in
<xref rid="pone.0110452-Grieg1" ref-type="bibr">[93]</xref>
). Although this musical event falls outside the Stable Segment, it raises an important point about the intimate dependency of BEATS on the beat tracking algorithm from which it takes its input data–a point detailed further in Section 1 of the Discussion.</p>
</sec>
<sec id="s3b">
<title>2. Static Presentation of Summary Statistics</title>
<p>
<xref ref-type="fig" rid="pone-0110452-g004">
<bold>Figure 4</bold>
</xref>
presents a histogram (with log
<sub>2</sub>
spacing along the
<italic>y</italic>
-axis for visual clarity) for each Summary Statistic. The number of files actually summarized in
<xref ref-type="fig" rid="pone-0110452-g004">Figure 4</xref>
is 971,278; the remaining files (i.e., 2.9% of the full MSD) did not have an identifiable Stable Segment which satisfied the Run Duration Threshold (i.e., were found to have less than 10 s of temporal stability).</p>
<fig id="pone-0110452-g004" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0110452.g004</object-id>
<label>Figure 4</label>
<caption>
<title>Histogram summaries of the nine Summary Statistics across the Million Song Dataset (
<italic>N</italic>
 = 971,278), using log
<sub>2</sub>
scaling along the
<italic>y</italic>
-axis to enhance visibility.</title>
<p>Labels “A” through “I” correspond to the order in which Summary Statistics were defined in Section E of the
<xref ref-type="sec" rid="s2">Methods</xref>
.</p>
</caption>
<graphic xlink:href="pone.0110452.g004"></graphic>
</fig>
<p>An immediate question of interest concerns the agreement in “average” tempo as estimated by BEATS (
<inline-formula>
<inline-graphic xlink:href="pone.0110452.e010.jpg"></inline-graphic>
</inline-formula>
) and Echo Nest (
<inline-formula>
<inline-graphic xlink:href="pone.0110452.e011.jpg"></inline-graphic>
</inline-formula>
). As revealed in
<xref ref-type="fig" rid="pone-0110452-g004">
<bold>Figure 4E</bold>
</xref>
, this match was generally quite high: 95% of all ETM percentage values fell within the interval [–2.20, 1.69]. That a vast majority of
<inline-formula>
<inline-graphic xlink:href="pone.0110452.e012.jpg"></inline-graphic>
</inline-formula>
values differed from their
<inline-formula>
<inline-graphic xlink:href="pone.0110452.e013.jpg"></inline-graphic>
</inline-formula>
counterparts by less than the just-noticeable difference for changes in tempo in isochronous IBeI sequences (cf. Section 1 of the Introduction) would seem, at first blush, to eliminate the need for BEATS entirely. Critically, however, agreement in terms of “average” tempo is only one piece of the puzzle, as it does not address whether (and over what portion of the audio file) that tempo is
<italic>stable</italic>
–thus making that value statistically valid and experimentally useful.</p>
<p>In fact, Stable Percentage values (i.e., the percentage of each file’s duration that consisted of temporally stable Runs separated by temporally unstable Gaps of no more than 2.5 s) varied widely across the MSD, as revealed in
<xref ref-type="fig" rid="pone-0110452-g004">
<bold>Figure 4B</bold>
</xref>
. Less than 22% of MSD files (
<italic>N</italic>
 = 214,540) yielded a Stable Percentage = 100 (i.e., indicating temporal stability from the first detected beat to the last). This result has important consequences for “unsupervised” tempo-based playlist generation algorithms (e.g.,
<xref rid="pone.0110452-DelEtoile1" ref-type="bibr">[52]</xref>
<xref rid="pone.0110452-Wittwer1" ref-type="bibr">[54]</xref>
): only a fraction of audio files actually
<italic>maintain</italic>
their nominal tempo (i.e., their Echo Nest
<monospace>tempo</monospace>
estimate) over their entire duration.</p>
<p>By contrast, if a user simply requires music that is temporally stable over a
<italic>minimum</italic>
duration (say, 90 s; useful for short gait training episodes or bouts of rhythmic exercise between rest periods) rather than its
<italic>entire</italic>
duration, a more optimistic picture emerges. As highlighted in
<xref ref-type="fig" rid="pone-0110452-g004">
<bold>Figure 4A</bold>
</xref>
, 61% of MSD files (
<italic>N</italic>
 = 609,676) have a Stable Duration ≥ 90 s, nearly three times the number of MSD files that have a Stable Percentage = 100. Allowing BEATS to identify the Stable Segment within each audio file (rather than using the entire audio file
<italic>a priori</italic>
) yields a greater number of files that could be utilized in tempo-based playlists.</p>
<p>With respect to meter, agreement between BEATS and Echo Nest was very high, as highlighted in
<xref ref-type="fig" rid="pone-0110452-g004">
<bold>Figure 4F</bold>
:</xref>
for 99.6% of files (
<italic>N</italic>
 = 967,226), the two estimates matched exactly (e.g.,
<monospace>time_signature</monospace>
 = 4 and Estimated Meter = 
<inline-formula>
<inline-graphic xlink:href="pone.0110452.e014.jpg"></inline-graphic>
</inline-formula>
). An unexpected result, however, also emerged: a substantial number of audio files (
<italic>N</italic>
 = 21,412) yielded an Estimated Meter = 
<inline-formula>
<inline-graphic xlink:href="pone.0110452.e015.jpg"></inline-graphic>
</inline-formula>
. (This number was reduced to 11,164 when excluding audio files with a Stable Duration of less than 60 s.) This “odd” result was confirmed when comparing the
<monospace>time_signature</monospace>
statistic (i.e., Echo Nest’s own meter estimation) for these files; agreement was found in all cases. A cursory listening of these audio files revealed that the Estimated Meter value was, not surprisingly, inaccurate. Identifying misclassifications such as these will provide important “grist” to refine future beat tracking algorithms, a point further elaborated upon in Section 2 of the Discussion.</p>
<p>A final question pertains to correlations among the three Summary Statistics that most directly quantify the stability of an IBeI series: IBeI deviations from λ (PDL
<sub>max</sub>
), successive changes between IBeIs (SPC
<sub>max</sub>
), and IBeI drift within Runs (PTD
<sub>max</sub>
).
<xref ref-type="fig" rid="pone-0110452-g005">
<bold>Figure 5</bold>
</xref>
provides the answer, using scatter plots to visualize pairwise relationships between these three variables for the 609,676 MSD files with a Stable Duration ≥ 90 s. (This threshold was applied so that the scatter plot relationships would be less biased by Summary Statistics calculated from short excerpts of music.) Although the correlation between each pair of variables is positive (and “very” statistically significant given the large number of observations), it is clear that any one variable captures only a portion of what it means to be “temporally stable”.</p>
<fig id="pone-0110452-g005" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0110452.g005</object-id>
<label>Figure 5</label>
<caption>
<title>Pairwise scatter plot relationships (with associated Spearman correlation ρ values) for three BEATS Summary Statistics that quantify the stability of an IBeI series: PDL
<sub>max</sub>
, SPC
<sub>max</sub>
, and PTD
<sub>max</sub>
.</title>
</caption>
<graphic xlink:href="pone.0110452.g005"></graphic>
</fig>
</sec>
<sec id="s3c">
<title>3. Interactive Exploration of Summary Statistics</title>
<p>To more effectively interact with (and benefit from) the full set of Summary Statistics, an interactive tool is required. To this end, a LAMP-based (Linux, Apache, MySQL, PHP) web interface was developed. This interface, termed iBEATS (with a permanent URL at
<ext-link ext-link-type="uri" xlink:href="http://ibeats.smcnus.org/">http://ibeats.smcnus.org/</ext-link>
), integrates the full output of BEATS with three other valuable pieces of metadata: artist name, album release year, and descriptive genre tags.</p>
<p>For each item in the MSD, album release year was obtained by querying the 7digital application programming interface (API) (
<ext-link ext-link-type="uri" xlink:href="http://developer.7digital.com">http://developer.7digital.com</ext-link>
) using the MSD variable
<monospace>release_7digitalid</monospace>
. This yielded a total of 930,852 matches, a significant improvement upon the 515,576 files with a non-zero value in the MSD
<monospace>year</monospace>
variable
<xref rid="pone.0110452-BertinMahieux1" ref-type="bibr">[83]</xref>
.</p>
<p>For each unique artist in the MSD, a set of descriptive terms was pulled (MSD variable
<monospace>artist_terms</monospace>
) covering high-level genres (e.g., “rock”, “electronic”, “heavy metal”), specific subgenres (e.g., “garage rock”, “deep house”, “progressive metal”), broad geographic descriptors (“brazilian”, “french”, “swedish”), and specific regional influences (e.g., “brazilian pop”, “french rap”, “swedish hip hop”); up to 10 terms with an
<monospace>artist_terms_weight</monospace>
≥ 0.5 for that particular artist were retained. The weight statistic, with values ranging from 0 to 1, reflects how descriptive a given term is with respect to the artist in question (as proprietarily determined by Echo Nest; cf.
<xref rid="pone.0110452-Lamere1" ref-type="bibr">[94]</xref>
), similar to a
<italic>term frequency-inverse document frequency</italic>
statistic.
<xref ref-type="table" rid="pone-0110452-t001">
<bold>Table 1</bold>
</xref>
lists the 20 most frequently encountered artist terms in the MSD, tallying the number of artists and the number of songs associated with each term. (The Spearman correlation between these two item counts is ρ = .966 for the 1,080 terms associated with at least 10 unique artists in the MSD.) The final number of MSD items with valid tag data, year data, and a Stable Segment of at least 10 s was 902,081.</p>
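<p>The term-selection rule just described amounts to a simple sort-and-filter, sketched below with hypothetical terms and weights (the actual data come from the <monospace>artist_terms</monospace> and <monospace>artist_terms_weight</monospace> variables).</p>
<preformat>
% Sketch: keep up to 10 terms per artist with weight >= 0.5.
terms   = {'rock', 'garage rock', 'psychedelic', 'british invasion'};
weights = [1.00, 0.47, 0.82, 0.61];       % hypothetical weights
[w, ix] = sort(weights, 'descend');       % heaviest terms first
ix      = ix(w >= 0.5);                   % drop weights below 0.5
kept    = terms(ix(1:min(10, numel(ix))));
</preformat>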
<table-wrap id="pone-0110452-t001" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0110452.t001</object-id>
<label>Table 1</label>
<caption>
<title>The 20 most frequent
<monospace>artist_terms</monospace>
included in the Million Song Dataset.</title>
</caption>
<alternatives>
<graphic id="pone-0110452-t001-1" xlink:href="pone.0110452.t001"></graphic>
<table frame="hsides" rules="groups">
<colgroup span="1">
<col align="left" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
</colgroup>
<thead>
<tr>
<td align="left" rowspan="1" colspan="1">Rank</td>
<td align="left" rowspan="1" colspan="1">Term</td>
<td align="left" rowspan="1" colspan="1">Number of artists</td>
<td align="left" rowspan="1" colspan="1">Number of songs</td>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">1</td>
<td align="left" rowspan="1" colspan="1">rock</td>
<td align="left" rowspan="1" colspan="1">13276</td>
<td align="left" rowspan="1" colspan="1">334709</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">2</td>
<td align="left" rowspan="1" colspan="1">electronic</td>
<td align="left" rowspan="1" colspan="1">10684</td>
<td align="left" rowspan="1" colspan="1">182981</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">3</td>
<td align="left" rowspan="1" colspan="1">pop rock</td>
<td align="left" rowspan="1" colspan="1">6455</td>
<td align="left" rowspan="1" colspan="1">185476</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">4</td>
<td align="left" rowspan="1" colspan="1">hip hop</td>
<td align="left" rowspan="1" colspan="1">6287</td>
<td align="left" rowspan="1" colspan="1">134748</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">5</td>
<td align="left" rowspan="1" colspan="1">electro</td>
<td align="left" rowspan="1" colspan="1">4921</td>
<td align="left" rowspan="1" colspan="1">88383</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">6</td>
<td align="left" rowspan="1" colspan="1">pop</td>
<td align="left" rowspan="1" colspan="1">4823</td>
<td align="left" rowspan="1" colspan="1">124291</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">7</td>
<td align="left" rowspan="1" colspan="1">indie rock</td>
<td align="left" rowspan="1" colspan="1">4699</td>
<td align="left" rowspan="1" colspan="1">102716</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">8</td>
<td align="left" rowspan="1" colspan="1">downtempo</td>
<td align="left" rowspan="1" colspan="1">4444</td>
<td align="left" rowspan="1" colspan="1">99307</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">9</td>
<td align="left" rowspan="1" colspan="1">disco</td>
<td align="left" rowspan="1" colspan="1">4241</td>
<td align="left" rowspan="1" colspan="1">104308</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">10</td>
<td align="left" rowspan="1" colspan="1">jazz</td>
<td align="left" rowspan="1" colspan="1">4192</td>
<td align="left" rowspan="1" colspan="1">117261</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">11</td>
<td align="left" rowspan="1" colspan="1">techno</td>
<td align="left" rowspan="1" colspan="1">4163</td>
<td align="left" rowspan="1" colspan="1">71281</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">12</td>
<td align="left" rowspan="1" colspan="1">alternative rock</td>
<td align="left" rowspan="1" colspan="1">4117</td>
<td align="left" rowspan="1" colspan="1">98359</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">13</td>
<td align="left" rowspan="1" colspan="1">tech house</td>
<td align="left" rowspan="1" colspan="1">3930</td>
<td align="left" rowspan="1" colspan="1">71697</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">14</td>
<td align="left" rowspan="1" colspan="1">trance</td>
<td align="left" rowspan="1" colspan="1">3504</td>
<td align="left" rowspan="1" colspan="1">53061</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">15</td>
<td align="left" rowspan="1" colspan="1">chill-out</td>
<td align="left" rowspan="1" colspan="1">3440</td>
<td align="left" rowspan="1" colspan="1">89033</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">16</td>
<td align="left" rowspan="1" colspan="1">folk rock</td>
<td align="left" rowspan="1" colspan="1">3283</td>
<td align="left" rowspan="1" colspan="1">91270</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">17</td>
<td align="left" rowspan="1" colspan="1">ballad</td>
<td align="left" rowspan="1" colspan="1">3228</td>
<td align="left" rowspan="1" colspan="1">111634</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">18</td>
<td align="left" rowspan="1" colspan="1">progressive house</td>
<td align="left" rowspan="1" colspan="1">3223</td>
<td align="left" rowspan="1" colspan="1">60806</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">19</td>
<td align="left" rowspan="1" colspan="1">deep house</td>
<td align="left" rowspan="1" colspan="1">3179</td>
<td align="left" rowspan="1" colspan="1">62626</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">20</td>
<td align="left" rowspan="1" colspan="1">blues</td>
<td align="left" rowspan="1" colspan="1">2905</td>
<td align="left" rowspan="1" colspan="1">84925</td>
</tr>
</tbody>
</table>
</alternatives>
</table-wrap>
<p>
<xref ref-type="fig" rid="pone-0110452-g006">
<bold>Figure 6</bold>
</xref>
presents a screenshot of an iBEATS query. The nine Summary Statistics are visualized using histograms, similar to
<xref ref-type="fig" rid="pone-0110452-g002">Figure 2</xref>
, and can be re-thresholded at liberty. To facilitate users’ ability to navigate musical space, 952 distinct artist terms were mapped onto one of two browsable, two-level hierarchies: one covering genre/style (with organization derived in part from
<ext-link ext-link-type="uri" xlink:href="http://www.allmusic.com/genres">www.allmusic.com/genres</ext-link>
; e.g., “garage rock” is mapped to
<italic>Rock</italic>
<italic>Psychedelic/Garage</italic>
), and the other covering geography (roughly corresponding to continent and country; e.g., the term “suomi rock” is mapped to
<italic>Europe, Northern</italic>
<italic>Finland</italic>
). Additionally, specific artist names may be retrieved using text-based auto-completion (e.g., “ab” retrieves both
<italic>ABBA</italic>
and
<italic>Abbott & Costello</italic>
as options).</p>
<fig id="pone-0110452-g006" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0110452.g006</object-id>
<label>Figure 6</label>
<caption>
<title>The iBEATS website (
<ext-link ext-link-type="uri" xlink:href="http://ibeats.smcnus.org/">http://ibeats.smcnus.org/</ext-link>
).</title>
<p>The nine Summary Statistics are visualized using histograms (1). The user queries iBEATS by adjusting the numeric thresholds, browsing a two-level hierarchy of
<italic>Genre/Style</italic>
and
<italic>Geography</italic>
terms (2), and/or direct input to the
<italic>Artist Name</italic>
field (3). Filtering (4) reveals the number of candidate songs satisfying the query, which may then be further examined (5) and an audio sample previewed (6). The candidate playlist may then be exported (7) for subsequent use by a streaming music service (e.g., Spotify).</p>
</caption>
<graphic xlink:href="pone.0110452.g006"></graphic>
</fig>
<p>In the example shown in
<xref ref-type="fig" rid="pone-0110452-g006">Figure 6</xref>
, a playlist has been created for a hypothetical patient about to begin a gait rehabilitation paradigm. The following input parameters were used: all Rock genre songs from 1950 to the present, with a Stable Duration ≥ 90 s, Estimated Tempo between 115 and 125 bpm, Estimated Meter = 
<inline-formula>
<inline-graphic xlink:href="pone.0110452.e016.jpg"></inline-graphic>
</inline-formula>
, and PDL
<sub>max</sub>
, SPC
<sub>max</sub>
, and PTD
<sub>max</sub>
all ≤ 5.0%. A total of 19,725 audio files from the MSD satisfy this query and are returned in a pop-up window; where available, 30-s audio previews are provided via Echo Nest’s integration with 7digital audio previews
<xref rid="pone.0110452-TheEcho1" ref-type="bibr">[95]</xref>
. (Note that the number of available files for a particular query is
<italic>scalable</italic>
: as BEATS expands further into the 35-million-item Echo Nest catalog of metadata, so too does the number of candidate songs satisfying that query.) The final, customized playlist (including, importantly, the starting and stopping time indices demarking the Stable Segment) may then be exported for subsequent handling by a streaming music player (e.g., Spotify;
<ext-link ext-link-type="uri" xlink:href="http://www.spotify.com">www.spotify.com</ext-link>
), as described further in Section 2 of the Discussion.</p>
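<p>Conceptually, the query shown in Figure 6 reduces to threshold filtering over the table of Summary Statistics. A minimal MATLAB sketch follows; the table <monospace>stats</monospace> and its column names are hypothetical (iBEATS itself issues equivalent queries against its MySQL backend).</p>
<preformat>
% Sketch of the Figure 6 query as logical filtering over a
% hypothetical table of Summary Statistics.
% (Genre and release-year filters would be applied similarly.)
q = stats.StableDuration >= 90 & ...
    stats.EstimatedTempo >= 115 & stats.EstimatedTempo <= 125 & ...
    stats.PDLmax <= 5 & stats.SPCmax <= 5 & stats.PTDmax <= 5;
playlist = stats(q, :);       % candidate songs satisfying the query
</preformat>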
</sec>
</sec>
<sec id="s4">
<title>Discussion</title>
<p>Although many widely used beat tracking or tempo extraction algorithms, front-end software interfaces, and back-end metadata service providers offer point estimate statistics for the “average” tempo of an audio file, none has sought to systematically quantify the amount of
<italic>temporal instability</italic>
within an inter-beat interval (IBeI) series. Such an analysis is, we propose, acutely necessary to accurately design playlists for motor rehabilitation or rhythmic exercise paradigms, for which a stable beat is a prerequisite feature.</p>
<p>The proposed analysis tool, a “Balanced Evaluation of Auditory Temporal Stability” (BEATS), seeks to fill this need. The ultimate utility of BEATS, however, rests on (at least) two important caveats. The first caveat concerns the accuracy of the beat tracking algorithm; the second concerns the choice of thresholds used to define the Stable Segment.</p>
<sec id="s4a">
<title>1. Caveats</title>
<p>A first caveat, as noted in the Introduction, is that BEATS possesses no beat tracking capabilities itself; its raw material is a vector of beat and barline timestamps previously detected by an external algorithm. For this reason, the idiosyncrasies of a particular beat tracking algorithm (or a systematic difference between two “competing” algorithms) will necessarily be reflected in whether and where BEATS identifies a Stable Segment of IBeIs. An algorithm’s beat tracking performance can be affected by both temporal (e.g., a complex rhythm loop) and non-temporal (e.g., recording quality) features of an audio file; examples of this were highlighted in
<xref ref-type="fig" rid="pone-0110452-g003">Figure 3</xref>
and detailed in Section 1 of the Results.</p>
<p>Although this fact may make BEATS
<italic>conservative</italic>
(in that some audio files will be deemed to lack a Stable Segment of a “useful” minimum duration if many Gaps are present), such conservativeness may be beneficial in practice, as it will exclude pieces of music that may in fact be too challenging for listeners to synchronize with. (An ever-larger library of processed audio files will, of course, mitigate this conservativeness.) Indeed, the relationship between how a beat tracking algorithm performs and how listeners
<italic>themselves</italic>
perform when given a beat tracking task continues to drive developments in the field
<xref rid="pone.0110452-Klapuri1" ref-type="bibr">[79]</xref>
,
<xref rid="pone.0110452-Ellis3" ref-type="bibr">[96]</xref>
<xref rid="pone.0110452-Peeters1" ref-type="bibr">[99]</xref>
. The more closely an algorithm mimics human perception with respect to how it responds to temporal instability, the higher the quality of the Summary Statistics calculated by BEATS.</p>
<p>A second caveat is that the output of BEATS depends heavily on the choice of its Initialization Thresholds (cf. Section 3 of the
<xref ref-type="sec" rid="s2">Methods</xref>
): the Local Stability Threshold (θ
<sub>Local</sub>
), Run Duration Threshold (θ
<sub>Run</sub>
), and Gap Duration Threshold (θ
<sub>Gap</sub>
). Of these three, θ
<sub>Local</sub>
perhaps has the strongest influence over the likelihood of finding a Stable Segment with a “useable” duration (e.g., ≥ 90 s). In the present report, a value of θ
<sub>Local</sub>
 = 5.0% was selected. This value was chosen after a careful examination of the literature exploring just-noticeable differences (JNDs) within and between auditory temporal patterns (cf. Section 1 of the Introduction), and after determining that no prior reported threshold satisfied the constraints of the current project. Thus, the pattern of Summary Statistics obtained using θ
<sub>Local</sub>
 = 5.0% should be taken as illustrative rather than prescriptive. A conservative θ
<sub>Local</sub>
value (e.g., 1.0%) would certainly
<italic>decrease</italic>
the number of available audio files with a useable Stable Duration, but at the same time
<italic>increase</italic>
the confidence that any audio files that “made the cut” were truly perceptually stable. Ultimately, adjusting both the Initialization Thresholds
<italic>and</italic>
the musical content (genre, artist, decade) to suit the needs and preferences of each target user (and the goals of the accompanying motor task) would seem the most prudent choice.</p>
</sec>
<sec id="s4b">
<title>2. Future Directions</title>
<p>The primary aim of BEATS and iBEATS is to provide accurate statistics about tempo stability in a large collection of audio files, and to make that information easily accessible to users. Increasing the size of BEATS’ library (via access to Echo Nest metadata) to provide a greater collection of potential music stimuli is planned for the immediate future. Additionally, as noted by a reviewer, the manner in which genre/style terms are made available to a user
<italic>by</italic>
iBEATS may be as important as the statistics a user is hoping to obtain
<italic>from</italic>
iBEATS. Providing additional tools for musical “navigation” would offer enhanced accessibility and, in turn, widen the potential user base.</p>
<p>Although iBEATS itself is not viable as a means of delivering a rhythmic auditory cueing paradigm, we plan to author a mobile application that would (1) take a user’s input (artist, genre, tempo range, tempo stability thresholds, etc.); (2) query BEATS and obtain a candidate playlist; and (3) deliver that playlist using existing APIs authored by licensed streaming music services such as Deezer (
<ext-link ext-link-type="uri" xlink:href="http://developers.deezer.com/">http://developers.deezer.com/</ext-link>
), Rdio (
<ext-link ext-link-type="uri" xlink:href="http://www.rdio.com/developers/">http://www.rdio.com/developers/</ext-link>
), or Spotify (
<ext-link ext-link-type="uri" xlink:href="https://developer.spotify.com/">https://developer.spotify.com/</ext-link>
). The ability to pair iBEATS with other mobile applications would offer novel ways to discover music; for example, by identifying a segment of audio using a music identification service (e.g., Shazam;
<ext-link ext-link-type="uri" xlink:href="http://www.shazam.com/">http://www.shazam.com/</ext-link>
) and then using BEATS to find music with similar temporal characteristics (a form of “query by example”; cf.
<xref rid="pone.0110452-Wang1" ref-type="bibr">[100]</xref>
), or by utilizing a touchscreen-based “query by tapping” (cf.
<xref rid="pone.0110452-Jang1" ref-type="bibr">[101]</xref>
) to more intuitively capture the desired movement rate.</p>
<p>In another vein, concurrent work from our laboratory
<xref rid="pone.0110452-Zhu1" ref-type="bibr">[102]</xref>
has sought to validate a mobile application to quantify the basic temporal dynamics of human gait in both healthy adults and Parkinson’s patients. A subject’s cadence (i.e., number of steps per minute) could then itself be used as an input parameter, creating a “query by walking” paradigm (which, although proposed previously
<xref rid="pone.0110452-Yi1" ref-type="bibr">[87]</xref>
, has yet to be explored within the music information retrieval literature).</p>
</sec>
<sec id="s4c">
<title>3. Current Applications</title>
<p>Besides these future enhancements for “front end” users, current researchers may already benefit from BEATS. For researchers seeking to improve beat tracking algorithms, for example, BEATS could be used to identify audio files with “strange” IBeI patterns (e.g.,
<xref ref-type="fig" rid="pone-0110452-g003">Figure 3D</xref>
) that may reflect an inherent limitation of a certain beat tracking algorithm, or to find those audio files with a sizable Estimated Tempo Mismatch (cf.
<xref ref-type="fig" rid="pone-0110452-g004">Figure 4E</xref>
).</p>
<p>BEATS could also prove useful with respect to identifying an algorithm’s misclassifications of meter (e.g.,
<xref rid="pone.0110452-Tomic1" ref-type="bibr">[103]</xref>
) or tempo “octave” (e.g.
<xref rid="pone.0110452-McKinney2" ref-type="bibr">[104]</xref>
). Because the Stable Segment identified by BEATS within a given audio file possesses, by definition, a repeating acoustic pattern at some
<italic>rhythmic</italic>
level (e.g., eighth note), only a brief portion of the Stable Segment should be necessary for a human annotator to (1) indicate (i.e., tap) the
<italic>pulse</italic>
level (e.g., eighth note, quarter note, half note) they felt was most natural and (2) indicate whether the meter estimated by the algorithm (e.g., 3, 4) agreed with their own perceptions. This “accelerated” annotation process would greatly reduce the labor required to confirm these important statistics and identify misclassifications (e.g., the suspiciously high number of audio files with an “Estimated Meter = 
<inline-formula>
<inline-graphic xlink:href="pone.0110452.e017.jpg"></inline-graphic>
</inline-formula>
”, as noted in Section 2 of the Results). Such audio files would provide an immediate set of
<italic>diagnostic stimuli</italic>
that could be used to compare how beat tracking algorithms (particularly those informed by computational, psychological, and neurobiological models of how human listeners track patterns in time; for recent comprehensive reviews, see
<xref rid="pone.0110452-Large1" ref-type="bibr">[12]</xref>
<xref rid="pone.0110452-Repp1" ref-type="bibr">[14]</xref>
,
<xref rid="pone.0110452-Grondin3" ref-type="bibr">[105]</xref>
,
<xref rid="pone.0110452-Patel1" ref-type="bibr">[106]</xref>
) perform relative to listeners’ ground-truth tapping annotations. Fusing “bottom-up, data-driven” retrieval methods with “top-down, knowledge-based” models of human perception, cognition, and emotion remains a key focus for the field of music information retrieval (e.g.,
<xref rid="pone.0110452-Hausdorff3" ref-type="bibr">[43]</xref>
,
<xref rid="pone.0110452-BertinMahieux1" ref-type="bibr">[83]</xref>
<xref rid="pone.0110452-Li1" ref-type="bibr">[86]</xref>
).</p>
</sec>
</sec>
<sec id="s5">
<title>Conclusion</title>
<p>We present a novel tool to quantify auditory temporal stability in recorded music (BEATS). An important departure that BEATS makes from other methods is that it seeks to identify the most temporally stable segment
<italic>within</italic>
an audio file’s inter-beat interval (IBeI) series, rather than derive a point estimate of tempo for the
<italic>entire</italic>
IBeI series. This increased flexibility enables BEATS to identify a greater number of candidate audio files for use in tempo-based music playlists. An online interface for this analysis tool, iBEATS (
<ext-link ext-link-type="uri" xlink:href="http://ibeats.smcnus.org/">http://ibeats.smcnus.org/</ext-link>
), offers straightforward visualizations, flexible parameter settings, and text-based query options for any combination of artist name, album release year, and descriptive genre/style terms. Together, BEATS and iBEATS aim to provide a wide user base (clinicians, therapists, caregivers, and exercise enthusiasts) with a new means to efficiently and effectively create highly personalized music playlists for clinical (e.g., gait rehabilitation) or recreational (e.g., rhythmic exercise) applications.</p>
</sec>
</body>
<back>
<ack>
<p>We thank Graham Percival and Zhonghua Li for fruitful discussions regarding this project, and Zhuohong Cai for much of the foundational programming.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="pone.0110452-Wikipedia1">
<label>1</label>
<mixed-citation publication-type="other">Wikipedia (2014) List of online music databases. Available:
<ext-link ext-link-type="uri" xlink:href="http://en.wikipedia.org/wiki/List_of_online_music_databases">http://en.wikipedia.org/wiki/List_of_online_music_databases</ext-link>
. Accessed 1 July 2014.</mixed-citation>
</ref>
<ref id="pone.0110452-Wikipedia2">
<label>2</label>
<mixed-citation publication-type="other">Wikipedia (2014) Comparison of online music stores. Available:
<ext-link ext-link-type="uri" xlink:href="http://en.wikipedia.org/wiki/Comparison_of_online_music_stores">http://en.wikipedia.org/wiki/Comparison_of_online_music_stores</ext-link>
. Accessed 1 July 2014.</mixed-citation>
</ref>
<ref id="pone.0110452-Wikipedia3">
<label>3</label>
<mixed-citation publication-type="other">Wikipedia (2014) Comparison of on-demand streaming music services. Available:
<ext-link ext-link-type="uri" xlink:href="http://en.wikipedia.org/wiki/Comparison_of_on-demand_streaming_music_services">http://en.wikipedia.org/wiki/Comparison_of_on-demand_streaming_music_services</ext-link>
. Accessed 1 July 2014.</mixed-citation>
</ref>
<ref id="pone.0110452-Nettl1">
<label>4</label>
<mixed-citation publication-type="other">Nettl B (2000) An ethnomusicologist contemplates universals in musical sound and musical culture. In: Wallin B, Merker B, Brown Seditors. The origins of music. Cambridge, MA: MIT Press. pp. 463–472.</mixed-citation>
</ref>
<ref id="pone.0110452-Merker1">
<label>5</label>
<mixed-citation publication-type="journal">
<name>
<surname>Merker</surname>
<given-names>BH</given-names>
</name>
,
<name>
<surname>Madison</surname>
<given-names>GS</given-names>
</name>
,
<name>
<surname>Eckerdal</surname>
<given-names>P</given-names>
</name>
(
<year>2009</year>
)
<article-title>On the role and origin of isochrony in human rhythmic entrainment</article-title>
.
<source>Cortex</source>
<volume>45</volume>
:
<fpage>4</fpage>
<lpage>17</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.cortex.2008.06.011">10.1016/j.cortex.2008.06.011</ext-link>
</comment>
<pub-id pub-id-type="pmid">19046745</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0110452-Winkler1">
<label>6</label>
<mixed-citation publication-type="journal">
<name>
<surname>Winkler</surname>
<given-names>I</given-names>
</name>
,
<name>
<surname>Háden</surname>
<given-names>GP</given-names>
</name>
,
<name>
<surname>Ladinig</surname>
<given-names>O</given-names>
</name>
,
<name>
<surname>Sziller</surname>
<given-names>I</given-names>
</name>
,
<name>
<surname>Honing</surname>
<given-names>H</given-names>
</name>
(
<year>2009</year>
)
<article-title>Newborn infants detect the beat in music</article-title>
.
<source>Proc Natl Acad Sci</source>
<volume>106</volume>
:
<fpage>2468</fpage>
<lpage>2471</lpage>
.
<pub-id pub-id-type="pmid">19171894</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0110452-Zentner1">
<label>7</label>
<mixed-citation publication-type="journal">
<name>
<surname>Zentner</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Eerola</surname>
<given-names>T</given-names>
</name>
(
<year>2010</year>
)
<article-title>Rhythmic engagement with music in infancy</article-title>
.
<source>Proc Natl Acad Sci U S A</source>
<volume>107</volume>
:
<fpage>5768</fpage>
<lpage>5773</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1073/pnas.1000121107">10.1073/pnas.1000121107</ext-link>
</comment>
<pub-id pub-id-type="pmid">20231438</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0110452-Ellis1">
<label>8</label>
<mixed-citation publication-type="journal">
<name>
<surname>Ellis</surname>
<given-names>RJ</given-names>
</name>
,
<name>
<surname>Jones</surname>
<given-names>MR</given-names>
</name>
(
<year>2010</year>
)
<article-title>Rhythmic context modulates foreperiod effects</article-title>
.
<source>Atten Percept Psychophys</source>
<volume>72</volume>
:
<fpage>2274</fpage>
<lpage>2288</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.3758/APP.72.8.2274">10.3758/APP.72.8.2274</ext-link>
</comment>
<pub-id pub-id-type="pmid">21097869</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0110452-Honing1">
<label>9</label>
<mixed-citation publication-type="journal">
<name>
<surname>Honing</surname>
<given-names>H</given-names>
</name>
(
<year>2012</year>
)
<article-title>Without it no music: beat induction as a fundamental musical trait</article-title>
.
<source>Ann N Y Acad Sci</source>
<volume>1252</volume>
:
<fpage>85</fpage>
<lpage>91</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1111/j.1749-6632.2011.06402.x">10.1111/j.1749-6632.2011.06402.x</ext-link>
</comment>
<pub-id pub-id-type="pmid">22524344</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0110452-Janata1">
<label>10</label>
<mixed-citation publication-type="journal">
<name>
<surname>Janata</surname>
<given-names>P</given-names>
</name>
,
<name>
<surname>Tomic</surname>
<given-names>ST</given-names>
</name>
,
<name>
<surname>Haberman</surname>
<given-names>JM</given-names>
</name>
(
<year>2012</year>
)
<article-title>Sensorimotor coupling in music and the psychology of the groove</article-title>
.
<source>J Exp Psychol Gen</source>
<volume>141</volume>
:
<fpage>54</fpage>
<lpage>75</lpage>
.
<pub-id pub-id-type="pmid">21767048</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0110452-Jones1">
<label>11</label>
<mixed-citation publication-type="other">Jones MR (2008) Musical time. In: Hallam S, Cross I, Thaut Meditors. Oxford Handbook of Music Psychology. New York: Oxford. pp. 81–92.</mixed-citation>
</ref>
<ref id="pone.0110452-Large1">
<label>12</label>
<mixed-citation publication-type="other">Large EW (2010) Neurodynamics of music. In: Jones MR, Fay RR, Popper ANeditors. Springer Handbook of Auditory Research, Vol. 36: Music Perception. New York: Springer. pp. 201–231.</mixed-citation>
</ref>
<ref id="pone.0110452-McAuley1">
<label>13</label>
<mixed-citation publication-type="other">McAuley JD (2010) Tempo and rhythm. In: Jones MR, Fay RR, Popper ANeditors. Springer Handbook of Auditory Research, Vol. 36: Music Perception. New York: Springer. pp. 165–199.</mixed-citation>
</ref>
<ref id="pone.0110452-Repp1">
<label>14</label>
<mixed-citation publication-type="journal">
<name>
<surname>Repp</surname>
<given-names>BH</given-names>
</name>
,
<name>
<surname>Su</surname>
<given-names>Y-H</given-names>
</name>
(
<year>2013</year>
)
<article-title>Sensorimotor synchronization: A review of recent research (2006–2012)</article-title>
.
<source>Psychon Bull Rev</source>
<volume>20</volume>
:
<fpage>403</fpage>
<lpage>452</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.3758/s13423-012-0371-2">10.3758/s13423-012-0371-2</ext-link>
</comment>
<pub-id pub-id-type="pmid">23397235</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0110452-Karageorghis1">
<label>15</label>
<mixed-citation publication-type="journal">
<name>
<surname>Karageorghis</surname>
<given-names>CI</given-names>
</name>
,
<name>
<surname>Priest</surname>
<given-names>D-L</given-names>
</name>
(
<year>2012</year>
)
<article-title>Music in the exercise domain: a review and synthesis (Part I)</article-title>
.
<source>Int Rev Sport Exerc Psychol</source>
<volume>5</volume>
:
<fpage>44</fpage>
<lpage>66</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1080/1750984X.2011.631026">10.1080/1750984X.2011.631026</ext-link>
</comment>
<pub-id pub-id-type="pmid">22577472</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0110452-Karageorghis2">
<label>16</label>
<mixed-citation publication-type="journal">
<name>
<surname>Karageorghis</surname>
<given-names>CI</given-names>
</name>
,
<name>
<surname>Priest</surname>
<given-names>D-L</given-names>
</name>
(
<year>2012</year>
)
<article-title>Music in the exercise domain: a review and synthesis (Part II)</article-title>
.
<source>Int Rev Sport Exerc Psychol</source>
<volume>5</volume>
:
<fpage>67</fpage>
<lpage>84</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1080/1750984X.2011.631027">10.1080/1750984X.2011.631027</ext-link>
</comment>
<pub-id pub-id-type="pmid">22577473</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0110452-Karageorghis3">
<label>17</label>
<mixed-citation publication-type="journal">
<name>
<surname>Karageorghis</surname>
<given-names>CI</given-names>
</name>
,
<name>
<surname>Terry</surname>
<given-names>PC</given-names>
</name>
,
<name>
<surname>Lane</surname>
<given-names>AM</given-names>
</name>
,
<name>
<surname>Bishop</surname>
<given-names>DT</given-names>
</name>
,
<name>
<surname>Priest</surname>
<given-names>D</given-names>
</name>
(
<year>2012</year>
)
<article-title>The BASES Expert Statement on use of music in exercise</article-title>
.
<source>J Sports Sci</source>
<volume>30</volume>
:
<fpage>953</fpage>
<lpage>956</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1080/02640414.2012.676665">10.1080/02640414.2012.676665</ext-link>
</comment>
<pub-id pub-id-type="pmid">22512537</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0110452-Barnes1">
<label>18</label>
<mixed-citation publication-type="journal">
<name>
<surname>Barnes</surname>
<given-names>R</given-names>
</name>
,
<name>
<surname>Jones</surname>
<given-names>MR</given-names>
</name>
(
<year>2000</year>
)
<article-title>Expectancy, attention, and time</article-title>
.
<source>Cognit Psychol</source>
<volume>41</volume>
:
<fpage>254</fpage>
<lpage>311</lpage>
.
<pub-id pub-id-type="pmid">11032658</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0110452-Jones2">
<label>19</label>
<mixed-citation publication-type="journal">
<name>
<surname>Jones</surname>
<given-names>MR</given-names>
</name>
,
<name>
<surname>Boltz</surname>
<given-names>M</given-names>
</name>
(
<year>1989</year>
)
<article-title>Dynamic attending and responses to time</article-title>
.
<source>Psychol Rev</source>
<volume>96</volume>
:
<fpage>459</fpage>
<lpage>491</lpage>
.
<pub-id pub-id-type="pmid">2756068</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0110452-Large2">
<label>20</label>
<mixed-citation publication-type="journal">
<name>
<surname>Large</surname>
<given-names>EW</given-names>
</name>
,
<name>
<surname>Jones</surname>
<given-names>MR</given-names>
</name>
(
<year>1999</year>
)
<article-title>The Dynamics of Attending: How People Track Time-Varying Events</article-title>
.
<source>Psychol Rev</source>
<volume>106</volume>
:
<fpage>119</fpage>
<lpage>159</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0110452-Salimpoor1">
<label>21</label>
<mixed-citation publication-type="journal">
<name>
<surname>Salimpoor</surname>
<given-names>VN</given-names>
</name>
,
<name>
<surname>Benovoy</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Longo</surname>
<given-names>G</given-names>
</name>
,
<name>
<surname>Cooperstock</surname>
<given-names>JR</given-names>
</name>
,
<name>
<surname>Zatorre</surname>
<given-names>RJ</given-names>
</name>
(
<year>2009</year>
)
<article-title>The rewarding aspects of music listening are related to degree of emotional arousal</article-title>
.
<source>PloS One</source>
<volume>4</volume>
:
<fpage>e7487</fpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1371/journal.pone.0007487">10.1371/journal.pone.0007487</ext-link>
</comment>
<pub-id pub-id-type="pmid">19834599</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0110452-Thompson1">
<label>22</label>
<mixed-citation publication-type="journal">
<name>
<surname>Thompson</surname>
<given-names>WF</given-names>
</name>
,
<name>
<surname>Schellenberg</surname>
<given-names>EG</given-names>
</name>
,
<name>
<surname>Husain</surname>
<given-names>G</given-names>
</name>
(
<year>2001</year>
)
<article-title>Arousal, mood, and the Mozart effect</article-title>
.
<source>Psychol Sci</source>
<volume>12</volume>
:
<fpage>248</fpage>
.
<pub-id pub-id-type="pmid">11437309</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0110452-Copeland1">
<label>23</label>
<mixed-citation publication-type="journal">
<name>
<surname>Copeland</surname>
<given-names>BL</given-names>
</name>
,
<name>
<surname>Franks</surname>
<given-names>BD</given-names>
</name>
(
<year>1991</year>
)
<article-title>Effects of types and intensities of background music on treadmill endurance</article-title>
.
<source>J Sports Med Phys Fitness</source>
<volume>31</volume>
:
<fpage>100</fpage>
<lpage>103</lpage>
.
<pub-id pub-id-type="pmid">1861474</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0110452-Brownley1">
<label>24</label>
<mixed-citation publication-type="journal">
<name>
<surname>Brownley</surname>
<given-names>KA</given-names>
</name>
,
<name>
<surname>McMurray</surname>
<given-names>RG</given-names>
</name>
,
<name>
<surname>Hackney</surname>
<given-names>AC</given-names>
</name>
(
<year>1995</year>
)
<article-title>Effects of music on physiological and affective responses to graded treadmill exercise in trained and untrained runners</article-title>
.
<source>Int J Psychophysiol</source>
<volume>19</volume>
:
<fpage>193</fpage>
<lpage>201</lpage>
.
<pub-id pub-id-type="pmid">7558986</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0110452-Johnson1">
<label>25</label>
<mixed-citation publication-type="journal">
<name>
<surname>Johnson</surname>
<given-names>G</given-names>
</name>
,
<name>
<surname>Otto</surname>
<given-names>D</given-names>
</name>
,
<name>
<surname>Clair</surname>
<given-names>AA</given-names>
</name>
(
<year>2001</year>
)
<article-title>The effect of instrumental and vocal music on adherence to a physical rehabilitation exercise program with persons who are elderly</article-title>
.
<source>J Music Ther</source>
<volume>38</volume>
:
<fpage>82</fpage>
<lpage>96</lpage>
.
<pub-id pub-id-type="pmid">11469917</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0110452-SnedenRiley1">
<label>26</label>
<mixed-citation publication-type="journal">
<name>
<surname>Sneden-Riley</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Waters</surname>
<given-names>L</given-names>
</name>
(
<year>2001</year>
)
<article-title>The effect of instrumental and vocal music on adherence to a physical rehabilitation exercise program with persons who are elderly</article-title>
.
<source>J Music Ther</source>
<volume>38</volume>
:
<fpage>82</fpage>
<lpage>96</lpage>
.
<pub-id pub-id-type="pmid">11469917</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0110452-GuzmnGarca1">
<label>27</label>
<mixed-citation publication-type="journal">
<name>
<surname>Guzmán-García</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Hughes</surname>
<given-names>JC</given-names>
</name>
,
<name>
<surname>James</surname>
<given-names>IA</given-names>
</name>
,
<name>
<surname>Rochester</surname>
<given-names>L</given-names>
</name>
(
<year>2013</year>
)
<article-title>Dancing as a psychosocial intervention in care homes: a systematic review of the literature</article-title>
.
<source>Int J Geriatr Psychiatry</source>
<volume>28</volume>
:
<fpage>914</fpage>
<lpage>924</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1002/gps.3913">10.1002/gps.3913</ext-link>
</comment>
<pub-id pub-id-type="pmid">23225749</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0110452-Kattenstroth1">
<label>28</label>
<mixed-citation publication-type="other">Kattenstroth J-C, Kolankowska I, Kalisch T, Dinse HR (2010) Superior sensory, motor, and cognitive performance in elderly individuals with multi-year dancing activities. Front Aging Neurosci 2. doi:doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.3389/fnagi.2010.00031">10.3389/fnagi.2010.00031</ext-link>
..</mixed-citation>
</ref>
<ref id="pone.0110452-Verghese1">
<label>29</label>
<mixed-citation publication-type="journal">
<name>
<surname>Verghese</surname>
<given-names>J</given-names>
</name>
(
<year>2006</year>
)
<article-title>Cognitive and mobility profile of older social dancers</article-title>
.
<source>J Am Geriatr Soc</source>
<volume>54</volume>
:
<fpage>1241</fpage>
<lpage>1244</lpage>
.
<pub-id pub-id-type="pmid">16913992</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0110452-Martin1">
<label>30</label>
<mixed-citation publication-type="other">Martin JP (1967) The basal ganglia and posture. Lippincott. Available:
<ext-link ext-link-type="uri" xlink:href="http://www.getcited.org/pub/101234604">http://www.getcited.org/pub/101234604</ext-link>
. Accessed 4 November 2012.</mixed-citation>
</ref>
<ref id="pone.0110452-VonWilzenben1">
<label>31</label>
<mixed-citation publication-type="other">Von Wilzenben HD (1942) Methods in the treatment of post encephalic Parkinson’s. New York: Grune and Stratten.</mixed-citation>
</ref>
<ref id="pone.0110452-Morris1">
<label>32</label>
<mixed-citation publication-type="journal">
<name>
<surname>Morris</surname>
<given-names>ME</given-names>
</name>
,
<name>
<surname>Iansek</surname>
<given-names>R</given-names>
</name>
,
<name>
<surname>Matyas</surname>
<given-names>TA</given-names>
</name>
,
<name>
<surname>Summers</surname>
<given-names>JJ</given-names>
</name>
(
<year>1994</year>
)
<article-title>The pathogenesis of gait hypokinesia in Parkinson’s disease</article-title>
.
<source>Brain</source>
<volume>117</volume>
:
<fpage>1169</fpage>
<lpage>1181</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1093/brain/117.5.1169">10.1093/brain/117.5.1169</ext-link>
</comment>
<pub-id pub-id-type="pmid">7953597</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0110452-Thaut1">
<label>33</label>
<mixed-citation publication-type="journal">
<name>
<surname>Thaut</surname>
<given-names>MH</given-names>
</name>
,
<name>
<surname>McIntosh</surname>
<given-names>GC</given-names>
</name>
,
<name>
<surname>Rice</surname>
<given-names>RR</given-names>
</name>
,
<name>
<surname>Miller</surname>
<given-names>RA</given-names>
</name>
,
<name>
<surname>Rathbun</surname>
<given-names>J</given-names>
</name>
,
<etal>et al</etal>
(
<year>1996</year>
)
<article-title>Rhythmic auditory stimulation in gait training for Parkinson’s disease patients</article-title>
.
<source>Mov Disord</source>
<volume>11</volume>
:
<fpage>193</fpage>
<lpage>200</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0110452-DeBruin1">
<label>34</label>
<mixed-citation publication-type="journal">
<name>
<surname>De Bruin</surname>
<given-names>N</given-names>
</name>
,
<name>
<surname>Doan</surname>
<given-names>JB</given-names>
</name>
,
<name>
<surname>Turnbull</surname>
<given-names>G</given-names>
</name>
,
<name>
<surname>Suchowersky</surname>
<given-names>O</given-names>
</name>
,
<name>
<surname>Bonfield</surname>
<given-names>S</given-names>
</name>
,
<etal>et al</etal>
(
<year>2010</year>
)
<article-title>Walking with music is a safe and viable tool for gait training in Parkinson’s disease: the effect of a 13-week feasibility study on single and dual task walking</article-title>
.
<source>Park Dis</source>
<volume>2010</volume>
:
<fpage>483530</fpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.4061/2010/483530">10.4061/2010/483530</ext-link>
</comment>
</mixed-citation>
</ref>
<ref id="pone.0110452-Pacchetti1">
<label>35</label>
<mixed-citation publication-type="journal">
<name>
<surname>Pacchetti</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Mancini</surname>
<given-names>F</given-names>
</name>
,
<name>
<surname>Aglieri</surname>
<given-names>R</given-names>
</name>
,
<name>
<surname>Fundarò</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Martignoni</surname>
<given-names>E</given-names>
</name>
,
<etal>et al</etal>
(
<year>2000</year>
)
<article-title>Active music therapy in Parkinson’s disease: an integrative method for motor and emotional rehabilitation</article-title>
.
<source>Psychosom Med</source>
<volume>62</volume>
:
<fpage>386</fpage>
<lpage>393</lpage>
.
<pub-id pub-id-type="pmid">10845352</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0110452-Lim1">
<label>36</label>
<mixed-citation publication-type="journal">
<name>
<surname>Lim</surname>
<given-names>I</given-names>
</name>
,
<name>
<surname>Van Wegen</surname>
<given-names>E</given-names>
</name>
,
<name>
<surname>De Goede</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Deutekom</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Nieuwboer</surname>
<given-names>A</given-names>
</name>
,
<etal>et al</etal>
(
<year>2005</year>
)
<article-title>Effects of external rhythmical cueing on gait in patients with Parkinson’s disease: a systematic review</article-title>
.
<source>Clin Rehabil</source>
<volume>19</volume>
:
<fpage>695</fpage>
<lpage>713</lpage>
.
<pub-id pub-id-type="pmid">16250189</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0110452-Rubinstein1">
<label>37</label>
<mixed-citation publication-type="journal">
<name>
<surname>Rubinstein</surname>
<given-names>TC</given-names>
</name>
,
<name>
<surname>Giladi</surname>
<given-names>N</given-names>
</name>
,
<name>
<surname>Hausdorff</surname>
<given-names>JM</given-names>
</name>
(
<year>2002</year>
)
<article-title>The power of cueing to circumvent dopamine deficits: a review of physical therapy treatment of gait disturbances in Parkinson’s disease</article-title>
.
<source>Mov Disord</source>
<volume>17</volume>
:
<fpage>1148</fpage>
<lpage>1160</lpage>
.
<pub-id pub-id-type="pmid">12465051</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0110452-DeDreu1">
<label>38</label>
<mixed-citation publication-type="journal">
<name>
<surname>De Dreu</surname>
<given-names>MJ</given-names>
</name>
,
<name>
<surname>van der Wilk</surname>
<given-names>ASD</given-names>
</name>
,
<name>
<surname>Poppe</surname>
<given-names>E</given-names>
</name>
,
<name>
<surname>Kwakkel</surname>
<given-names>G</given-names>
</name>
,
<name>
<surname>van Wegen</surname>
<given-names>EEH</given-names>
</name>
(
<year>2012</year>
)
<article-title>Rehabilitation, exercise therapy and music in patients with Parkinson’s disease: a meta-analysis of the effects of music-based movement therapy on walking ability, balance and quality of life</article-title>
.
<source>Parkinsonism Relat Disord</source>
<volume>18</volume>
<issue>Suppl 1</issue>
:
<fpage>S114</fpage>
<lpage>S119</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/S1353-8020(11)70036-0">10.1016/S1353-8020(11)70036-0</ext-link>
</comment>
<pub-id pub-id-type="pmid">22166406</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0110452-Spaulding1">
<label>39</label>
<mixed-citation publication-type="journal">
<name>
<surname>Spaulding</surname>
<given-names>SJ</given-names>
</name>
,
<name>
<surname>Barber</surname>
<given-names>B</given-names>
</name>
,
<name>
<surname>Colby</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Cormack</surname>
<given-names>B</given-names>
</name>
,
<name>
<surname>Mick</surname>
<given-names>T</given-names>
</name>
,
<etal>et al</etal>
(
<year>2013</year>
)
<article-title>Cueing and gait improvement among people with Parkinson’s disease: a meta-analysis</article-title>
.
<source>Arch Phys Med Rehabil</source>
<volume>94</volume>
:
<fpage>562</fpage>
<lpage>570</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.apmr.2012.10.026">10.1016/j.apmr.2012.10.026</ext-link>
</comment>
<pub-id pub-id-type="pmid">23127307</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0110452-Keus1">
<label>40</label>
<mixed-citation publication-type="other">Keus SHJ, Bloem BR, Hendriks EJM, Bredero-Cohen AB, Munneke M (2007) Evidence-based analysis of physical therapy in Parkinson’s disease with recommendations for practice and research. Mov Disord Off J Mov Disord Soc 22: 451–460; quiz 600. doi:doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1002/mds.21244">10.1002/mds.21244</ext-link>
..</mixed-citation>
</ref>
<ref id="pone.0110452-Hausdorff1">
<label>41</label>
<mixed-citation publication-type="journal">
<name>
<surname>Hausdorff</surname>
<given-names>J</given-names>
</name>
(
<year>2005</year>
)
<article-title>Gait variability: methods, modeling and meaning</article-title>
.
<source>J Neuroeng Rehabil</source>
<volume>2</volume>
:
<fpage>19</fpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1186/1743-0003-2-19">10.1186/1743-0003-2-19</ext-link>
</comment>
</mixed-citation>
</ref>
<ref id="pone.0110452-Hausdorff2">
<label>42</label>
<mixed-citation publication-type="journal">
<name>
<surname>Hausdorff</surname>
<given-names>JM</given-names>
</name>
(
<year>2007</year>
)
<article-title>Gait dynamics, fractals and falls: finding meaning in the stride-to-stride fluctuations of human walking</article-title>
.
<source>Hum Mov Sci</source>
<volume>26</volume>
:
<fpage>555</fpage>
<lpage>589</lpage>
.
<pub-id pub-id-type="pmid">17618701</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0110452-Hausdorff3">
<label>43</label>
<mixed-citation publication-type="journal">
<name>
<surname>Hausdorff</surname>
<given-names>JM</given-names>
</name>
(
<year>2009</year>
)
<article-title>Gait dynamics in Parkinson’s disease: common and distinct behavior among stride length, gait variability, and fractal-like scaling</article-title>
.
<source>Chaos</source>
<volume>19</volume>
:
<fpage>026113</fpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1063/1.3147408">10.1063/1.3147408</ext-link>
</comment>
</mixed-citation>
</ref>
<ref id="pone.0110452-Schaafsma1">
<label>44</label>
<mixed-citation publication-type="journal">
<name>
<surname>Schaafsma</surname>
<given-names>JD</given-names>
</name>
,
<name>
<surname>Giladi</surname>
<given-names>N</given-names>
</name>
,
<name>
<surname>Balash</surname>
<given-names>Y</given-names>
</name>
,
<name>
<surname>Bartels</surname>
<given-names>AL</given-names>
</name>
,
<name>
<surname>Gurevich</surname>
<given-names>T</given-names>
</name>
,
<etal>et al</etal>
(
<year>2003</year>
)
<article-title>Gait dynamics in Parkinson’s disease: relationship to Parkinsonian features, falls and response to levodopa</article-title>
.
<source>J Neurol Sci</source>
<volume>212</volume>
:
<fpage>47</fpage>
<lpage>53</lpage>
.
<pub-id pub-id-type="pmid">12809998</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0110452-Hausdorff4">
<label>45</label>
<mixed-citation publication-type="journal">
<name>
<surname>Hausdorff</surname>
<given-names>JM</given-names>
</name>
,
<name>
<surname>Rios</surname>
<given-names>DA</given-names>
</name>
,
<name>
<surname>Edelberg</surname>
<given-names>HK</given-names>
</name>
(
<year>2001</year>
)
<article-title>Gait variability and fall risk in community-living older adults: A 1-year prospective study</article-title>
.
<source>Arch Phys Med Rehabil</source>
<volume>82</volume>
:
<fpage>1050</fpage>
<lpage>1056</lpage>
.
<pub-id pub-id-type="pmid">11494184</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0110452-Davis1">
<label>46</label>
<mixed-citation publication-type="journal">
<name>
<surname>Davis</surname>
<given-names>JC</given-names>
</name>
,
<name>
<surname>Robertson</surname>
<given-names>MC</given-names>
</name>
,
<name>
<surname>Ashe</surname>
<given-names>MC</given-names>
</name>
,
<name>
<surname>Liu-Ambrose</surname>
<given-names>T</given-names>
</name>
,
<name>
<surname>Khan</surname>
<given-names>KM</given-names>
</name>
,
<etal>et al</etal>
(
<year>2010</year>
)
<article-title>International comparison of cost of falls in older adults living in the community: a systematic review</article-title>
.
<source>Osteoporos Int</source>
<volume>21</volume>
:
<fpage>1295</fpage>
<lpage>1306</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1007/s00198-009-1162-0">10.1007/s00198-009-1162-0</ext-link>
</comment>
</mixed-citation>
</ref>
<ref id="pone.0110452-Bloem1">
<label>47</label>
<mixed-citation publication-type="journal">
<name>
<surname>Bloem</surname>
<given-names>BR</given-names>
</name>
,
<name>
<surname>Hausdorff</surname>
<given-names>JM</given-names>
</name>
,
<name>
<surname>Visser</surname>
<given-names>JE</given-names>
</name>
,
<name>
<surname>Giladi</surname>
<given-names>N</given-names>
</name>
(
<year>2004</year>
)
<article-title>Falls and freezing of gait in Parkinson’s disease: a review of two interconnected, episodic phenomena</article-title>
.
<source>Mov Disord</source>
<volume>19</volume>
:
<fpage>871</fpage>
<lpage>884</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0110452-Delval1">
<label>48</label>
<mixed-citation publication-type="journal">
<name>
<surname>Delval</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Krystkowiak</surname>
<given-names>P</given-names>
</name>
,
<name>
<surname>Delliaux</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Blatt</surname>
<given-names>J-L</given-names>
</name>
,
<name>
<surname>Derambure</surname>
<given-names>P</given-names>
</name>
,
<etal>et al</etal>
(
<year>2008</year>
)
<article-title>Effect of external cueing on gait in Huntington’s disease</article-title>
.
<source>Mov Disord</source>
<volume>23</volume>
:
<fpage>1446</fpage>
<lpage>1452</lpage>
.
<pub-id pub-id-type="pmid">18512747</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0110452-Thaut2">
<label>49</label>
<mixed-citation publication-type="journal">
<name>
<surname>Thaut</surname>
<given-names>MH</given-names>
</name>
,
<name>
<surname>Miltner</surname>
<given-names>R</given-names>
</name>
,
<name>
<surname>Lange</surname>
<given-names>HW</given-names>
</name>
,
<name>
<surname>Hurt</surname>
<given-names>CP</given-names>
</name>
,
<name>
<surname>Hoemberg</surname>
<given-names>V</given-names>
</name>
(
<year>1999</year>
)
<article-title>Velocity modulation and rhythmic synchronization of gait in Huntington’s disease</article-title>
.
<source>Mov Disord</source>
<volume>14</volume>
:
<fpage>808</fpage>
<lpage>819</lpage>
.
<pub-id pub-id-type="pmid">10495043</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0110452-Thaut3">
<label>50</label>
<mixed-citation publication-type="journal">
<name>
<surname>Thaut</surname>
<given-names>MH</given-names>
</name>
,
<name>
<surname>McIntosh</surname>
<given-names>GC</given-names>
</name>
,
<name>
<surname>Prassas</surname>
<given-names>SG</given-names>
</name>
,
<name>
<surname>Rice</surname>
<given-names>RR</given-names>
</name>
(
<year>1993</year>
)
<article-title>Effect of Rhythmic Auditory Cuing on Temporal Stride Parameters and EMG Patterns in Hemiparetic Gait of Stroke Patients</article-title>
.
<source>Neurorehabil Neural Repair</source>
<volume>7</volume>
:
<fpage>9</fpage>
<lpage>16</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0110452-Thaut4">
<label>51</label>
<mixed-citation publication-type="journal">
<name>
<surname>Thaut</surname>
<given-names>MH</given-names>
</name>
,
<name>
<surname>Leins</surname>
<given-names>AK</given-names>
</name>
,
<name>
<surname>Rice</surname>
<given-names>RR</given-names>
</name>
,
<name>
<surname>Argstatter</surname>
<given-names>H</given-names>
</name>
,
<name>
<surname>Kenyon</surname>
<given-names>GP</given-names>
</name>
,
<etal>et al</etal>
(
<year>2007</year>
)
<article-title>Rhythmic auditory stimulation improves gait more than NDT/Bobath training in near-ambulatory patients early poststroke: a single-blind, randomized trial</article-title>
.
<source>Neurorehabil Neural Repair</source>
<volume>21</volume>
:
<fpage>455</fpage>
<lpage>459</lpage>
.
<pub-id pub-id-type="pmid">17426347</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0110452-DelEtoile1">
<label>52</label>
<mixed-citation publication-type="journal">
<name>
<surname>De l’Etoile</surname>
<given-names>SK</given-names>
</name>
(
<year>2008</year>
)
<article-title>The effect of rhythmic auditory stimulation on the gait parameters of patients with incomplete spinal cord injury: an exploratory pilot study</article-title>
.
<source>Int J Rehabil Res</source>
<volume>31</volume>
:
<fpage>155</fpage>
<lpage>157</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0110452-Hurt1">
<label>53</label>
<mixed-citation publication-type="journal">
<name>
<surname>Hurt</surname>
<given-names>CP</given-names>
</name>
,
<name>
<surname>Rice</surname>
<given-names>RR</given-names>
</name>
,
<name>
<surname>McIntosh</surname>
<given-names>GC</given-names>
</name>
,
<name>
<surname>Thaut</surname>
<given-names>MH</given-names>
</name>
(
<year>1998</year>
)
<article-title>Rhythmic Auditory Stimulation in Gait Training for Patients with Traumatic Brain Injury</article-title>
.
<source>J Music Ther</source>
<volume>35</volume>
:
<fpage>228</fpage>
<lpage>241</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1093/jmt/35.4.228">10.1093/jmt/35.4.228</ext-link>
</comment>
<pub-id pub-id-type="pmid">10519837</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0110452-Wittwer1">
<label>54</label>
<mixed-citation publication-type="journal">
<name>
<surname>Wittwer</surname>
<given-names>JE</given-names>
</name>
,
<name>
<surname>Webster</surname>
<given-names>KE</given-names>
</name>
,
<name>
<surname>Hill</surname>
<given-names>K</given-names>
</name>
(
<year>2013</year>
)
<article-title>Rhythmic auditory cueing to improve walking in patients with neurological conditions other than Parkinson’s disease–what is the evidence?</article-title>
<source>Disabil Rehabil</source>
<volume>35</volume>
:
<fpage>164</fpage>
<lpage>176</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.3109/09638288.2012.690495">10.3109/09638288.2012.690495</ext-link>
</comment>
<pub-id pub-id-type="pmid">22681598</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0110452-Ehrl1">
<label>55</label>
<mixed-citation publication-type="journal">
<name>
<surname>Ehrlé</surname>
<given-names>N</given-names>
</name>
,
<name>
<surname>Samson</surname>
<given-names>S</given-names>
</name>
(
<year>2005</year>
)
<article-title>Auditory discrimination of anisochrony: Influence of the tempo and musical backgrounds of listeners</article-title>
.
<source>Brain Cogn</source>
<volume>58</volume>
:
<fpage>133</fpage>
<lpage>147</lpage>
.
<pub-id pub-id-type="pmid">15878734</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0110452-Friberg1">
<label>56</label>
<mixed-citation publication-type="journal">
<name>
<surname>Friberg</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Sundberg</surname>
<given-names>J</given-names>
</name>
(
<year>1995</year>
)
<article-title>Time discrimination in a monotonic, isochronous sequence</article-title>
.
<source>J Acoust Soc Am</source>
<volume>98</volume>
:
<fpage>2524</fpage>
<lpage>2531</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0110452-Grondin1">
<label>57</label>
<mixed-citation publication-type="journal">
<name>
<surname>Grondin</surname>
<given-names>S</given-names>
</name>
(
<year>2001</year>
)
<article-title>From physical time to the first and second moments of psychological time</article-title>
.
<source>Psychol Bull</source>
<volume>127</volume>
:
<fpage>22</fpage>
<lpage>44</lpage>
.
<pub-id pub-id-type="pmid">11271754</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0110452-Woodrow1">
<label>58</label>
<mixed-citation publication-type="other">Woodrow H, Stevens S
<bold>.</bold>
(1951) Time perception. Handbook of experimental psychology. New York: Wiley. 1224–1236.</mixed-citation>
</ref>
<ref id="pone.0110452-Getty1">
<label>59</label>
<mixed-citation publication-type="journal">
<name>
<surname>Getty</surname>
<given-names>DJ</given-names>
</name>
(
<year>1975</year>
)
<article-title>Discrimination of short temporal intervals: A comparison of two models</article-title>
.
<source>Percept Psychophys</source>
<volume>18</volume>
:
<fpage>1</fpage>
<lpage>8</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0110452-Jones3">
<label>60</label>
<mixed-citation publication-type="journal">
<name>
<surname>Jones</surname>
<given-names>MR</given-names>
</name>
,
<name>
<surname>Yee</surname>
<given-names>W</given-names>
</name>
(
<year>1997</year>
)
<article-title>Sensitivity to time change: The role of context and skill</article-title>
.
<source>J Exp Psychol Hum Percept Perform</source>
<volume>23</volume>
:
<fpage>693</fpage>
<lpage>709</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0110452-Schulze1">
<label>61</label>
<mixed-citation publication-type="journal">
<name>
<surname>Schulze</surname>
<given-names>H-H</given-names>
</name>
(
<year>1978</year>
)
<article-title>The detectability of local and global displacements in regular rhythmic patterns</article-title>
.
<source>Psychol Res</source>
<volume>40</volume>
:
<fpage>173</fpage>
<lpage>181</lpage>
.
<pub-id pub-id-type="pmid">693733</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0110452-Drake1">
<label>62</label>
<mixed-citation publication-type="journal">
<name>
<surname>Drake</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Botte</surname>
<given-names>MC</given-names>
</name>
(
<year>1993</year>
)
<article-title>Tempo sensitivity in auditory sequences: evidence for a multiple-look model</article-title>
.
<source>Percept Psychophys</source>
<volume>54</volume>
:
<fpage>277</fpage>
<lpage>286</lpage>
.
<pub-id pub-id-type="pmid">8414886</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0110452-Schulze2">
<label>63</label>
<mixed-citation publication-type="journal">
<name>
<surname>Schulze</surname>
<given-names>HH</given-names>
</name>
(
<year>1989</year>
)
<article-title>The perception of temporal deviations in isochronic patterns</article-title>
.
<source>Percept Psychophys</source>
<volume>45</volume>
:
<fpage>291</fpage>
<lpage>296</lpage>
.
<pub-id pub-id-type="pmid">2710629</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0110452-McAuley2">
<label>64</label>
<mixed-citation publication-type="journal">
<name>
<surname>McAuley</surname>
<given-names>JD</given-names>
</name>
,
<name>
<surname>Miller</surname>
<given-names>NS</given-names>
</name>
(
<year>2007</year>
)
<article-title>Picking up the pace: Effects of global temporal context on sensitivity to the tempo of auditory sequences</article-title>
.
<source>Percept Psychophys</source>
<volume>69</volume>
:
<fpage>709</fpage>
<lpage>718</lpage>
.
<pub-id pub-id-type="pmid">17929694</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0110452-Miller1">
<label>65</label>
<mixed-citation publication-type="journal">
<name>
<surname>Miller</surname>
<given-names>NS</given-names>
</name>
,
<name>
<surname>McAuley</surname>
<given-names>JD</given-names>
</name>
(
<year>2005</year>
)
<article-title>Tempo sensitivity in isochronous tone sequences: the multiple-look model revisited</article-title>
.
<source>Percept Psychophys</source>
<volume>67</volume>
:
<fpage>1150</fpage>
<lpage>1160</lpage>
.
<pub-id pub-id-type="pmid">16502837</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0110452-Grondin2">
<label>66</label>
<mixed-citation publication-type="journal">
<name>
<surname>Grondin</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Laforest</surname>
<given-names>M</given-names>
</name>
(
<year>2004</year>
)
<article-title>Discriminating the tempo variations of a musical excerpt</article-title>
.
<source>Acoust Sci Technol</source>
<volume>25</volume>
:
<fpage>159</fpage>
<lpage>162</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0110452-Sorkin1">
<label>67</label>
<mixed-citation publication-type="journal">
<name>
<surname>Sorkin</surname>
<given-names>RD</given-names>
</name>
,
<name>
<surname>Boggs</surname>
<given-names>GJ</given-names>
</name>
,
<name>
<surname>Brady</surname>
<given-names>SL</given-names>
</name>
(
<year>1982</year>
)
<article-title>Discrimination of temporal jitter in patterned sequences of tones</article-title>
.
<source>J Exp Psychol Hum Percept Perform</source>
<volume>8</volume>
:
<fpage>46</fpage>
<lpage>57</lpage>
.
<pub-id pub-id-type="pmid">6460084</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0110452-Thaut5">
<label>68</label>
<mixed-citation publication-type="journal">
<name>
<surname>Thaut</surname>
<given-names>MH</given-names>
</name>
,
<name>
<surname>Tian</surname>
<given-names>B</given-names>
</name>
,
<name>
<surname>Azimi-Sadjadi</surname>
<given-names>MR</given-names>
</name>
(
<year>1998</year>
)
<article-title>Rhythmic finger tapping to cosine-wave modulated metronome sequences: Evidence of subliminal entrainment</article-title>
.
<source>Hum Mov Sci</source>
<volume>17</volume>
:
<fpage>839</fpage>
<lpage>863</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0110452-Cope1">
<label>69</label>
<mixed-citation publication-type="journal">
<name>
<surname>Cope</surname>
<given-names>TE</given-names>
</name>
,
<name>
<surname>Grube</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Griffiths</surname>
<given-names>TD</given-names>
</name>
(
<year>2012</year>
)
<article-title>Temporal predictions based on a gradual change in tempo</article-title>
.
<source>J Acoust Soc Am</source>
<volume>131</volume>
:
<fpage>4013</fpage>
<lpage>4022</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1121/1.3699266">10.1121/1.3699266</ext-link>
</comment>
<pub-id pub-id-type="pmid">22559374</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0110452-Pouliot1">
<label>70</label>
<mixed-citation publication-type="journal">
<name>
<surname>Pouliot</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Grondin</surname>
<given-names>S</given-names>
</name>
(
<year>2005</year>
)
<article-title>A response-time approach for estimating sensitivity to auditory tempo changes</article-title>
.
<source>Music Percept</source>
<volume>22</volume>
:
<fpage>389</fpage>
<lpage>399</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0110452-Schulze3">
<label>71</label>
<mixed-citation publication-type="journal">
<name>
<surname>Schulze</surname>
<given-names>H-H</given-names>
</name>
,
<name>
<surname>Cordes</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Vorberg</surname>
<given-names>D</given-names>
</name>
(
<year>2005</year>
)
<article-title>Keeping synchrony while tempo changes: Accelerando and ritardando</article-title>
.
<source>Music Percept</source>
<volume>22</volume>
:
<fpage>461</fpage>
<lpage>477</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0110452-Krumhansl1">
<label>72</label>
<mixed-citation publication-type="other">Krumhansl CL (1990) Cognitive foundations of musical pitch. New York: Oxford.</mixed-citation>
</ref>
<ref id="pone.0110452-Krumhansl2">
<label>73</label>
<mixed-citation publication-type="other">Krumhansl CL, Cuddy LL (2010) A theory of tonal hierarchies in music. Music perception. Springer. 51–87.</mixed-citation>
</ref>
<ref id="pone.0110452-Bigand1">
<label>74</label>
<mixed-citation publication-type="journal">
<name>
<surname>Bigand</surname>
<given-names>E</given-names>
</name>
(
<year>1997</year>
)
<article-title>Perceiving musical stability: The effect of tonal structure, rhythm, and musical expertise</article-title>
.
<source>J Exp Psychol Hum Percept Perform</source>
<volume>23</volume>
:
<fpage>808</fpage>
<lpage>822</lpage>
.
<pub-id pub-id-type="pmid">9180045</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0110452-Casey1">
<label>75</label>
<mixed-citation publication-type="journal">
<name>
<surname>Casey</surname>
<given-names>MA</given-names>
</name>
,
<name>
<surname>Veltkamp</surname>
<given-names>R</given-names>
</name>
,
<name>
<surname>Goto</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Leman</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Rhodes</surname>
<given-names>C</given-names>
</name>
,
<etal>et al</etal>
(
<year>2008</year>
)
<article-title>Content-based music information retrieval: Current directions and future challenges</article-title>
.
<source>Proc IEEE</source>
<volume>96</volume>
:
<fpage>668</fpage>
<lpage>696</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0110452-The1">
<label>76</label>
<mixed-citation publication-type="other">The S2S2 Consortium (2007). A Roadmap for Sound and Music Computing (2007) A roadmap for sound and music computing. Available:
<ext-link ext-link-type="uri" xlink:href="http://www.smcnetwork.org/files/Roadmap-v1.0.pdf">http://www.smcnetwork.org/files/Roadmap-v1.0.pdf</ext-link>
. Accessed 1 July 2014.</mixed-citation>
</ref>
<ref id="pone.0110452-Ra1">
<label>77</label>
<mixed-citation publication-type="other">Raś ZW, Wieczorkowska A
<bold>, editors</bold>
(2010) Advances in music information retrieval. New York: Springer.</mixed-citation>
</ref>
<ref id="pone.0110452-Gouyon1">
<label>78</label>
<mixed-citation publication-type="journal">
<name>
<surname>Gouyon</surname>
<given-names>F</given-names>
</name>
,
<name>
<surname>Klapuri</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Dixon</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Alonso</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Tzanetakis</surname>
<given-names>G</given-names>
</name>
,
<etal>et al</etal>
(
<year>2006</year>
)
<article-title>An experimental comparison of audio tempo induction algorithms</article-title>
.
<source>IEEE Trans Audio Speech Lang Process</source>
<volume>14</volume>
:
<fpage>1832</fpage>
<lpage>1844</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0110452-Klapuri1">
<label>79</label>
<mixed-citation publication-type="journal">
<name>
<surname>Klapuri</surname>
<given-names>AP</given-names>
</name>
,
<name>
<surname>Eronen</surname>
<given-names>AJ</given-names>
</name>
,
<name>
<surname>Astola</surname>
<given-names>JT</given-names>
</name>
(
<year>2006</year>
)
<article-title>Analysis of the meter of acoustic musical signals</article-title>
.
<source>IEEE Trans Audio Speech Lang Process</source>
<volume>14</volume>
:
<fpage>342</fpage>
<lpage>355</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0110452-McKinney1">
<label>80</label>
<mixed-citation publication-type="journal">
<name>
<surname>McKinney</surname>
<given-names>MF</given-names>
</name>
,
<name>
<surname>Moelants</surname>
<given-names>D</given-names>
</name>
,
<name>
<surname>Davies</surname>
<given-names>MEP</given-names>
</name>
,
<name>
<surname>Klapuri</surname>
<given-names>A</given-names>
</name>
(
<year>2007</year>
)
<article-title>Evaluation of audio beat tracking and music tempo extraction algorithms</article-title>
.
<source>J New Music Res</source>
<volume>36</volume>
:
<fpage>1</fpage>
<lpage>16</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0110452-Zapata1">
<label>81</label>
<mixed-citation publication-type="other">Zapata JR, Gómez E (2011) Comparative evaluation and combination of audio tempo estimation approaches. Proceedings of the Audio Engineering Society 42nd International Conference. 1–10.</mixed-citation>
</ref>
<ref id="pone.0110452-Cai1">
<label>82</label>
<mixed-citation publication-type="other">Cai Z, Ellis R, Duan Z, Lu H, Wang Y (2013) Basic Exploration of Auditory Temporal Stability (BEATS): A novel rationale, method, and visualization. Proceedings of the 14th International Conference on Music Information Retrieval. 541–546.</mixed-citation>
</ref>
<ref id="pone.0110452-BertinMahieux1">
<label>83</label>
<mixed-citation publication-type="journal">
<name>
<surname>Bertin-Mahieux</surname>
<given-names>T</given-names>
</name>
,
<name>
<surname>Ellis</surname>
<given-names>DP</given-names>
</name>
,
<name>
<surname>Whitman</surname>
<given-names>B</given-names>
</name>
,
<name>
<surname>Lamere</surname>
<given-names>P</given-names>
</name>
(
<year>2011</year>
)
<article-title>The million song dataset</article-title>
.
<source>Proceedings of the 12th International Conference on Music Information Retrieval (ISMIR 2011)</source>
:
<fpage>591</fpage>
<lpage>596</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0110452-Jehan1">
<label>84</label>
<mixed-citation publication-type="other">Jehan T (2011) Analyzer Documentation. Available:
<ext-link ext-link-type="uri" xlink:href="http://developer.echonest.com/docs/v4/_static/AnalyzeDocumentation.pdf">http://developer.echonest.com/docs/v4/_static/AnalyzeDocumentation.pdf</ext-link>
. Accessed 1 September 2013.</mixed-citation>
</ref>
<ref id="pone.0110452-Kaminskas1">
<label>85</label>
<mixed-citation publication-type="journal">
<name>
<surname>Kaminskas</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Ricci</surname>
<given-names>F</given-names>
</name>
(
<year>2012</year>
)
<article-title>Contextual music information retrieval and recommendation: state of the art and challenges</article-title>
.
<source>Comput Sci Rev</source>
<volume>6</volume>
:
<fpage>89</fpage>
<lpage>119</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0110452-Li1">
<label>86</label>
<mixed-citation publication-type="other">Li Z, Xiang Q, Hockman J, Yang J, Yi Y,
<etal>et al</etal>
<bold>.</bold>
(2010) A music search engine for therapeutic gait training. Proceedings of the international conference on Multimedia. 627–630.</mixed-citation>
</ref>
<ref id="pone.0110452-Yi1">
<label>87</label>
<mixed-citation publication-type="other">Yi Y, Zhou Y, Wang Y (2011) A tempo-sensitive music search engine with multimodal inputs. Proceedings of the 1st international ACM workshop on Music information retrieval with user-centered and multimodal strategies. 13–18.</mixed-citation>
</ref>
<ref id="pone.0110452-Ellis2">
<label>88</label>
<mixed-citation publication-type="other">Ellis D, Bertin-Mahieux T (2011) Matlab introduction. Available:
<ext-link ext-link-type="uri" xlink:href="http://labrosa.ee.columbia.edu/millionsong/pages/matlab-introduction">http://labrosa.ee.columbia.edu/millionsong/pages/matlab-introduction</ext-link>
. Accessed 1 June 2014.</mixed-citation>
</ref>
<ref id="pone.0110452-Parzen1">
<label>89</label>
<mixed-citation publication-type="other">Parzen E (1962) On estimation of a probability density function and mode. Ann Math Stat: 1065–1076.</mixed-citation>
</ref>
<ref id="pone.0110452-Botev1">
<label>90</label>
<mixed-citation publication-type="journal">
<name>
<surname>Botev</surname>
<given-names>ZI</given-names>
</name>
,
<name>
<surname>Grotowski</surname>
<given-names>JF</given-names>
</name>
,
<name>
<surname>Kroese</surname>
<given-names>DP</given-names>
</name>
(
<year>2010</year>
)
<article-title>Kernel density estimation via diffusion</article-title>
.
<source>Ann Stat</source>
<volume>38</volume>
:
<fpage>2916</fpage>
<lpage>2957</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1214/10-AOS799">10.1214/10-AOS799</ext-link>
</comment>
</mixed-citation>
</ref>
<ref id="pone.0110452-Botev2">
<label>91</label>
<mixed-citation publication-type="other">Botev ZI (2011) Kernel Density Estimator (Matlab Central File Exchange). Kernel Density Estim Using Matlab. Available:
<ext-link ext-link-type="uri" xlink:href="http://www.mathworks.com/matlabcentral/fileexchange/file_infos/14034-kernel-density-estimator">http://www.mathworks.com/matlabcentral/fileexchange/file_infos/14034-kernel-density-estimator</ext-link>
. Accessed 9 September 2013.</mixed-citation>
</ref>
<ref id="pone.0110452-AllMusic1">
<label>92</label>
<mixed-citation publication-type="other">AllMusic (2013) Toni Braxton: Toni Braxton (1993). AllMusic Releases. Available:
<ext-link ext-link-type="uri" xlink:href="http://www.allmusic.com/album/toni-braxton-mw0000099255/releases">http://www.allmusic.com/album/toni-braxton-mw0000099255/releases</ext-link>
. Accessed 26 October 2013.</mixed-citation>
</ref>
<ref id="pone.0110452-Grieg1">
<label>93</label>
<mixed-citation publication-type="other">Grieg E (1888) Op. 46, No. 4: In the Hall of the Mountain King. Available:
<ext-link ext-link-type="uri" xlink:href="http://imslp.org/wiki/Special:ImagefromIndex/02017">http://imslp.org/wiki/Special:ImagefromIndex/02017</ext-link>
. Accessed 1 July 2014.</mixed-citation>
</ref>
<ref id="pone.0110452-Lamere1">
<label>94</label>
<mixed-citation publication-type="other">Lamere P (2011) Artist terms: What is the difference between weight and frequency? Echo Nest Dev Forums. Available:
<ext-link ext-link-type="uri" xlink:href="https://developer.echonest.com/forums/thread/353">https://developer.echonest.com/forums/thread/353</ext-link>
.</mixed-citation>
</ref>
<ref id="pone.0110452-TheEcho1">
<label>95</label>
<mixed-citation publication-type="other">The Echo Nest (2013) 7digital Partnership. Echo Nest Dev Cent. Available:
<ext-link ext-link-type="uri" xlink:href="http://developer.echonest.com/sandbox/7digital.html">http://developer.echonest.com/sandbox/7digital.html</ext-link>
. Accessed 22 October 2013.</mixed-citation>
</ref>
<ref id="pone.0110452-Ellis3">
<label>96</label>
<mixed-citation publication-type="journal">
<name>
<surname>Ellis</surname>
<given-names>DP</given-names>
</name>
(
<year>2007</year>
)
<article-title>Beat tracking by dynamic programming</article-title>
.
<source>J New Music Res</source>
<volume>36</volume>
:
<fpage>51</fpage>
<lpage>60</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0110452-Levy1">
<label>97</label>
<mixed-citation publication-type="other">Levy M (2011) Improving Perceptual Tempo Estimation with Crowd-Sourced Annotations. ISMIR. 317–322. Available:
<ext-link ext-link-type="uri" xlink:href="http://ismir2011.ismir.net/papers/OS4-2.pdf">http://ismir2011.ismir.net/papers/OS4-2.pdf</ext-link>
. Accessed 27 October 2013.</mixed-citation>
</ref>
<ref id="pone.0110452-Chen1">
<label>98</label>
<mixed-citation publication-type="other">Chen C-W, Lee K, Wu H-H (2009) Towards a Class-Based Representation of Perceptual Tempo for Music Retrieval. International Conference on Machine Learning and Applications. 602–607.</mixed-citation>
</ref>
<ref id="pone.0110452-Peeters1">
<label>99</label>
<mixed-citation publication-type="other">Peeters G, Flocon-Cholet J (2012) Perceptual tempo estimation using GMM-regression. Proceedings of the second international ACM workshop on Music information retrieval with user-centered and multimodal strategies. 45–50.</mixed-citation>
</ref>
<ref id="pone.0110452-Wang1">
<label>100</label>
<mixed-citation publication-type="journal">
<name>
<surname>Wang</surname>
<given-names>A</given-names>
</name>
(
<year>2006</year>
)
<article-title>The Shazam music recognition service</article-title>
.
<source>Commun ACM</source>
<volume>49</volume>
:
<fpage>44</fpage>
<lpage>48</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0110452-Jang1">
<label>101</label>
<mixed-citation publication-type="journal">
<name>
<surname>Jang</surname>
<given-names>JS</given-names>
</name>
,
<name>
<surname>Lee</surname>
<given-names>HR</given-names>
</name>
,
<name>
<surname>Yeh</surname>
<given-names>CH</given-names>
</name>
(
<year>2001</year>
)
<article-title>Query by tapping: A new paradigm for content-based music retrieval from acoustic input</article-title>
.
<source>Advances in Multimedia Information Processing-PCM 2001</source>
:
<fpage>590</fpage>
<lpage>597</lpage>
Available:
<ext-link ext-link-type="uri" xlink:href="http://www.springerlink.com/index/B301ALVLJ1G207Q8.pdf">http://www.springerlink.com/index/B301ALVLJ1G207Q8.pdf</ext-link>
. Accessed 16 August 2012.</mixed-citation>
</ref>
<ref id="pone.0110452-Zhu1">
<label>102</label>
<mixed-citation publication-type="other">Zhu S, Ellis RJ, Schlaug G, Ng YS, Wang Y (2014) Validating an iOS-based Rhythmic Auditory Cueing Evaluation (iRACE) for Parkinson’s Disease. Proceedings of the 22nd ACM International Conference on Multimedia. Orlando, FL.</mixed-citation>
</ref>
<ref id="pone.0110452-Tomic1">
<label>103</label>
<mixed-citation publication-type="journal">
<name>
<surname>Tomic</surname>
<given-names>ST</given-names>
</name>
,
<name>
<surname>Janata</surname>
<given-names>P</given-names>
</name>
(
<year>2008</year>
)
<article-title>Beyond the beat: modeling metric structure in music and performance</article-title>
.
<source>J Acoust Soc Am</source>
<volume>124</volume>
:
<fpage>4024</fpage>
<lpage>4041</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1121/1.3006382">10.1121/1.3006382</ext-link>
</comment>
<pub-id pub-id-type="pmid">19206825</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0110452-McKinney2">
<label>104</label>
<mixed-citation publication-type="journal">
<name>
<surname>McKinney</surname>
<given-names>MF</given-names>
</name>
,
<name>
<surname>Moelants</surname>
<given-names>D</given-names>
</name>
(
<year>2006</year>
)
<article-title>Ambiguity in tempo perception: What draws listeners to different metrical levels?</article-title>
<source>Music Percept</source>
<volume>24</volume>
:
<fpage>155</fpage>
<lpage>166</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0110452-Grondin3">
<label>105</label>
<mixed-citation publication-type="journal">
<name>
<surname>Grondin</surname>
<given-names>S</given-names>
</name>
(
<year>2010</year>
)
<article-title>Timing and time perception: A review of recent behavioral and neuroscience findings and theoretical directions</article-title>
.
<source>Atten Percept Psychophys</source>
<volume>72</volume>
:
<fpage>561</fpage>
<lpage>582</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.3758/APP.72.3.561">10.3758/APP.72.3.561</ext-link>
</comment>
<pub-id pub-id-type="pmid">20348562</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0110452-Patel1">
<label>106</label>
<mixed-citation publication-type="journal">
<name>
<surname>Patel</surname>
<given-names>AD</given-names>
</name>
,
<name>
<surname>Iversen</surname>
<given-names>JR</given-names>
</name>
(
<year>2014</year>
)
<article-title>The evolutionary neuroscience of musical beat perception: the Action Simulation for Auditory Prediction (ASAP) hypothesis</article-title>
.
<source>Front Syst Neurosci</source>
<volume>8</volume>
:
<fpage>57</fpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.3389/fnsys.2014.00057">10.3389/fnsys.2014.00057</ext-link>
</comment>
<pub-id pub-id-type="pmid">24860439</pub-id>
</mixed-citation>
</ref>
</ref-list>
</back>
</pmc>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Wicri/Psychologie/explor/DanceTherParkinsonV1/Data/Pmc/Corpus
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000017 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Corpus/biblio.hfd -nk 000017 | SxmlIndent | more
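
As a Dilib-independent alternative (a minimal sketch; record.xml is a hypothetical filename), the record can be saved locally and its fields extracted with standard Unix tools, for example listing the PubMed identifiers cited in its reference list:

HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000017 > record.xml
# Extract the contents of the <pub-id pub-id-type="pmid"> elements.
grep -o 'pub-id-type="pmid">[0-9]*' record.xml | grep -o '[0-9]*$' | sort -un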

To link to this page within the Wicri network

{{Explor lien
   |wiki=    Wicri/Psychologie
   |area=    DanceTherParkinsonV1
   |flux=    Pmc
   |étape=   Corpus
   |type=    RBID
   |clé=     PMC:4254286
   |texte=   Quantifying Auditory Temporal Stability in a Large Database of Recorded Music
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Corpus/RBID.i   -Sk "pubmed:25469636" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Corpus/biblio.hfd   \
       | NlmPubMed2Wicri -a DanceTherParkinsonV1 
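
Before running the full pipeline, the first stage can be run on its own to confirm that the index contains the key (a sketch using only the command and flags already shown above); if it returns nothing, the later stages have no record to convert:

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Corpus/RBID.i -Sk "pubmed:25469636"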

Wicri

This area was generated with Dilib version V0.6.35.
Data generation: Sun Aug 9 17:42:30 2020. Site generation: Mon Feb 12 22:53:51 2024