Mozart exploration server

Warning: this site is under development!
Warning: this site was generated by computational means from raw corpora.
The information is therefore not validated.

(Dis-)Harmony in movement: effects of musical dissonance on movement timing and form

Internal identifier: 000072 (Pmc/Checkpoint); previous: 000071; next: 000073

Authors: Naeem Komeilipoor [Italy, Netherlands]; Matthew W. M. Rodger; Cathy M. Craig; Paola Cesari [Italy]

Source:

RBID: PMC:4369290

Abstract

While the origins of consonance and dissonance in terms of acoustics, psychoacoustics and physiology have been debated for centuries, their plausible effects on movement synchronization have largely been ignored. The present study aimed to address this by investigating whether, and if so how, consonant/dissonant pitch intervals affect the spatiotemporal properties of regular reciprocal aiming movements. We compared movements synchronized either to consonant or to dissonant sounds and showed that they were differentially influenced by the degree of consonance of the sound presented. Interestingly, the difference was present after the sound stimulus was removed. In this case, the performance measured after consonant sound exposure was found to be more stable and accurate, with a higher percentage of information/movement coupling (tau coupling) and a higher degree of movement circularity when compared to performance measured after the exposure to dissonant sounds. We infer that the neural resonance representing consonant tones leads to finer perception/action coupling which in turn may help explain the prevailing preference for these types of tones.


URL:
DOI: 10.1007/s00221-015-4233-9
PubMed: 25725774
PubMed Central: 4369290



The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">(Dis-)Harmony in movement: effects of musical dissonance on movement timing and form</title>
<author>
<name sortKey="Komeilipoor, Naeem" sort="Komeilipoor, Naeem" uniqKey="Komeilipoor N" first="Naeem" last="Komeilipoor">Naeem Komeilipoor</name>
<affiliation wicri:level="1">
<nlm:aff id="Aff1">Department of Neurological and Movement Sciences, University of Verona, Via Casorati 43, 37131 Verona, Italy</nlm:aff>
<country xml:lang="fr">Italie</country>
<wicri:regionArea>Department of Neurological and Movement Sciences, University of Verona, Via Casorati 43, 37131 Verona</wicri:regionArea>
<wicri:noRegion>37131 Verona</wicri:noRegion>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="Aff2">MOVE Research Institute, VU University Amsterdam, 1081 BT Amsterdam, The Netherlands</nlm:aff>
<country xml:lang="fr">Pays-Bas</country>
<wicri:regionArea>MOVE Research Institute, VU University Amsterdam, 1081 BT Amsterdam</wicri:regionArea>
<wicri:noRegion>1081 BT Amsterdam</wicri:noRegion>
</affiliation>
</author>
<author>
<name sortKey="Rodger, Matthew W M" sort="Rodger, Matthew W M" uniqKey="Rodger M" first="Matthew W. M." last="Rodger">Matthew W. M. Rodger</name>
<affiliation>
<nlm:aff id="Aff3">School of Psychology, Queen’s University Belfast, David Keir Building, 18-30 Malone Road, Belfast, BT9 5BN UK</nlm:aff>
<wicri:noCountry code="subfield">BT9 5BN UK</wicri:noCountry>
</affiliation>
</author>
<author>
<name sortKey="Craig, Cathy M" sort="Craig, Cathy M" uniqKey="Craig C" first="Cathy M." last="Craig">Cathy M. Craig</name>
<affiliation>
<nlm:aff id="Aff3">School of Psychology, Queen’s University Belfast, David Keir Building, 18-30 Malone Road, Belfast, BT9 5BN UK</nlm:aff>
<wicri:noCountry code="subfield">BT9 5BN UK</wicri:noCountry>
</affiliation>
</author>
<author>
<name sortKey="Cesari, Paola" sort="Cesari, Paola" uniqKey="Cesari P" first="Paola" last="Cesari">Paola Cesari</name>
<affiliation wicri:level="1">
<nlm:aff id="Aff1">Department of Neurological and Movement Sciences, University of Verona, Via Casorati 43, 37131 Verona, Italy</nlm:aff>
<country xml:lang="fr">Italie</country>
<wicri:regionArea>Department of Neurological and Movement Sciences, University of Verona, Via Casorati 43, 37131 Verona</wicri:regionArea>
<wicri:noRegion>37131 Verona</wicri:noRegion>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">25725774</idno>
<idno type="pmc">4369290</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4369290</idno>
<idno type="RBID">PMC:4369290</idno>
<idno type="doi">10.1007/s00221-015-4233-9</idno>
<date when="2015">2015</date>
<idno type="wicri:Area/Pmc/Corpus">000088</idno>
<idno type="wicri:Area/Pmc/Curation">000088</idno>
<idno type="wicri:Area/Pmc/Checkpoint">000072</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">(Dis-)Harmony in movement: effects of musical dissonance on movement timing and form</title>
<author>
<name sortKey="Komeilipoor, Naeem" sort="Komeilipoor, Naeem" uniqKey="Komeilipoor N" first="Naeem" last="Komeilipoor">Naeem Komeilipoor</name>
<affiliation wicri:level="1">
<nlm:aff id="Aff1">Department of Neurological and Movement Sciences, University of Verona, Via Casorati 43, 37131 Verona, Italy</nlm:aff>
<country xml:lang="fr">Italie</country>
<wicri:regionArea>Department of Neurological and Movement Sciences, University of Verona, Via Casorati 43, 37131 Verona</wicri:regionArea>
<wicri:noRegion>37131 Verona</wicri:noRegion>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="Aff2">MOVE Research Institute, VU University Amsterdam, 1081 BT Amsterdam, The Netherlands</nlm:aff>
<country xml:lang="fr">Pays-Bas</country>
<wicri:regionArea>MOVE Research Institute, VU University Amsterdam, 1081 BT Amsterdam</wicri:regionArea>
<wicri:noRegion>1081 BT Amsterdam</wicri:noRegion>
</affiliation>
</author>
<author>
<name sortKey="Rodger, Matthew W M" sort="Rodger, Matthew W M" uniqKey="Rodger M" first="Matthew W. M." last="Rodger">Matthew W. M. Rodger</name>
<affiliation>
<nlm:aff id="Aff3">School of Psychology, Queen’s University Belfast, David Keir Building, 18-30 Malone Road, Belfast, BT9 5BN UK</nlm:aff>
<wicri:noCountry code="subfield">BT9 5BN UK</wicri:noCountry>
</affiliation>
</author>
<author>
<name sortKey="Craig, Cathy M" sort="Craig, Cathy M" uniqKey="Craig C" first="Cathy M." last="Craig">Cathy M. Craig</name>
<affiliation>
<nlm:aff id="Aff3">School of Psychology, Queen’s University Belfast, David Keir Building, 18-30 Malone Road, Belfast, BT9 5BN UK</nlm:aff>
<wicri:noCountry code="subfield">BT9 5BN UK</wicri:noCountry>
</affiliation>
</author>
<author>
<name sortKey="Cesari, Paola" sort="Cesari, Paola" uniqKey="Cesari P" first="Paola" last="Cesari">Paola Cesari</name>
<affiliation wicri:level="1">
<nlm:aff id="Aff1">Department of Neurological and Movement Sciences, University of Verona, Via Casorati 43, 37131 Verona, Italy</nlm:aff>
<country xml:lang="fr">Italie</country>
<wicri:regionArea>Department of Neurological and Movement Sciences, University of Verona, Via Casorati 43, 37131 Verona</wicri:regionArea>
<wicri:noRegion>37131 Verona</wicri:noRegion>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Experimental Brain Research</title>
<idno type="ISSN">0014-4819</idno>
<idno type="e-ISSN">1432-1106</idno>
<imprint>
<date when="2015">2015</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>While the origins of consonance and dissonance in terms of acoustics, psychoacoustics and physiology have been debated for centuries, their plausible effects on movement synchronization have largely been ignored. The present study aimed to address this by investigating whether, and if so how, consonant/dissonant pitch intervals affect the spatiotemporal properties of regular reciprocal aiming movements. We compared movements synchronized either to consonant or to dissonant sounds and showed that they were differentially influenced by the degree of consonance of the sound presented. Interestingly, the difference was present after the sound stimulus was removed. In this case, the performance measured after consonant sound exposure was found to be more stable and accurate, with a higher percentage of information/movement coupling (tau coupling) and a higher degree of movement circularity when compared to performance measured after the exposure to dissonant sounds. We infer that the neural resonance representing consonant tones leads to finer perception/action coupling which in turn may help explain the prevailing preference for these types of tones.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Bidelman, Gm" uniqKey="Bidelman G">GM Bidelman</name>
</author>
<author>
<name sortKey="Heinz, Mg" uniqKey="Heinz M">MG Heinz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bidelman, Gm" uniqKey="Bidelman G">GM Bidelman</name>
</author>
<author>
<name sortKey="Krishnan, A" uniqKey="Krishnan A">A Krishnan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bidelman, Gm" uniqKey="Bidelman G">GM Bidelman</name>
</author>
<author>
<name sortKey="Krishnan, A" uniqKey="Krishnan A">A Krishnan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bie Kiewicz, Mmn" uniqKey="Bie Kiewicz M">MMN Bieńkiewicz</name>
</author>
<author>
<name sortKey="Rodger, Mwm" uniqKey="Rodger M">MWM Rodger</name>
</author>
<author>
<name sortKey="Craig, Cm" uniqKey="Craig C">CM Craig</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bie Kiewicz, Mmn" uniqKey="Bie Kiewicz M">MMN Bieńkiewicz</name>
</author>
<author>
<name sortKey="Young, W" uniqKey="Young W">W Young</name>
</author>
<author>
<name sortKey="Craig, Cm" uniqKey="Craig C">CM Craig</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brattico, E" uniqKey="Brattico E">E Brattico</name>
</author>
<author>
<name sortKey="Tervaniemi, M" uniqKey="Tervaniemi M">M Tervaniemi</name>
</author>
<author>
<name sortKey="N T Nen, R" uniqKey="N T Nen R">R Näätänen</name>
</author>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I Peretz</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Claassen, Do" uniqKey="Claassen D">DO Claassen</name>
</author>
<author>
<name sortKey="Jones, Crg" uniqKey="Jones C">CRG Jones</name>
</author>
<author>
<name sortKey="Yu, M" uniqKey="Yu M">M Yu</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Craig, C" uniqKey="Craig C">C Craig</name>
</author>
<author>
<name sortKey="Pepping, Gj" uniqKey="Pepping G">GJ Pepping</name>
</author>
<author>
<name sortKey="Grealy, M" uniqKey="Grealy M">M Grealy</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Droit Volet, S" uniqKey="Droit Volet S">S Droit-Volet</name>
</author>
<author>
<name sortKey="Ramos, D" uniqKey="Ramos D">D Ramos</name>
</author>
<author>
<name sortKey="Bueno, Jlo" uniqKey="Bueno J">JLO Bueno</name>
</author>
<author>
<name sortKey="Bigand, E" uniqKey="Bigand E">E Bigand</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Farbood, Mm" uniqKey="Farbood M">MM Farbood</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fishman, Yi" uniqKey="Fishman Y">YI Fishman</name>
</author>
<author>
<name sortKey="Volkov, Io" uniqKey="Volkov I">IO Volkov</name>
</author>
<author>
<name sortKey="Noh, Md" uniqKey="Noh M">MD Noh</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Foss, Ah" uniqKey="Foss A">AH Foss</name>
</author>
<author>
<name sortKey="Altschuler, El" uniqKey="Altschuler E">EL Altschuler</name>
</author>
<author>
<name sortKey="James, Kh" uniqKey="James K">KH James</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fritz, T" uniqKey="Fritz T">T Fritz</name>
</author>
<author>
<name sortKey="Jentschke, S" uniqKey="Jentschke S">S Jentschke</name>
</author>
<author>
<name sortKey="Gosselin, N" uniqKey="Gosselin N">N Gosselin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gaver, Ww" uniqKey="Gaver W">WW Gaver</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Helmholtz, H" uniqKey="Helmholtz H">H Helmholtz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Itoh, K" uniqKey="Itoh K">K Itoh</name>
</author>
<author>
<name sortKey="Suwazono, S" uniqKey="Suwazono S">S Suwazono</name>
</author>
<author>
<name sortKey="Nakada, T" uniqKey="Nakada T">T Nakada</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ivry, R" uniqKey="Ivry R">R Ivry</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kaiser, R" uniqKey="Kaiser R">R Kaiser</name>
</author>
<author>
<name sortKey="Keller, Pe" uniqKey="Keller P">PE Keller</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Koelsch, S" uniqKey="Koelsch S">S Koelsch</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Koelsch, S" uniqKey="Koelsch S">S Koelsch</name>
</author>
<author>
<name sortKey="Fritz, T" uniqKey="Fritz T">T Fritz</name>
</author>
<author>
<name sortKey="Cramon, V" uniqKey="Cramon V">V Cramon</name>
</author>
<author>
<name sortKey="Dy" uniqKey="Dy">DY</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Krohn, Ki" uniqKey="Krohn K">KI Krohn</name>
</author>
<author>
<name sortKey="Brattico, E" uniqKey="Brattico E">E Brattico</name>
</author>
<author>
<name sortKey="V Lim Ki, V" uniqKey="V Lim Ki V">V Välimäki</name>
</author>
<author>
<name sortKey="Tervaniemi, M" uniqKey="Tervaniemi M">M Tervaniemi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Krumhansl, Cl" uniqKey="Krumhansl C">CL Krumhansl</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lee, D" uniqKey="Lee D">D Lee</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lehne, M" uniqKey="Lehne M">M Lehne</name>
</author>
<author>
<name sortKey="Rohrmeier, M" uniqKey="Rohrmeier M">M Rohrmeier</name>
</author>
<author>
<name sortKey="Gollmann, D" uniqKey="Gollmann D">D Gollmann</name>
</author>
<author>
<name sortKey="Koelsch, S" uniqKey="Koelsch S">S Koelsch</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lehne, M" uniqKey="Lehne M">M Lehne</name>
</author>
<author>
<name sortKey="Rohrmeier, M" uniqKey="Rohrmeier M">M Rohrmeier</name>
</author>
<author>
<name sortKey="Koelsch, S" uniqKey="Koelsch S">S Koelsch</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lim, I" uniqKey="Lim I">I Lim</name>
</author>
<author>
<name sortKey="Van Wegen, E" uniqKey="Van Wegen E">E van Wegen</name>
</author>
<author>
<name sortKey="De Goede, C" uniqKey="De Goede C">C de Goede</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Masataka, N" uniqKey="Masataka N">N Masataka</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Maslennikova, Av" uniqKey="Maslennikova A">AV Maslennikova</name>
</author>
<author>
<name sortKey="Varlamov, Aa" uniqKey="Varlamov A">AA Varlamov</name>
</author>
<author>
<name sortKey="Strelets, Vb" uniqKey="Strelets V">VB Strelets</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Minati, L" uniqKey="Minati L">L Minati</name>
</author>
<author>
<name sortKey="Rosazza, C" uniqKey="Rosazza C">C Rosazza</name>
</author>
<author>
<name sortKey="D Ncerti, L" uniqKey="D Ncerti L">L D’Incerti</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Phillips, Dp" uniqKey="Phillips D">DP Phillips</name>
</author>
<author>
<name sortKey="Hall, Se" uniqKey="Hall S">SE Hall</name>
</author>
<author>
<name sortKey="Boehnke, Se" uniqKey="Boehnke S">SE Boehnke</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Repp, Bh" uniqKey="Repp B">BH Repp</name>
</author>
<author>
<name sortKey="Su, Yh" uniqKey="Su Y">YH Su</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rodger, Mwm" uniqKey="Rodger M">MWM Rodger</name>
</author>
<author>
<name sortKey="Craig, Cm" uniqKey="Craig C">CM Craig</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rodger, Mwm" uniqKey="Rodger M">MWM Rodger</name>
</author>
<author>
<name sortKey="Young, Wr" uniqKey="Young W">WR Young</name>
</author>
<author>
<name sortKey="Craig, Cm" uniqKey="Craig C">CM Craig</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sammler, D" uniqKey="Sammler D">D Sammler</name>
</author>
<author>
<name sortKey="Grigutsch, M" uniqKey="Grigutsch M">M Grigutsch</name>
</author>
<author>
<name sortKey="Fritz, T" uniqKey="Fritz T">T Fritz</name>
</author>
<author>
<name sortKey="Koelsch, S" uniqKey="Koelsch S">S Koelsch</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Satoh, M" uniqKey="Satoh M">M Satoh</name>
</author>
<author>
<name sortKey="Kuzuhara, S" uniqKey="Kuzuhara S">S Kuzuhara</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schwartz, Da" uniqKey="Schwartz D">DA Schwartz</name>
</author>
<author>
<name sortKey="Howe, Cq" uniqKey="Howe C">CQ Howe</name>
</author>
<author>
<name sortKey="Purves, D" uniqKey="Purves D">D Purves</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sievers, B" uniqKey="Sievers B">B Sievers</name>
</author>
<author>
<name sortKey="Polansky, L" uniqKey="Polansky L">L Polansky</name>
</author>
<author>
<name sortKey="Casey, M" uniqKey="Casey M">M Casey</name>
</author>
<author>
<name sortKey="Wheatley, T" uniqKey="Wheatley T">T Wheatley</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sorce, R" uniqKey="Sorce R">R Sorce</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Styns, F" uniqKey="Styns F">F Styns</name>
</author>
<author>
<name sortKey="Van Noorden, L" uniqKey="Van Noorden L">L van Noorden</name>
</author>
<author>
<name sortKey="Moelants, D" uniqKey="Moelants D">D Moelants</name>
</author>
<author>
<name sortKey="Leman, M" uniqKey="Leman M">M Leman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Thaut, M" uniqKey="Thaut M">M Thaut</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Thaut, Mh" uniqKey="Thaut M">MH Thaut</name>
</author>
<author>
<name sortKey="Abiru, M" uniqKey="Abiru M">M Abiru</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tierney, A" uniqKey="Tierney A">A Tierney</name>
</author>
<author>
<name sortKey="Kraus, N" uniqKey="Kraus N">N Kraus</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tillmann, B" uniqKey="Tillmann B">B Tillmann</name>
</author>
<author>
<name sortKey="Janata, P" uniqKey="Janata P">P Janata</name>
</author>
<author>
<name sortKey="Bharucha, Jj" uniqKey="Bharucha J">JJ Bharucha</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Trainor, L" uniqKey="Trainor L">L Trainor</name>
</author>
<author>
<name sortKey="Tsang, C" uniqKey="Tsang C">C Tsang</name>
</author>
<author>
<name sortKey="Cheung, V" uniqKey="Cheung V">V Cheung</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tramo, Mj" uniqKey="Tramo M">MJ Tramo</name>
</author>
<author>
<name sortKey="Cariani, Pa" uniqKey="Cariani P">PA Cariani</name>
</author>
<author>
<name sortKey="Delgutte, B" uniqKey="Delgutte B">B Delgutte</name>
</author>
<author>
<name sortKey="Braida, Ld" uniqKey="Braida L">LD Braida</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vos, Pg" uniqKey="Vos P">PG Vos</name>
</author>
<author>
<name sortKey="Troost, Jimm" uniqKey="Troost J">JIMM Troost</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wing, Am" uniqKey="Wing A">AM Wing</name>
</author>
<author>
<name sortKey="Kristofferson, Ab" uniqKey="Kristofferson A">AB Kristofferson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wittwer, Je" uniqKey="Wittwer J">JE Wittwer</name>
</author>
<author>
<name sortKey="Webster, Ke" uniqKey="Webster K">KE Webster</name>
</author>
<author>
<name sortKey="Hill, K" uniqKey="Hill K">K Hill</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Young, Wr" uniqKey="Young W">WR Young</name>
</author>
<author>
<name sortKey="Rodger, Mwm" uniqKey="Rodger M">MWM Rodger</name>
</author>
<author>
<name sortKey="Craig, Cm" uniqKey="Craig C">CM Craig</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Young, Wr" uniqKey="Young W">WR Young</name>
</author>
<author>
<name sortKey="Rodger, Mwm" uniqKey="Rodger M">MWM Rodger</name>
</author>
<author>
<name sortKey="Craig, Cm" uniqKey="Craig C">CM Craig</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zentner, M" uniqKey="Zentner M">M Zentner</name>
</author>
<author>
<name sortKey="Eerola, T" uniqKey="Eerola T">T Eerola</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zentner, Mr" uniqKey="Zentner M">MR Zentner</name>
</author>
<author>
<name sortKey="Kagan, J" uniqKey="Kagan J">J Kagan</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Exp Brain Res</journal-id>
<journal-id journal-id-type="iso-abbrev">Exp Brain Res</journal-id>
<journal-title-group>
<journal-title>Experimental Brain Research</journal-title>
</journal-title-group>
<issn pub-type="ppub">0014-4819</issn>
<issn pub-type="epub">1432-1106</issn>
<publisher>
<publisher-name>Springer Berlin Heidelberg</publisher-name>
<publisher-loc>Berlin/Heidelberg</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">25725774</article-id>
<article-id pub-id-type="pmc">4369290</article-id>
<article-id pub-id-type="publisher-id">4233</article-id>
<article-id pub-id-type="doi">10.1007/s00221-015-4233-9</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Research Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>(Dis-)Harmony in movement: effects of musical dissonance on movement timing and form</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Komeilipoor</surname>
<given-names>Naeem</given-names>
</name>
<address>
<phone>+39 045 8425139</phone>
<email>naeem.komeilipoor@univr.it</email>
</address>
<xref ref-type="aff" rid="Aff1"></xref>
<xref ref-type="aff" rid="Aff2"></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Rodger</surname>
<given-names>Matthew W. M.</given-names>
</name>
<xref ref-type="aff" rid="Aff3"></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Craig</surname>
<given-names>Cathy M.</given-names>
</name>
<xref ref-type="aff" rid="Aff3"></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Cesari</surname>
<given-names>Paola</given-names>
</name>
<xref ref-type="aff" rid="Aff1"></xref>
</contrib>
<aff id="Aff1">
<label></label>
Department of Neurological and Movement Sciences, University of Verona, Via Casorati 43, 37131 Verona, Italy</aff>
<aff id="Aff2">
<label></label>
MOVE Research Institute, VU University Amsterdam, 1081 BT Amsterdam, The Netherlands</aff>
<aff id="Aff3">
<label></label>
School of Psychology, Queen’s University Belfast, David Keir Building, 18-30 Malone Road, Belfast, BT9 5BN UK</aff>
</contrib-group>
<pub-date pub-type="epub">
<day>1</day>
<month>3</month>
<year>2015</year>
</pub-date>
<pub-date pub-type="pmc-release">
<day>1</day>
<month>3</month>
<year>2015</year>
</pub-date>
<pub-date pub-type="ppub">
<year>2015</year>
</pub-date>
<volume>233</volume>
<issue>5</issue>
<fpage>1585</fpage>
<lpage>1595</lpage>
<history>
<date date-type="received">
<day>14</day>
<month>10</month>
<year>2014</year>
</date>
<date date-type="accepted">
<day>14</day>
<month>2</month>
<year>2015</year>
</date>
</history>
<permissions>
<copyright-statement>© The Author(s) 2015</copyright-statement>
<license license-type="OpenAccess">
<license-p>
<bold>Open Access</bold>
This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.</license-p>
</license>
</permissions>
<abstract id="Abs1">
<p>While the origins of consonance and dissonance in terms of acoustics, psychoacoustics and physiology have been debated for centuries, their plausible effects on movement synchronization have largely been ignored. The present study aimed to address this by investigating whether, and if so how, consonant/dissonant pitch intervals affect the spatiotemporal properties of regular reciprocal aiming movements. We compared movements synchronized either to consonant or to dissonant sounds and showed that they were differentially influenced by the degree of consonance of the sound presented. Interestingly, the difference was present after the sound stimulus was removed. In this case, the performance measured after consonant sound exposure was found to be more stable and accurate, with a higher percentage of information/movement coupling (tau coupling) and a higher degree of movement circularity when compared to performance measured after the exposure to dissonant sounds. We infer that the neural resonance representing consonant tones leads to finer perception/action coupling which in turn may help explain the prevailing preference for these types of tones.</p>
</abstract>
<kwd-group xml:lang="en">
<title>Keywords</title>
<kwd>Consonance dissonance sounds</kwd>
<kwd>Musical pitch intervals</kwd>
<kwd>Sensorimotor synchronization</kwd>
</kwd-group>
<custom-meta-group>
<custom-meta>
<meta-name>issue-copyright-statement</meta-name>
<meta-value>© Springer-Verlag Berlin Heidelberg 2015</meta-value>
</custom-meta>
</custom-meta-group>
</article-meta>
</front>
<body>
<sec id="Sec1" sec-type="intro">
<title>Introduction</title>
<p>We interact with our environment through movement, and the way we move is influenced by many different types of perceptual information. For instance, environmental sounds carry an ecological significance that allows us to move in the direction of an object, detect the presence of objects, interact with others and even interpret events using sound alone (Gaver
<xref ref-type="bibr" rid="CR16">1993</xref>
; Carello et al.
<xref ref-type="bibr" rid="CR7">2005</xref>
). One of the key ways in which humans naturally interact with their auditory environment is when they synchronize their movements to regular patterns of sound (e.g., dancing to a beat). Indeed, to be able to synchronize movements to sounds, an activity humans are very skilled at, the nervous system must pick up information from the auditory perceptual stream about the time until the next beat sounds and use this information to prospectively guide the generation of consecutive actions (Craig et al.
<xref ref-type="bibr" rid="CR9">2005</xref>
). Given the absence of a continuous source of external temporal information to guide the action extrinsically, the nervous system must create its own source of dynamic temporal information (Tau-G, Craig et al.
<xref ref-type="bibr" rid="CR9">2005</xref>
). It has already been shown that the structure of sound events (discrete vs. continuous) can affect the processes by which movements are timed to sounds, even if the interval durations are the same (Rodger and Craig
<xref ref-type="bibr" rid="CR35">2011</xref>
,
<xref ref-type="bibr" rid="CR36">2013</xref>
). Although synchronization of body movement to the perceived musical tempo has been widely studied (see Repp and Su
<xref ref-type="bibr" rid="CR34">2013</xref>
, for a review), the effects of other aspects of auditory stimuli on movement–sound synchronization, such as musical pitch relationships, have largely been neglected.</p>
<p>Synchronizing movements with musical rhythms is indeed one of the most natural and instinctive ways in which humans interact with their auditory environment. The inextricable link between sound and movement forms the basis of music and dance performance. Interestingly, it has been shown that music and movements share similar structures and present common cross-cultural expressive codes (Sievers et al.
<xref ref-type="bibr" rid="CR41">2013</xref>
). In the same vein, the evaluation of the emotional content of observed biological motion (point-light displays of human motion) has been shown to be strongly influenced by the presence of accompanying music (Kaiser and Keller
<xref ref-type="bibr" rid="CR20">2011</xref>
). From the first month of life, infants move their bodies more naturally in the presence of musical rhythm than of speech rhythm (Zentner and Eerola
<xref ref-type="bibr" rid="CR55">2010</xref>
), and they are not only able to synchronize their movements with different musical tempi but are also selectively sensitive to melodies with different pitch structures (Zentner and Kagan
<xref ref-type="bibr" rid="CR56">1998</xref>
). In a different scenario, human adults have been shown to use a different walking strategy under the guidance of music than under a metronome beat (Styns et al.
<xref ref-type="bibr" rid="CR43">2007</xref>
; Wittwer et al.
<xref ref-type="bibr" rid="CR52">2013</xref>
). A number of studies have revealed that musical rhythm can even enhance motor performance in Parkinson’s disease (PD) (Thaut and Abiru
<xref ref-type="bibr" rid="CR45">2010</xref>
; Satoh and Kuzuhara
<xref ref-type="bibr" rid="CR39">2008</xref>
; Lim et al.
<xref ref-type="bibr" rid="CR28">2005</xref>
). Moreover, using a finger-tapping paradigm, it has been shown that synchronization error is significantly smaller when tapping to music cues than to metronome ones (Thaut
<xref ref-type="bibr" rid="CR44">1997</xref>
). What emerges from these studies is that in addition to the timing cues music conveys, other properties also help guide the coordination of movement. Hence, investigating whether and how non-temporal cues, such as pitch and harmony, influence movement synchronization is crucial for understanding the inseparable connection between action and perception.</p>
<p>Consonant and dissonant pitch relationships in music provide the basis of melody and harmony. It has been recognized since antiquity that musical chords are either consonant (sounding pleasant or stable) or dissonant (sounding unpleasant or unstable). Although composers make use of both intervals to evoke diverse feelings of “tension” and “resolution,” consonant intervals in tonal music occur more often than dissonant ones (Vos and Troost
<xref ref-type="bibr" rid="CR50">1989</xref>
). Consonant intervals are also preferred by human infants (Trainor et al.
<xref ref-type="bibr" rid="CR48">2002</xref>
; Zentner and Kagan
<xref ref-type="bibr" rid="CR56">1998</xref>
; Masataka
<xref ref-type="bibr" rid="CR29">2006</xref>
). Remarkably, the preference for consonance over dissonance appears to be cross-cultural: it has been reported among native African populations with no prior experience of Western music (Fritz et al.
<xref ref-type="bibr" rid="CR15">2009</xref>
). Moreover, Schwartz et al. (
<xref ref-type="bibr" rid="CR40">2003</xref>
) found a correlation between musical consonance rankings and the probability distribution of amplitude–frequency of human utterances, suggesting that the preference for musical pitch intervals is based on similar physical principals that rule human vocalization (Schwartz et al.
<xref ref-type="bibr" rid="CR40">2003</xref>
). Overall, it seems that some characteristics of musical pitch interval perception might be innate and represent a by-product of fundamental biological properties.</p>
<p>While we can identify differences in the preference and occurrence of consonant and dissonant pitch intervals in nature, it is also possible to define these differences at a mathematical or physical level. The Greek scholar Pythagoras defined the occurrence of consonance as being when the length of string segments forms simple integer ratios (e.g., 3:2, 2:1) with dissonant intervals being when string length ratios are more complex (e.g., 16:15, 243:128). Hermann von Helmholtz argued that consonance occurs not only as a consequence of simple frequency ratio relationships, but also as a result of the interference between overtones of slightly different frequencies—a phenomenon known as
<italic>beating</italic>
. When the harmonics of complex tones are close, the beating gets faster and forms an unpleasant sensation called roughness (Helmholtz
<xref ref-type="bibr" rid="CR17">1954</xref>
).</p>
<p>A number of studies have attempted to investigate the neuronal substrates underlying the perception of consonance and dissonance. Functional magnetic resonance imaging (fMRI) has revealed differences in activation in different brain areas such as the cingulate and frontal gyrus, and the premotor cortex while listening to dissonant over consonant chords (Tillmann et al.
<xref ref-type="bibr" rid="CR47">2003</xref>
; Foss et al.
<xref ref-type="bibr" rid="CR14">2007</xref>
; Minati et al.
<xref ref-type="bibr" rid="CR32">2009</xref>
). A recent EEG study provided evidence that consonance and dissonance activate neural regions associated with pleasant and unpleasant emotional states, respectively (Maslennikova et al.
<xref ref-type="bibr" rid="CR30">2013</xref>
). Other studies have investigated the neural correlates of emotional responses to consonant (pleasant) and dissonant (unpleasant) music (for review, see Koelsch et al.
<xref ref-type="bibr" rid="CR22">2006</xref>
; Sammler et al.
<xref ref-type="bibr" rid="CR38">2007</xref>
). Studies of event-related potentials (ERPs) revealed that such modulations in cortical activity were correlated with the hierarchical ordering of musical pitch (i.e., the degree of consonance or dissonance of different tone combinations in a musical scale) (Brattico et al.
<xref ref-type="bibr" rid="CR6">2006</xref>
; Krohn et al.
<xref ref-type="bibr" rid="CR23">2007</xref>
; Itoh et al.
<xref ref-type="bibr" rid="CR18">2010</xref>
). In a recent study, Bidelman and Krishnan (
<xref ref-type="bibr" rid="CR2">2009</xref>
) showed that consonant intervals yield more robust and synchronous phase locking of auditory brainstem responses, that is, the mechanism by which the auditory nerves fire at or near the same phase angle of a sound wave. Importantly, this result is in accord with previous animal studies revealing a correlation between the perceived consonance of musical pitch relationships and the magnitude of phase-locked activity in the primary auditory cortex (Fishman et al.
<xref ref-type="bibr" rid="CR13">2001</xref>
), the auditory nerve (Tramo et al.
<xref ref-type="bibr" rid="CR49">2001</xref>
) and the midbrain (Mckinney et al.
<xref ref-type="bibr" rid="CR31">2001</xref>
). Together, these studies provide compelling evidence that musical scale pitch hierarchies are preserved at both cortical and subcortical levels, which indicates that the auditory system is tuned to the biological relevance of consonant versus dissonant sounds. Importantly, Tierney and Kraus (
<xref ref-type="bibr" rid="CR46">2013</xref>
) demonstrated that the ability to synchronize to a beat relates to the phase-locking response in the auditory brainstem; less auditory–motor synchronization variability when tapping to a beat is associated with more consistent responses in the auditory brainstem. Hence, a more stable neural representation of consonant intervals compared with dissonant ones could lead to a more stable motor output even during the continuation phase where no external pacing stimulus is present. The latter might happen due to different emotional states evoked by sounds during the synchronization phase, which might persist into the continuation phase and in turn affect the types of movements produced.</p>
<p>Given the suggested ecological relevance of consonance/dissonance, it is possible that the harmonic structure of sounds may affect the spatiotemporal characteristics of movements when such sounds are used to guide timed actions. Our study addresses this issue in a synchronization–continuation paradigm, in which participants were asked to synchronize their movements with auditory tones and then to maintain the same pattern of movements in the absence of the auditory stimuli. The pairs of tones delivered differed in their degree of dissonance, from highly consonant (C & G) to highly dissonant (C & C#). By measuring timing accuracy and variability, along with parameters defining the movement trajectory form, we assessed the effects of auditory consonance/dissonance on participants’ movements.</p>
<p>Finally, we tested the effects of sound on movement by applying a model derived from tau-coupling theory (Craig et al.
<xref ref-type="bibr" rid="CR9">2005</xref>
), which describes how the prospective temporal information generated within the central nervous system (an intrinsic tau-guide) can facilitate the prospective control of movement for synchronizing movement to beats. The intrinsic tau-guide is developed based on general tau theory (Lee
<xref ref-type="bibr" rid="CR25">1998</xref>
), which aims to describe the control of intrinsically paced movements. In terms of sensorimotor synchronization, Craig et al. (
<xref ref-type="bibr" rid="CR9">2005</xref>
) postulated that during the synchronization of movement with beats the inter-onset intervals are represented in the form of a “tau-guide,” a dynamic neural representation that prospectively informs individuals about the time remaining to the arrival of the next beat. They reported that individuals accomplish the task by coupling their movement onto the tau-guide where the tau of the movement gap (
<italic>τ</italic>
<sub>m</sub>
—the movement gap divided by its closure rate) is kept in constant ratio to the tau-guide (
<italic>τ</italic>
<sub>g</sub>
—the time-to-sounding of the next beat). Hence, the acoustic information of a metronome’s beat sets the parameters of the intrinsic tau-guide in the nervous system that consequently guides the spatiotemporal unfolding of the synchronization movement. What is not clear yet is whether the structure of an auditory event can differentially affect the tau-coupling procedure and consequently result in different movement timing processes.</p>
<p>Our overall aim was to test whether and how consonant/dissonant pitch intervals affect the spatiotemporal properties of regular reciprocal aiming movements. We hypothesized that (1) both the spatial and temporal dynamics of coordinated movement would differ when synchronizing movement to consonant compared with dissonant tones and (2) such differences in movement will be maintained when the stimuli are removed.</p>
</sec>
<sec id="Sec2" sec-type="methods">
<title>Methods</title>
<sec id="Sec3">
<title>Participants</title>
<p>Thirteen healthy, right-handed adults (7 females, 6 males) with no musical training (assessed via a questionnaire) volunteered to participate in the experiment. The mean age was 29.4 years (range 20–38 years).</p>
</sec>
<sec id="Sec4">
<title>Materials and apparatus</title>
<p>A set of four synthesized piano musical dyads (i.e., two-note musical intervals) was constructed as stimuli and presented to participants through noise-isolating headphones at a constant intensity (68 dB SPL). The stimuli consisted of two consonant intervals (perfect fourth: 4:3, perfect fifth: 3:2) and two dissonant intervals (minor second: 16:15, major seventh: 15:8) played back in an isochronous sequence where the inter-onset interval was 0.6 s. Sounds were played for the same duration (0.6 s) with a decreasing amplitude envelope (see Fig. 
<xref rid="Fig1" ref-type="fig">1</xref>
b). Sounds were created with Guitar Pro 6 software (
<ext-link ext-link-type="uri" xlink:href="http://www.guitar-pro.com/">www.guitar-pro.com/</ext-link>
) (music notation, waveform, frequency spectra and spectrogram for each sound can be seen in Fig. 
<xref rid="Fig1" ref-type="fig">1</xref>
). The stimuli were delivered using a Pure Data (
<ext-link ext-link-type="uri" xlink:href="http://puredata.info/">http://puredata.info/</ext-link>
) patch.
<fig id="Fig1">
<label>Fig. 1</label>
<caption>
<p>
<bold>a</bold>
Musical notation.
<bold>b</bold>
Waveform.
<bold>c</bold>
Frequency spectra.
<bold>d</bold>
Spectrograms for the four chords (two consonant and two dissonant) used in the study</p>
</caption>
<graphic xlink:href="221_2015_4233_Fig1_HTML" id="MO1"></graphic>
</fig>
</p>
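As a rough illustration of how such dyads can be constructed, here is a minimal Python/NumPy sketch. It is not the authors' Guitar Pro pipeline: the C4 root frequency, the harmonic roll-off and the exponential decay constant are illustrative assumptions; only the frequency ratios and the 0.6 s duration come from the text.

    import numpy as np

    FS = 44100        # audio sampling rate (Hz), assumed
    DUR = 0.6         # tone duration (s), matching the 0.6 s inter-onset interval
    ROOT = 261.63     # assumed root note (C4), Hz

    INTERVALS = {     # frequency ratios given in the text
        "perfect_fifth": 3 / 2,
        "perfect_fourth": 4 / 3,
        "major_seventh": 15 / 8,
        "minor_second": 16 / 15,
    }

    def tone(freq, t, n_harmonics=6):
        # complex tone with a 1/n harmonic roll-off (illustrative timbre)
        return sum(np.sin(2 * np.pi * freq * n * t) / n
                   for n in range(1, n_harmonics + 1))

    def dyad(ratio):
        t = np.arange(int(FS * DUR)) / FS
        envelope = np.exp(-4 * t)                    # decreasing amplitude envelope
        mix = (tone(ROOT, t) + tone(ROOT * ratio, t)) * envelope
        return mix / np.max(np.abs(mix))             # normalize to [-1, 1]

    stimuli = {name: dyad(r) for name, r in INTERVALS.items()}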
<p>Participants were asked to sit in front of the table so that the sagittal plane line of their right arm bisected the horizontal plane midway between the two targets. The experimental setup is shown in Fig. 
<xref rid="Fig2" ref-type="fig">2</xref>
. Targets were printed on laminated A4 paper. The targets were two 5 × 21 cm black-colored blocks, separated by a white gap of 20 cm. Participants wore a thimble on their right index finger with a mounted reflective marker. Motion data were recorded using 3 Qualisys Oqus 300 Motion Capture cameras connected to a Dell PC running QTM software, sampling at 500 Hz with a spatial accuracy of ±0.1 mm. Before the start of each trial, the coordinates of the target zones were recorded so that the positional data could be calibrated with respect to target position. Motion capture data were synchronized with the sounds presented using the Open Sound Control (
<ext-link ext-link-type="uri" xlink:href="http://opensoundcontrol.org/">http://opensoundcontrol.org/</ext-link>
) protocol.
<fig id="Fig2">
<label>Fig. 2</label>
<caption>
<p>Illustration of the experimental setup where the two
<italic>black rectangles</italic>
represent the target zones. The duration of the inter-stimulus interval is represented as the temporal gap on the diagram</p>
</caption>
<graphic xlink:href="221_2015_4233_Fig2_HTML" id="MO2"></graphic>
</fig>
</p>
</sec>
<sec id="Sec5">
<title>Procedure</title>
<p>For all trials, participants were given specific instructions to slide their right index finger between the two target zones in such a way that the stopping of movement in the target zone coincided with the sounding of the metronome beats (synchronization phase). Hence, both the beginning and the end of each movement were defined as the moment when the hand stopped in the target zones. They were also asked to continue moving between the target zones after the metronome had stopped sounding, maintaining the same interval duration between each movement (the continuation phase), until they were instructed to stop moving by the experimenter (see Fig. 
<xref rid="Fig2" ref-type="fig">2</xref>
). At the start of each block, participants were presented with 10 repetitions of each sound type so that they could become familiar with the interval duration. Each participant took part in a single session comprising five blocks of four conditions (four sounds:
<italic>perfect fifth, perfect fourth, major seventh</italic>
and
<italic>minor second</italic>
). For each condition, in both the synchronization and the continuation phases, 30 interceptive movements to the targets were recorded (15 to the left side and 15 to the right side). The presentation of the experimental conditions was counterbalanced across participants.</p>
<p>After the synchronization part of the experiment was completed, behavioral valence judgments of the consonant and dissonant sounds (pleasantness/unpleasantness) were measured using a rating scale paradigm. The four stimuli used in the experiment (perfect fourth, perfect fifth, major seventh and minor second) were presented to each participant at an intensity of 68 dB through headphones for 4 s. After the presentation of each sound, individuals were asked to rate the valence/pleasantness of each stimulus on a 5-point rating scale where “1” indicated very unpleasant and “5” indicated very pleasant.</p>
</sec>
<sec id="Sec6">
<title>Data analysis</title>
<p>Temporal control of movement was analyzed by examining both the timing and movement trajectory formation (absolute synchronization errors, spread of error, movement harmonicity and tau-guide coupling). Using MATLAB, positional data were filtered using an eighth-order low-pass Butterworth filter with a cutoff frequency of 20 Hz (The Mathworks Inc. 2011). The velocity profile was calculated using the first derivative of the smoothed positional data. Synchronization was determined as the point when the finger stopped moving. The moment representing the end of the finger movement was taken as the first sample that dropped below 5 % of peak velocity for that particular interceptive movement to the target zone. Descriptions of the calculations for each measure are given below.</p>
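A minimal sketch of this preprocessing, transposed from MATLAB to Python/SciPy; the zero-phase filtfilt call, the function name and the 1-D position trace are assumptions, not the authors' code.

    import numpy as np
    from scipy.signal import butter, filtfilt

    FS = 500.0  # motion-capture sampling rate (Hz)

    def movement_end_index(position, fs=FS):
        # eighth-order low-pass Butterworth at 20 Hz, applied zero-phase (assumed)
        b, a = butter(8, 20.0 / (fs / 2.0), btype="low")
        pos = filtfilt(b, a, np.asarray(position, dtype=float))
        speed = np.abs(np.gradient(pos) * fs)   # first derivative of position
        peak = speed.argmax()
        # first sample after the peak that drops below 5 % of peak velocity
        below = np.nonzero(speed[peak:] < 0.05 * speed.max())[0]
        return peak + below[0] if below.size else len(speed) - 1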
<sec id="Sec7">
<title>Absolute synchronization errors</title>
<p>Absolute synchronization errors between a participant’s finger movements and the auditory guides were measured for each movement as an absolute difference between the time of auditory stimulus onset and the time when the finger stopped in the target zone. The beats sounded for the same duration as the inter-stimulus interval (0.6 s) with a decreasing amplitude envelope. Synchronization was assumed to be possible, as the beats (chords) had a clear amplitude onset, which has been shown in previous studies to perceptually demarcate the beginning of an auditory event (Phillips et al.
<xref ref-type="bibr" rid="CR33">2002</xref>
).</p>
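In code form the measure is a one-liner; the array names below (paired beat-onset and finger-stop times, in seconds) are hypothetical.

    import numpy as np

    def absolute_sync_errors(beat_onsets, stop_times):
        # |finger-stop time - beat-onset time| for each movement
        return np.abs(np.asarray(stop_times) - np.asarray(beat_onsets))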
</sec>
<sec id="Sec8">
<title>Spread of error</title>
<p>The variability of the synchronization error between consecutive movements for each trial was measured using the spread of error calculation described by Bieńkiewicz et al. (
<xref ref-type="bibr" rid="CR4">2012</xref>
). It was measured as the absolute difference between the synchronization errors (with respect to beat onset) made in consecutive movements.
<disp-formula id="Equ1">
<label>1</label>
<alternatives>
<tex-math id="M1">\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$${\text{SpE}} = \frac{{\mathop \sum \nolimits_{i = 1}^{N} |T_{i + 1} - T_{i} |}}{N - 1}$$\end{document}</tex-math>
<mml:math id="M2" display="block">
<mml:mrow>
<mml:mtext>SpE</mml:mtext>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mo></mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>N</mml:mi>
</mml:msubsup>
<mml:mrow>
<mml:mo stretchy="false">|</mml:mo>
<mml:msub>
<mml:mi>T</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>-</mml:mo>
<mml:msub>
<mml:mi>T</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo stretchy="false">|</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo>-</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:math>
<graphic xlink:href="221_2015_4233_Article_Equ1.gif" position="anchor"></graphic>
</alternatives>
</disp-formula>
where SpE is average spread error, T is temporal error (the difference in time between the onset of the auditory stimulus and the moment the finger stopped in the target zone) and N is the overall number of trials.</p>
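A direct transcription of Eq. (1), assuming T holds the sequence of synchronization errors for one trial:

    import numpy as np

    def spread_of_error(T):
        # mean absolute difference between consecutive errors:
        # sum |T_{i+1} - T_i| / (N - 1)
        return np.abs(np.diff(np.asarray(T, dtype=float))).mean()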
</sec>
<sec id="Sec9">
<title>Movement harmonicity</title>
<p>The harmonicity of the movement (a measure of how sinusoidal the dynamics of individual movements are) was calculated by the formula used in Rodger and Craig (
<xref ref-type="bibr" rid="CR35">2011</xref>
). This was calculated by normalizing the absolute velocity profile for each movement so that it fell between 0 and 1 and then interpolating to give 101 data points. The index of circularity was measured by calculating the root mean square error (RMSE) between the normalized velocity–displacement profile and a semicircle and subtracting it from 1 (1 − RMSE). The semicircle consists of 101 points given by
<disp-formula id="Equa">
<alternatives>
<tex-math id="M3">\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$f\left( x \right) = \, 2 \times \surd \left( {x \times \left( {1 - x} \right)} \right),\quad {\text{where }}x \, = \left\{ {0,0.01,0.02, \ldots ,1} \right\}$$\end{document}</tex-math>
<mml:math id="M4" display="block">
<mml:mrow>
<mml:mi>f</mml:mi>
<mml:mfenced close=")" open="(">
<mml:mi>x</mml:mi>
</mml:mfenced>
<mml:mo>=</mml:mo>
<mml:mspace width="0.166667em"></mml:mspace>
<mml:mn>2</mml:mn>
<mml:mo>×</mml:mo>
<mml:mi></mml:mi>
<mml:mfenced close=")" open="(" separators="">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>×</mml:mo>
<mml:mfenced close=")" open="(" separators="">
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>-</mml:mo>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
<mml:mo>,</mml:mo>
<mml:mspace width="1em"></mml:mspace>
<mml:mrow>
<mml:mtext>where</mml:mtext>
<mml:mspace width="0.333333em"></mml:mspace>
</mml:mrow>
<mml:mi>x</mml:mi>
<mml:mspace width="0.166667em"></mml:mspace>
<mml:mo>=</mml:mo>
<mml:mfenced close="}" open="{" separators="">
<mml:mrow>
<mml:mn>0</mml:mn>
<mml:mo>,</mml:mo>
<mml:mn>0.01</mml:mn>
<mml:mo>,</mml:mo>
<mml:mn>0.02</mml:mn>
<mml:mo>,</mml:mo>
<mml:mo></mml:mo>
<mml:mo>,</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:math>
<graphic xlink:href="221_2015_4233_Article_Equa.gif" position="anchor"></graphic>
</alternatives>
</disp-formula>
</p>
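The sketch below implements this index under the assumption that each movement is supplied as matched displacement and velocity samples, with displacement increasing monotonically over the movement (np.interp requires a monotone x-axis); the function name is illustrative.

    import numpy as np

    def circularity(displacement, velocity):
        d = np.asarray(displacement, dtype=float)
        v = np.abs(np.asarray(velocity, dtype=float))
        d = (d - d.min()) / (d.max() - d.min())   # normalize both axes to [0, 1]
        v = (v - v.min()) / (v.max() - v.min())
        x = np.linspace(0.0, 1.0, 101)
        profile = np.interp(x, d, v)              # 101-point velocity-displacement profile
        semicircle = 2.0 * np.sqrt(x * (1.0 - x)) # f(x) = 2 * sqrt(x * (1 - x))
        rmse = np.sqrt(np.mean((profile - semicircle) ** 2))
        return 1.0 - rmse                         # index of circularity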
</sec>
<sec id="Sec10">
<title>Tau-guide coupling</title>
<p>Finally, we tested a model derived from tau-coupling theory (Craig et al.
<xref ref-type="bibr" rid="CR9">2005</xref>
). According to this theory, in order to synchronize movements with auditory beats, one would need to couple the temporal control of movement, or tau of the motion gap
<italic>x</italic>
, (
<italic>τ</italic>
<sub>
<italic>X</italic>
(
<italic>t</italic>
)</sub>
) onto an internal tau-guide that specifies the time-to-sounding of the next beat (
<italic>τ</italic>
<sub>
<italic>g</italic>
(
<italic>t</italic>
)</sub>
) at a constant ratio (
<italic>k</italic>
) so that
<disp-formula id="Equ2">
<label>2</label>
<alternatives>
<tex-math id="M5">\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\tau_{X(t)} = k\tau_{g(t)}$$\end{document}</tex-math>
<mml:math id="M6" display="block">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="italic">τ</mml:mi>
<mml:mrow>
<mml:mi>X</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mi>k</mml:mi>
<mml:msub>
<mml:mi mathvariant="italic">τ</mml:mi>
<mml:mrow>
<mml:mi>g</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
<graphic xlink:href="221_2015_4233_Article_Equ2.gif" position="anchor"></graphic>
</alternatives>
</disp-formula>
</p>
<p>The time to closure of a motion gap, tau
<italic>x</italic>
(
<italic>τ</italic>
<sub>
<italic>X</italic>
(
<italic>t</italic>
)</sub>
), specifies the way the movement changes over time and is defined as the ratio between the magnitude of the action displacement gap and its current rate of closure:
<italic>X</italic>
(displacement)/
<italic></italic>
(velocity). The intrinsic tau-guide,
<italic>τ</italic>
<sub>
<italic>g</italic>
(
<italic>t</italic>
)</sub>
, is derived from Newton’s equations of motion and represents the time to gap closure of a virtual object moving under constant acceleration (Lee
<xref ref-type="bibr" rid="CR25">1998</xref>
),
<disp-formula id="Equ3">
<label>3</label>
<alternatives>
<tex-math id="M7">\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\tau_{g(t)} = \frac{1}{2} \times \left(t - \frac{{T^{2} }}{t}\right)$$\end{document}</tex-math>
<mml:math id="M8" display="block">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="italic">τ</mml:mi>
<mml:mrow>
<mml:mi>g</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mn>2</mml:mn>
</mml:mfrac>
<mml:mo>×</mml:mo>
<mml:mfenced close=")" open="(" separators="">
<mml:mi>t</mml:mi>
<mml:mo>-</mml:mo>
<mml:mfrac>
<mml:msup>
<mml:mi>T</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mi>t</mml:mi>
</mml:mfrac>
</mml:mfenced>
</mml:mrow>
</mml:math>
<graphic xlink:href="221_2015_4233_Article_Equ3.gif" position="anchor"></graphic>
</alternatives>
</disp-formula>
with
<italic>T</italic>
being equal to the inter-beat interval (0.6 s) and
<italic>t</italic>
the evolving time series within the inter-beat interval. The value
<italic>k</italic>
is the coupling constant that captures the dynamics of gap closure with different
<italic>k</italic>
values corresponding to different velocity profiles (Craig et al.
<xref ref-type="bibr" rid="CR9">2005</xref>
). In order to find the strength of coupling, the tau of the movement was linearly regressed against the hypothetical tau-G guide and the strength of the coupling was calculated by the
<italic>r</italic>
-squared values of the regression analysis, with higher r-squared values indicating a stronger coupling (see Fig. 
<xref rid="Fig3" ref-type="fig">3</xref>
).</p>
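Putting Eqs. (2) and (3) together, the analysis can be sketched as follows; the per-movement input arrays (gap, closure rate, and time within the inter-beat interval, with t > 0 so the guide is defined) are assumptions, and np.polyfit stands in for whatever regression routine the authors used.

    import numpy as np

    T = 0.6  # inter-beat interval (s)

    def tau_guide(t, T=T):
        # Eq. (3): time-to-closure of a gap under constant acceleration
        return 0.5 * (t - T**2 / t)

    def tau_coupling(gap, gap_rate, t):
        tau_x = np.asarray(gap, float) / np.asarray(gap_rate, float)  # X / Xdot
        tau_g = tau_guide(np.asarray(t, float))
        k, intercept = np.polyfit(tau_g, tau_x, 1)   # Eq. (2): tau_x = k * tau_g
        pred = k * tau_g + intercept
        r2 = 1.0 - (np.sum((tau_x - pred) ** 2)
                    / np.sum((tau_x - tau_x.mean()) ** 2))
        return k, r2   # coupling constant and coupling strength (R-squared)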
</sec>
</sec>
<sec id="Sec11">
<title>Statistical analysis</title>
<sec id="Sec12">
<title>Kinematic data</title>
<p>Two-way repeated-measures ANOVAs [2 sounds (
<italic>consonant</italic>
and
<italic>dissonant</italic>
) × 2
<italic>task phase</italic>
(
<italic>synchronization</italic>
and
<italic>continuation</italic>
)] were carried out on each of the five different variables. Post hoc comparisons were performed by means of
<italic>t</italic>
tests applying a Bonferroni correction for multiple comparisons when required. A partial eta-squared statistic served as the effect size estimate.
<fig id="Fig3">
<label>Fig. 3</label>
<caption>
<p>Examples of tau coupling between the tau of the movement gap and the intrinsic tau-guide. The
<italic>R</italic>
<sup>2</sup>
values displayed in the top left corner are the linear regression coefficients and
<italic>k</italic>
values are the coupling constants</p>
</caption>
<graphic xlink:href="221_2015_4233_Fig3_HTML" id="MO7"></graphic>
</fig>
</p>
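A sketch of this ANOVA with statsmodels, assuming a long-format DataFrame with columns participant, sound and phase plus one column per dependent variable (hypothetical names); AnovaRM reports F and p, so partial eta-squared is recovered from F and the degrees of freedom.

    from statsmodels.stats.anova import AnovaRM

    def rm_anova(df, dv):
        # 2 (sound) x 2 (phase) repeated-measures ANOVA; df must contain exactly
        # one value of dv per participant x sound x phase cell
        table = AnovaRM(df, depvar=dv, subject="participant",
                        within=["sound", "phase"]).fit().anova_table
        # partial eta-squared = F * df1 / (F * df1 + df2)
        table["eta_sq_p"] = (table["F Value"] * table["Num DF"]) / (
            table["F Value"] * table["Num DF"] + table["Den DF"])
        return table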
</sec>
<sec id="Sec13">
<title>Behavioral data</title>
<p>A paired sample
<italic>t</italic>
test was used to examine the difference between the mean rating of pleasantness for consonant and dissonant sounds. A Cohen’s d statistic was also used as an effect size estimate.</p>
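A sketch of this comparison with SciPy; the paired Cohen's d convention used here (mean difference over the SD of the differences) is one common choice and an assumption, since the paper does not state which variant it used.

    import numpy as np
    from scipy.stats import ttest_rel

    def paired_valence_test(consonant_ratings, dissonant_ratings):
        t, p = ttest_rel(consonant_ratings, dissonant_ratings)
        diff = (np.asarray(consonant_ratings, float)
                - np.asarray(dissonant_ratings, float))
        d = diff.mean() / diff.std(ddof=1)   # paired Cohen's d (assumed convention)
        return t, p, d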
</sec>
</sec>
</sec>
<sec id="Sec14" sec-type="results">
<title>Results</title>
<sec id="Sec15">
<title>Behavioral valence ratings of consonance and dissonance</title>
<p>The average behavioral valence ratings for pleasantness for the four stimuli were found to be higher for the consonant (4.30 ± 0.23 for perfect fifth, 3.61 ± 0.26 for perfect fourth) compared with dissonant sounds (2.69 ± 0.22 for major seventh and 1.92 ± 0.22 for minor second). The ordering of consonance observed here is consistent with previous reports of pleasantness ratings of musical intervals (e.g., Bidelman and Krishnan
<xref ref-type="bibr" rid="CR2">2009</xref>
,
<xref ref-type="bibr" rid="CR3">2011</xref>
; Bidelman and Heinz
<xref ref-type="bibr" rid="CR1">2011</xref>
; Schwartz et al.
<xref ref-type="bibr" rid="CR40">2003</xref>
). A paired
<italic>t</italic>
test showed that this difference in perceived pleasantness between the consonant sounds and dissonant sounds was significant (
<italic>t</italic>
<sub>(12)</sub>
 = 5.133,
<italic>p</italic>
 < 0.001, Cohen’s d = 22.09).</p>
</sec>
<sec id="Sec16">
<title>Kinematic data</title>
<sec id="Sec17">
<title>Absolute Synchronization Error</title>
<p>We found a significant main effect for
<italic>sounds</italic>
(
<italic>F</italic>
<sub>1, 12</sub>
 = 23.397,
<italic>p</italic>
 < 0.001,
<italic>η</italic>
<sup>2</sup>
 = 0.661) with the absolute synchronization errors for dissonant sounds being significantly larger when compared to consonant sounds. This indicates that performance at matching the specified timing was superior for the consonant compared with the dissonant metronome. Moreover, we found a significant main effect for
<italic>task phase</italic>
(
<italic>F</italic>
<sub>1, 12</sub>
 = 6.037,
<italic>p</italic>
 = 0.03,
<italic>η</italic>
<sup>2</sup>
 = 0.335), with the absolute synchronization errors being significantly larger for the continuation compared with the synchronization movements. The interaction between
<italic>sounds</italic>
and
<italic>task</italic>
<italic>phase</italic>
was also significant (
<italic>F</italic>
<sub>1, 12</sub>
 = 15.716,
<italic>p</italic>
 = 0.002,
<italic>η</italic>
<sup>2</sup>
 = 0.567). The
<italic>t</italic>
test revealed that for the dissonant sounds the absolute synchronization errors were greater during the continuation conditions compared with the synchronization conditions (
<italic>p</italic>
 = 0.007) with errors in the continuation dissonant condition being greater than the consonant one (
<italic>p</italic>
 < 0.001) (see Fig. 
<xref rid="Fig4" ref-type="fig">4</xref>
).
<fig id="Fig4">
<label>Fig. 4</label>
<caption>
<p>Absolute synchronization error means averaged across all 13 participants for both sound conditions (consonant and dissonant) in the two different stimuli presentation conditions (synchronization and continuation).
<italic>Error bars</italic>
denote standard errors. Significant comparisons between conditions are highlighted using an
<italic>asterisk</italic>
(*
<italic>p</italic>
 < 0.05)</p>
</caption>
<graphic xlink:href="221_2015_4233_Fig4_HTML" id="MO8"></graphic>
</fig>
</p>
</sec>
<sec id="Sec18">
<title>Spread of error</title>
<p>An analysis of the spread of errors showed a significant main effect for
<italic>sounds</italic>
(
<italic>F</italic>
<sub>1, 12</sub>
 = 43.441,
<italic>p</italic>
 < 0.001,
<italic>η</italic>
<sup>2</sup>
 = 0.784). The timing variability, as measured by the spread of errors, was significantly greater for dissonant compared with consonant sounds. A significant main effect of
<italic>task phase</italic>
was also found (
<italic>F</italic>
<sub>1, 12</sub>
 = 10.503,
<italic>p</italic>
 = 0.007,
<italic>η</italic>
<sup>2</sup>
 = 0.467) where the spread of error was significantly larger for continuation compared with synchronization phases.</p>
<p>The interaction between
<italic>sounds</italic>
and
<italic>task phase</italic>
was also significant (
<italic>F</italic>
<sub>1, 12</sub>
 = 85.452,
<italic>p</italic>
 < 0.001,
<italic>η</italic>
<sup>2</sup>
 = 0.877). The
<italic>t</italic>
test revealed that for both consonant and dissonant intervals the spread of error was significantly larger during the continuation compared with the synchronization conditions (
<italic>p</italic>
 < 0.001). During the continuation movements, the spread of error was significantly greater for dissonant compared with consonant sounds (
<italic>p</italic>
 < 0.001) (see Fig. 
<xref rid="Fig5" ref-type="fig">5</xref>
).
<fig id="Fig5">
<label>Fig. 5</label>
<caption>
<p>Spread of error averaged across all 13 participants for both consonant and dissonant conditions in the two different stimuli presentation conditions (synchronization and continuation).
<italic>Error bars</italic>
denote standard errors. Significant comparisons between conditions are highlighted with an
<italic>asterisk</italic>
(*
<italic>p</italic>
 < 0.05)</p>
</caption>
<graphic xlink:href="221_2015_4233_Fig5_HTML" id="MO9"></graphic>
</fig>
</p>
</sec>
<sec id="Sec19">
<title>Circularity index</title>
<p>To understand whether the synchronization movements with consonant and dissonant intervals gave rise to different movement trajectory forms, we carried out an analysis of movement harmonicity. Movement harmonicity can be quantified through a circularity index, defined as one minus the RMSE between the normalized velocity profile and that of perfect harmonic (sinusoidal) motion (semicircle with blue dots in Fig. 
<xref rid="Fig7" ref-type="fig">7</xref>
b). A perfect circular motion therefore yields a circularity index of one. Discrepancies in the degree of harmonicity between conditions (consonant/dissonant) would reveal that the dynamics underlying the movement are influenced by the structure of the sound stimuli.</p>
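<p>A minimal sketch, assuming a single movement cycle sampled as displacement over time, of how such a circularity index could be computed; the normalization choices (displacement rescaled to [-1, 1], speed to a peak of 1) are assumptions based on the description above.</p>
<preformat>
import numpy as np

def circularity_index(x, t):
    """One minus the RMSE between the normalized phase-plane velocity
    profile and that of perfect sinusoidal motion (a semicircle)."""
    v = np.gradient(x, t)
    # Normalize displacement to [-1, 1] and speed to a peak of 1.
    x_n = 2 * (x - x.min()) / (x.max() - x.min()) - 1
    v_n = np.abs(v) / np.abs(v).max()
    ideal = np.sqrt(np.clip(1 - x_n**2, 0, None))  # semicircular profile
    rmse = np.sqrt(np.mean((v_n - ideal) ** 2))
    return 1 - rmse

# Perfect sinusoidal motion scores close to 1.
t = np.linspace(0, 0.6, 200)
x = np.cos(np.pi * t / 0.6)
print(circularity_index(x, t))
</preformat>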
<p>The results revealed a significant main effect for
<italic>sound</italic>
(
<italic>F</italic>
<sub>1, 12</sub>
 = 9.419,
<italic>p</italic>
 = 0.01,
<italic>η</italic>
<sup>2</sup>
 = 0.44), with the dynamics of movements to consonant sounds being significantly more harmonic (larger circularity index) than those of movements to dissonant sounds. The main effect of
<italic>task</italic>
phase was also significant (
<italic>F</italic>
<sub>1, 12</sub>
 = 5.433,
<italic>p</italic>
 = 0.038,
<italic>η</italic>
<sup>2</sup>
 = 0.312) with movements being found to be more circular during the synchronization compared with the continuation phases. The interaction between
<italic>sounds</italic>
and
<italic>task phase</italic>
was also found to be significant (
<italic>F</italic>
<sub>1, 12</sub>
 = 10.392,
<italic>p</italic>
 = 0.007,
<italic>η</italic>
<sup>2</sup>
 = 0.464). The
<italic>t</italic>
test revealed that for the dissonant intervals the harmonicity of movement was significantly greater (larger circularity index) during the synchronization compared with the continuation phases (
<italic>p</italic>
 = 0.015). Moreover, when moving in the continuation phase to the memory of the metronome, the harmonicity of movements was found to be greater (the circularity index was larger) with consonant compared with dissonant sounds (
<italic>p</italic>
 = 0.008) (see Figs. 
<xref rid="Fig6" ref-type="fig">6</xref>
,
<xref rid="Fig7" ref-type="fig">7</xref>
).
<fig id="Fig6">
<label>Fig. 6</label>
<caption>
<p>Circularity index averaged across all 13 participants for both consonant and dissonant conditions in the two different stimuli presentation conditions (synchronization and continuation).
<italic>Error bars</italic>
denote standard errors. Significant comparisons between conditions are highlighted with an
<italic>asterisk</italic>
(*
<italic>p</italic>
 < 0.05)</p>
</caption>
<graphic xlink:href="221_2015_4233_Fig6_HTML" id="MO10"></graphic>
</fig>
<fig id="Fig7">
<label>Fig. 7</label>
<caption>
<p>
<bold>a</bold>
Average circularity index for all 13 participants for both consonant and dissonant conditions during the continuation phase.
<bold>b</bold>
Averaged data from two participants (6 and 8) when moving with consonant and dissonant metronomes during the continuation phase. The averaged normalized velocity profile is plotted against normalized displacement, and the shaded regions around the velocity profiles represent
<italic>error bars</italic>
(SEM). For subject number 6, movements are more circular in form for consonant (
<italic>red dots</italic>
) than dissonant intervals (
<italic>black dots</italic>
), while subject number 8 showed a similar pattern of movement circularity for both intervals. The
<italic>blue dots</italic>
indicate the velocity profile of a perfect sinusoidal movement (color figure online)</p>
</caption>
<graphic xlink:href="221_2015_4233_Fig7_HTML" id="MO11"></graphic>
</fig>
</p>
</sec>
<sec id="Sec20">
<title>Tau-G coupling</title>
<p>To understand how the type of information presented through the stimuli might be affecting the subsequent movement, we carried out an information–movement analysis using the tau-coupling model. The intrinsic tau-guide is a mathematical description of how the time to the next beat could be represented by neural structures (Craig et al.
<xref ref-type="bibr" rid="CR9">2005</xref>
). The form of the guide is prospective in nature, allowing for the regulation of action. This part of the analysis allows us to see whether the type of information presented (consonant/dissonant) affects the neural representation of the time between beats and the subsequent resonance of that interval.</p>
<p>To test this, we examined the extent of the coupling between the movement and the information (the intrinsic tau-guide) (see Fig. 
<xref rid="Fig3" ref-type="fig">3</xref>
). We measured the strength of coupling (
<italic>r</italic>
-squared values) when the tau of the movement was plotted against the intrinsic tau-guide (the neural information specifying the time to the next beat).
<italic>R</italic>
<sup>2</sup>
values from the tau-coupling regression analysis were calculated for each movement and then averaged for each condition and participant. A significant main effect for
<italic>sound</italic>
was found (
<italic>F</italic>
<sub>1, 12</sub>
 = 7.666,
<italic>p</italic>
= 0.017,
<italic>η</italic>
<sup>2</sup>
 = 0.39) with the
<italic>r</italic>
-squared values for the tau coupling being significantly greater for the consonant sounds compared with the dissonant sounds. In addition, we found a significant main effect for
<italic>task phase</italic>
(
<italic>F</italic>
<sub>1, 12</sub>
 = 8.151,
<italic>p</italic>
 = 0.014,
<italic>η</italic>
<sup>2</sup>
 = 0.404) with tau-coupling
<italic>r</italic>
-squared values being significantly higher during the synchronization compared with the continuation phases. Moreover, the interaction between
<italic>sounds</italic>
and
<italic>task phase</italic>
was also significant (
<italic>F</italic>
<sub>1, 12</sub>
 = 9.151,
<italic>p</italic>
 = 0.011,
<italic>η</italic>
<sup>2</sup>
 = 0.433). The
<italic>t</italic>
test revealed that for the dissonant intervals the degree of tau-G coupling was significantly stronger during the synchronization condition as compared to the continuation phase (
<italic>p</italic>
 = 0.005). Moreover, during the continuation phase, the extent of the coupling between the movements and the guide was significantly greater for consonant compared with dissonant sounds (
<italic>p</italic>
 = 0.013) (see Fig. 
<xref rid="Fig8" ref-type="fig">8</xref>
).
<fig id="Fig8">
<label>Fig. 8</label>
<caption>
<p>
<italic>R</italic>
<sup>2</sup>
values from the tau-coupling regression analysis were averaged across all 13 participants for both consonant and dissonant conditions in the two different stimuli presentation conditions (synchronization and continuation).
<italic>Error bars</italic>
denote standard errors. Significant comparisons between conditions are indicated with an
<italic>asterisk</italic>
(*
<italic>p</italic>
 < 0.05)</p>
</caption>
<graphic xlink:href="221_2015_4233_Fig8_HTML" id="MO12"></graphic>
</fig>
</p>
</sec>
</sec>
</sec>
<sec id="Sec21">
<title>Discussion
<xref ref-type="fn" rid="Fn1">1</xref>
</title>
<p>In this study, we showed that the degree of consonance of the sound presented influenced the types of movement produced after the sound stimulus was removed and the participant continued moving between the two target zones at the same tempo, despite the absence of a metronome. The movement performance measured after exposure to a consonant as compared to a dissonant metronome was found to be less variable and more precise, with a higher percentage of information/movement coupling (tau coupling) and a higher degree of movement circularity (indicating a smoother oscillatory motion). This result suggests that the internal neural resonance of the sound just heard is more accurate when the sound is consonant than when it is dissonant, resulting in better guidance of the movement, which gives rise to more stable movement patterns. If this is the case, then an internal clock model such as the Wing and Kristofferson model (
<xref ref-type="bibr" rid="CR51">1973</xref>
) should also consider the multiple aspects present in the structure of auditory cues (e.g., consonant/dissonant pitch intervals). It is worth noting that, in the synchronization phase, when participants were moving under the continual guidance of a metronome, no difference between consonant and dissonant sounds was present either for accuracy or for variability. These results suggest that the continual metronome beat leads to the production of a metric pattern of movement that is independent of the harmonic content of the sounds.</p>
<p>The consonant and dissonant intervals also had an effect on movement harmonicity, with consonant intervals resulting in more sinusoidal movements than dissonant ones, a difference that was again more evident during the continuation phase. Rodger and Craig (
<xref ref-type="bibr" rid="CR35">2011</xref>
) already showed that the dynamics of synchronizing movements with continuous sounds were more circular when compared to discrete sounds. Here, our results reinforce the idea that the degree of consonance of sounds influences the shape of oscillatory movements between target zones even during un-paced movement. This result highlights how the level of consonance of the inter-beat intervals plays an important role in governing the pattern of movement even when the auditory guide is no longer present. This suggests that when moving with consonant and dissonant time intervals, the neural structures representing the demarcation of time resonate internally in different ways.</p>
<p>By testing the tau-coupling theory, we found that presenting dissonant intervals leads to a marked decline in the percentage of information/movement coupling. According to Craig et al. (
<xref ref-type="bibr" rid="CR9">2005</xref>
), when movements need to be synchronized with acoustic beats, the sensorimotor control of this process involves coupling the tau of the movement (the time to closure of the spatial gap at its current closure rate) onto a tau-guide (a dynamic temporal imprint of the inter-beat interval generated in the brain that continually specifies the time remaining until the next beat will sound). Based on this idea, the dynamic temporal imprint produced when listening to consonant intervals leads to a more robust temporal representation of that time interval. Having a more robust guide would allow for better action control and lead to better synchronization compared with dissonant beats. Craig et al. (
<xref ref-type="bibr" rid="CR9">2005</xref>
) also demonstrated that at certain inter-beat intervals (2.5/3 s) there was a decline in the proportion of coupling between prospective information (tau-guide) and hand movements, which resulted in a significant reduction in interceptive performance. Here, we showed that in addition to temporal information specifying the time gap between auditory beats, the context of the auditory information (i.e., the level of consonance of the intervals) also provides information that can enhance the synchronization of movement.</p>
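<p>Expressed compactly in the notation of tau theory (Lee 1998; Craig et al. 2005), this coupling keeps the tau of the movement in a constant ratio to the intrinsic tau-guide,</p>
<disp-formula><tex-math>$$\tau_{x(t)} = k \times \tau_{g(t)}$$</tex-math></disp-formula>
<p>so that the slope of the regression of movement tau on the guide estimates the coupling constant k, and the r-squared value the strength of coupling.</p>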
<p>So why does the level of consonance of musical intervals invite different movement strategies during continuation and synchronization tasks? First, it is important to recall that the differences found for consonant over dissonant sounds were particularly pronounced during the continuation phase, suggesting that the quality of a sound affects the structure of the internal dynamic temporal imprint that guides action when external stimuli are absent. A possible explanation is that during the synchronization task the stimulus duration can be repeatedly encoded while the metronome is present, allowing for a more precise reproduction of that interval duration, whereas during the continuation phase participants need to represent and reproduce the metrical pattern from memory. We hypothesized that the continuation-phase differences are due to the different emotional states evoked by the sounds (as shown by the behavioral result), which in turn affect the types of movement produced when external stimuli are absent and participants continue to move at the same rate from memory. Alternatively, these differences might be due to the differing feelings of “tension” and “resolution” evoked by dissonant and consonant musical intervals. The concept is well known in music theory: dissonant intervals increase tension and often lead to a resolution to consonant intervals, which changes the primary sensation of tension to a more stable feeling (for a review see Koelsch
<xref ref-type="bibr" rid="CR21">2014</xref>
; Lehne et al.
<xref ref-type="bibr" rid="CR26">2013</xref>
,
<xref ref-type="bibr" rid="CR27">2014</xref>
; Farbood
<xref ref-type="bibr" rid="CR12">2012</xref>
; Sorce
<xref ref-type="bibr" rid="CR42">1995</xref>
). Thus, moving under unresolved (incomplete) auditory events could lead to relatively poor timing performance during the continuation phase. Another possibility is that the perceived duration of the inter-beat interval differs between the two types of auditory events (i.e., the unpleasant beating in dissonant sounds disrupts the perception of time). Interestingly, it has been shown that the emotional valence of music modulates time perception (Droit-Volet et al.
<xref ref-type="bibr" rid="CR11">2013</xref>
). However, further experiments must be carried out to gain a better understanding of the effect of consonant and dissonant intervals on time perception. Either way, we show that the type of sound appears to affect the sensorimotor response, even though the interval duration remains the same.</p>
<p>The hierarchical ratings of consonance (i.e., “pleasantness”) and their parallel usage in music composition (Krumhansl
<xref ref-type="bibr" rid="CR24">1990</xref>
) might explain why the degrees of musical tonality affect movement time and trajectory differently in a sensorimotor continuation task. Neuroimaging studies have revealed robust differences in the processing of musical intervals at both cortical (e.g., premotor cortex: Minati et al.
<xref ref-type="bibr" rid="CR32">2009</xref>
) and subcortical levels (e.g., brainstem: Bidelman and Krishnan
<xref ref-type="bibr" rid="CR2">2009</xref>
,
<xref ref-type="bibr" rid="CR3">2011</xref>
), which would imply the involvement of networks involved in both sensory and cognitive processing. A recent review paper has extensively discussed the effects of consonant/dissonant sounds on motor processes in the brain (Koelsch
<xref ref-type="bibr" rid="CR21">2014</xref>
). Moreover, it has been suggested that the preferential encoding of consonant pitch intervals might be rooted in a more robust and coherent neuronal synchronization when compared to dissonant pitch intervals (Tramo et al.
<xref ref-type="bibr" rid="CR49">2001</xref>
; McKinney et al.
<xref ref-type="bibr" rid="CR31">2001</xref>
; Fishman et al.
<xref ref-type="bibr" rid="CR13">2001</xref>
). Importantly, Tierney and Kraus (
<xref ref-type="bibr" rid="CR46">2013</xref>
) provided evidence for a link between the ability to synchronize movements to an auditory beat and the consistency of auditory brainstem timing. Thus, a more robust and synchronous phase-locking response in the brainstem when presented with consonant rather than dissonant pitch intervals (Bidelman and Krishnan
<xref ref-type="bibr" rid="CR2">2009</xref>
,
<xref ref-type="bibr" rid="CR3">2011</xref>
) could explain the higher degree of consistency found in this study when subjects synchronized movements to consonant stimuli.</p>
<p>Further evidence suggests that both the cerebellum and the basal ganglia are cornerstones of an internal timing system (Ivry
<xref ref-type="bibr" rid="CR19">1997</xref>
; Diedrichsen et al.
<xref ref-type="bibr" rid="CR10">2003</xref>
). Recently, Claassen et al. (
<xref ref-type="bibr" rid="CR8">2013</xref>
) tested patients with cerebellar disorders (CD) and with PD, using a synchronization–continuation paradigm, to decipher the roles of the cerebellum and basal ganglia in motor timing. They found that CD participants were less accurate than PD patients during the continuation phase, suggesting a specialized role for the cerebellum in internal timing (Claassen et al.
<xref ref-type="bibr" rid="CR8">2013</xref>
). Hence, it is possible to speculate that consonant pitch intervals may activate the cerebellum more than dissonant ones, and this may account for the better and more precise clocking of fine movements. For a better understanding of this mechanism, it would be interesting to investigate how the sensorimotor system in cooperation with the auditory system extracts relevant information embedded in the musical pitch intervals to control movements in a synchronization–continuation task.</p>
<p>A better understanding of why consonant musical pitch intervals can benefit the synchronization of movement compared with their dissonant counterparts might allow us to use them as auditory guides to improve movement performance in patients with sensory–motor deficits, such as in PD (Rodger et al.
<xref ref-type="bibr" rid="CR37">2013</xref>
). It has been shown that acoustic guides for movement are beneficial in reducing spatial and temporal gait variability in PD patients (Young et al.
<xref ref-type="bibr" rid="CR54">2014</xref>
; Bieńkiewicz et al.
<xref ref-type="bibr" rid="CR5">2014</xref>
; Young et al.
<xref ref-type="bibr" rid="CR53">2013</xref>
). Moreover, the notion that different musical chords evoke different emotions, which in turn can potentially drive the generation of different movement patterns, might be applied to models of affective engagement with music involving body movement and dance. Further experimental exploration of the relationship between sensorimotor coupling with music and emotion might shed light on why some dances are set to certain kinds of music. It should be noted that the present experiment assessed perceptual–motor ability in a normal population and will be used in the future as a model for testing expert musicians. A tentative hypothesis is that expert musicians will not differ in their performance when synchronizing their movements to consonant and dissonant sound intervals. This putative result would add to our knowledge of the perceptual–motor changes that result from learning a musical instrument.</p>
</sec>
<sec id="Sec22" sec-type="conclusions">
<title>Conclusions</title>
<p>In the present study, we tested the effects of musical consonance/dissonance on sensorimotor timing in a synchronization–continuation paradigm during which participants performed reciprocal aiming movements. Remarkably, the analysis of the participants’ movements in the continuation phase revealed that, after listening to consonant as opposed to dissonant intervals, both the absolute synchronization errors and the spread of errors were smaller. Furthermore, a higher percentage of movement was tau-coupled, and a higher degree of movement circularity was found. It might be argued that musical pitch combinations altered the perceived tempo during the synchronization phase, which in turn resulted in a different regulation of motor commands during the continuation phase. Overall, the harmonic aspects of the musical structure systematically affected both movement form and timing. We believe that this research yields new insights into the nature of the innate bias that makes consonance perceptually more attractive than dissonance.</p>
</sec>
</body>
<back>
<fn-group>
<fn id="Fn1">
<label>1</label>
<p>Please note that the significant main effects found in the above statistics are not meaningful on their own in light of the significant interactions. For example, while the main effects of sound (
<italic>consonant</italic>
and
<italic>dissonant</italic>
) and task phase (
<italic>synchronization</italic>
and
<italic>continuation</italic>
) were found to be significant, the interaction between them indicates where these differences are coming from. This is why we mainly focus our discussion on the significant interactions.</p>
</fn>
</fn-group>
<ack>
<p>This study was partly supported by an ERC Starting Grant (ERC 210007) TEMPUS_G. The authors wish to thank Stefan Koelsch and one anonymous reviewer for their valuable comments and suggestions.</p>
</ack>
<ref-list id="Bib1">
<title>References</title>
<ref id="CR1">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bidelman</surname>
<given-names>GM</given-names>
</name>
<name>
<surname>Heinz</surname>
<given-names>MG</given-names>
</name>
</person-group>
<article-title>Auditory-nerve responses predict pitch attributes related to musical consonance-dissonance for normal and impaired hearing</article-title>
<source>J Acoust Soc Am</source>
<year>2011</year>
<volume>130</volume>
<fpage>1488</fpage>
<lpage>1502</lpage>
<pub-id pub-id-type="doi">10.1121/1.3605559</pub-id>
<pub-id pub-id-type="pmid">21895089</pub-id>
</element-citation>
</ref>
<ref id="CR2">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bidelman</surname>
<given-names>GM</given-names>
</name>
<name>
<surname>Krishnan</surname>
<given-names>A</given-names>
</name>
</person-group>
<article-title>Neural correlates of consonance, dissonance, and the hierarchy of musical pitch in the human brainstem</article-title>
<source>J Neurosci</source>
<year>2009</year>
<volume>29</volume>
<fpage>13165</fpage>
<lpage>13171</lpage>
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.3900-09.2009</pub-id>
<pub-id pub-id-type="pmid">19846704</pub-id>
</element-citation>
</ref>
<ref id="CR3">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bidelman</surname>
<given-names>GM</given-names>
</name>
<name>
<surname>Krishnan</surname>
<given-names>A</given-names>
</name>
</person-group>
<article-title>Brainstem correlates of behavioral and compositional preferences of musical harmony</article-title>
<source>Neuroreport</source>
<year>2011</year>
<volume>22</volume>
<fpage>212</fpage>
<lpage>216</lpage>
<pub-id pub-id-type="doi">10.1097/WNR.0b013e328344a689</pub-id>
<pub-id pub-id-type="pmid">21358554</pub-id>
</element-citation>
</ref>
<ref id="CR4">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bieńkiewicz</surname>
<given-names>MMN</given-names>
</name>
<name>
<surname>Rodger</surname>
<given-names>MWM</given-names>
</name>
<name>
<surname>Craig</surname>
<given-names>CM</given-names>
</name>
</person-group>
<article-title>Timekeeping strategies operate independently from spatial and accuracy demands in beat-interception movements</article-title>
<source>Exp Brain Res</source>
<year>2012</year>
<volume>222</volume>
<fpage>241</fpage>
<lpage>253</lpage>
<pub-id pub-id-type="doi">10.1007/s00221-012-3211-8</pub-id>
<pub-id pub-id-type="pmid">22903462</pub-id>
</element-citation>
</ref>
<ref id="CR5">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bieńkiewicz</surname>
<given-names>MMN</given-names>
</name>
<name>
<surname>Young</surname>
<given-names>W</given-names>
</name>
<name>
<surname>Craig</surname>
<given-names>CM</given-names>
</name>
</person-group>
<article-title>Balls to the wall: how acoustic information from a ball in motion guides interceptive movement in people with Parkinson’s disease</article-title>
<source>Neuroscience</source>
<year>2014</year>
<volume>275</volume>
<fpage>508</fpage>
<lpage>518</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuroscience.2014.06.050</pub-id>
<pub-id pub-id-type="pmid">24995419</pub-id>
</element-citation>
</ref>
<ref id="CR6">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Brattico</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Tervaniemi</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Näätänen</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Peretz</surname>
<given-names>I</given-names>
</name>
</person-group>
<article-title>Musical scale properties are automatically processed in the human auditory cortex</article-title>
<source>Brain Res</source>
<year>2006</year>
<volume>1117</volume>
<fpage>162</fpage>
<lpage>174</lpage>
<pub-id pub-id-type="doi">10.1016/j.brainres.2006.08.023</pub-id>
<pub-id pub-id-type="pmid">16963000</pub-id>
</element-citation>
</ref>
<ref id="CR7">
<mixed-citation publication-type="other">Carello C, Wagman J, Turvey M (2005) Acoustic specification of object properties. Mov image theory Ecol consid. Southern Illinois University Press, pp 79–104</mixed-citation>
</ref>
<ref id="CR8">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Claassen</surname>
<given-names>DO</given-names>
</name>
<name>
<surname>Jones</surname>
<given-names>CRG</given-names>
</name>
<name>
<surname>Yu</surname>
<given-names>M</given-names>
</name>
<etal></etal>
</person-group>
<article-title>Deciphering the impact of cerebellar and basal ganglia dysfunction in accuracy and variability of motor timing</article-title>
<source>Neuropsychologia</source>
<year>2013</year>
<volume>51</volume>
<fpage>267</fpage>
<lpage>274</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2012.09.018</pub-id>
<pub-id pub-id-type="pmid">23084982</pub-id>
</element-citation>
</ref>
<ref id="CR9">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Craig</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Pepping</surname>
<given-names>GJ</given-names>
</name>
<name>
<surname>Grealy</surname>
<given-names>M</given-names>
</name>
</person-group>
<article-title>Intercepting beats in predesignated target zones</article-title>
<source>Exp Brain Res</source>
<year>2005</year>
<volume>165</volume>
<fpage>490</fpage>
<lpage>504</lpage>
<pub-id pub-id-type="doi">10.1007/s00221-005-2322-x</pub-id>
<pub-id pub-id-type="pmid">15912367</pub-id>
</element-citation>
</ref>
<ref id="CR10">
<mixed-citation publication-type="other">Diedrichsen J, Ivry R, Pressing J (2003) Functional and Neural Mechanisms of Interval Timing. Meck, WH, Funct neural Mech interval timing 19:457–483. doi:10.1201/9780203009574</mixed-citation>
</ref>
<ref id="CR11">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Droit-Volet</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Ramos</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Bueno</surname>
<given-names>JLO</given-names>
</name>
<name>
<surname>Bigand</surname>
<given-names>E</given-names>
</name>
</person-group>
<article-title>Music, emotion, and time perception: The influence of subjective emotional valence and arousal?</article-title>
<source>Front Psychol</source>
<year>2013</year>
<volume>4</volume>
<fpage>417</fpage>
<pub-id pub-id-type="doi">10.3389/fpsyg.2013.00417</pub-id>
<pub-id pub-id-type="pmid">23882233</pub-id>
</element-citation>
</ref>
<ref id="CR12">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Farbood</surname>
<given-names>MM</given-names>
</name>
</person-group>
<article-title>A parametric, temporal model of musical tension</article-title>
<source>Music Percept Interdiscip J</source>
<year>2012</year>
<volume>29</volume>
<fpage>387</fpage>
<lpage>428</lpage>
<pub-id pub-id-type="doi">10.1525/mp.2012.29.4.387</pub-id>
</element-citation>
</ref>
<ref id="CR13">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fishman</surname>
<given-names>YI</given-names>
</name>
<name>
<surname>Volkov</surname>
<given-names>IO</given-names>
</name>
<name>
<surname>Noh</surname>
<given-names>MD</given-names>
</name>
<etal></etal>
</person-group>
<article-title>Consonance and dissonance of musical chords: neural correlates in auditory cortex of monkeys and humans</article-title>
<source>J Neurophysiol</source>
<year>2001</year>
<volume>86</volume>
<fpage>2761</fpage>
<lpage>2788</lpage>
<pub-id pub-id-type="pmid">11731536</pub-id>
</element-citation>
</ref>
<ref id="CR14">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Foss</surname>
<given-names>AH</given-names>
</name>
<name>
<surname>Altschuler</surname>
<given-names>EL</given-names>
</name>
<name>
<surname>James</surname>
<given-names>KH</given-names>
</name>
</person-group>
<article-title>Neural correlates of the pythagorean ratio rules</article-title>
<source>Neuroreport</source>
<year>2007</year>
<volume>18</volume>
<fpage>1521</fpage>
<lpage>1525</lpage>
<pub-id pub-id-type="doi">10.1097/WNR.0b013e3282ef6b51</pub-id>
<pub-id pub-id-type="pmid">17885594</pub-id>
</element-citation>
</ref>
<ref id="CR15">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fritz</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Jentschke</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Gosselin</surname>
<given-names>N</given-names>
</name>
<etal></etal>
</person-group>
<article-title>Universal recognition of three basic emotions in music</article-title>
<source>Curr Biol</source>
<year>2009</year>
<volume>19</volume>
<fpage>573</fpage>
<lpage>576</lpage>
<pub-id pub-id-type="doi">10.1016/j.cub.2009.02.058</pub-id>
<pub-id pub-id-type="pmid">19303300</pub-id>
</element-citation>
</ref>
<ref id="CR16">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gaver</surname>
<given-names>WW</given-names>
</name>
</person-group>
<article-title>What in the world do we hear? An ecological approach to auditory event perception</article-title>
<source>Ecol Psychol</source>
<year>1993</year>
<volume>5</volume>
<fpage>1</fpage>
<lpage>29</lpage>
<pub-id pub-id-type="doi">10.1207/s15326969eco0501_1</pub-id>
</element-citation>
</ref>
<ref id="CR17">
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Helmholtz</surname>
<given-names>H</given-names>
</name>
</person-group>
<source>On the Sensations of Tone as a physiological basis for the theory of music</source>
<year>1954</year>
<publisher-loc>New York</publisher-loc>
<publisher-name>Dover Publications</publisher-name>
</element-citation>
</ref>
<ref id="CR18">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Itoh</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Suwazono</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Nakada</surname>
<given-names>T</given-names>
</name>
</person-group>
<article-title>Central auditory processing of noncontextual consonance in music: an evoked potential study</article-title>
<source>J Acoust Soc Am</source>
<year>2010</year>
<volume>128</volume>
<fpage>3781</fpage>
<lpage>3787</lpage>
<pub-id pub-id-type="doi">10.1121/1.3500685</pub-id>
<pub-id pub-id-type="pmid">21218909</pub-id>
</element-citation>
</ref>
<ref id="CR19">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ivry</surname>
<given-names>R</given-names>
</name>
</person-group>
<article-title>Cerebellar timing systems</article-title>
<source>Int Rev Neurobiol</source>
<year>1997</year>
<volume>41</volume>
<fpage>555</fpage>
<lpage>573</lpage>
<pub-id pub-id-type="doi">10.1016/S0074-7742(08)60370-0</pub-id>
<pub-id pub-id-type="pmid">9378608</pub-id>
</element-citation>
</ref>
<ref id="CR20">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kaiser</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Keller</surname>
<given-names>PE</given-names>
</name>
</person-group>
<article-title>Music’s impact on the visual perception of emotional dyadic interactions</article-title>
<source>Music Sci</source>
<year>2011</year>
<volume>15</volume>
<fpage>270</fpage>
<lpage>287</lpage>
<pub-id pub-id-type="doi">10.1177/1029864911401173</pub-id>
</element-citation>
</ref>
<ref id="CR21">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Koelsch</surname>
<given-names>S</given-names>
</name>
</person-group>
<article-title>Brain correlates of music-evoked emotions</article-title>
<source>Nat Rev Neurosci</source>
<year>2014</year>
<volume>15</volume>
<fpage>170</fpage>
<lpage>180</lpage>
<pub-id pub-id-type="doi">10.1038/nrn3666</pub-id>
<pub-id pub-id-type="pmid">24552785</pub-id>
</element-citation>
</ref>
<ref id="CR22">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Koelsch</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Fritz</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Cramon</surname>
<given-names>V</given-names>
</name>
<name>
<surname>DY</surname>
</name>
<etal></etal>
</person-group>
<article-title>Investigating emotion with music: an fMRI study</article-title>
<source>Hum Brain Mapp</source>
<year>2006</year>
<volume>27</volume>
<fpage>239</fpage>
<lpage>250</lpage>
<pub-id pub-id-type="doi">10.1002/hbm.20180</pub-id>
<pub-id pub-id-type="pmid">16078183</pub-id>
</element-citation>
</ref>
<ref id="CR23">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Krohn</surname>
<given-names>KI</given-names>
</name>
<name>
<surname>Brattico</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Välimäki</surname>
<given-names>V</given-names>
</name>
<name>
<surname>Tervaniemi</surname>
<given-names>M</given-names>
</name>
</person-group>
<article-title>Neural representations of the hierarchical scale pitch structure</article-title>
<source>Music Percept Interdiscip J</source>
<year>2007</year>
<volume>24</volume>
<fpage>281</fpage>
<lpage>296</lpage>
<pub-id pub-id-type="doi">10.1525/mp.2007.24.3.281</pub-id>
</element-citation>
</ref>
<ref id="CR24">
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Krumhansl</surname>
<given-names>CL</given-names>
</name>
</person-group>
<source>Cognitive Foundations of Musical Pitch</source>
<year>1990</year>
<publisher-loc>New York</publisher-loc>
<publisher-name>Oxford University Press</publisher-name>
</element-citation>
</ref>
<ref id="CR25">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lee</surname>
<given-names>D</given-names>
</name>
</person-group>
<article-title>Guiding movement by coupling taus</article-title>
<source>Ecol Psychol</source>
<year>1998</year>
<volume>10</volume>
<fpage>221</fpage>
<lpage>250</lpage>
<pub-id pub-id-type="doi">10.1080/10407413.1998.9652683</pub-id>
</element-citation>
</ref>
<ref id="CR26">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lehne</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Rohrmeier</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Gollmann</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Koelsch</surname>
<given-names>S</given-names>
</name>
</person-group>
<article-title>The influence of different structural features on felt musical tension in two piano pieces by Mozart and Mendelssohn</article-title>
<source>Music Percept Interdiscip J</source>
<year>2013</year>
<volume>31</volume>
<fpage>171</fpage>
<lpage>185</lpage>
<pub-id pub-id-type="doi">10.1525/mp.2013.31.2.171</pub-id>
</element-citation>
</ref>
<ref id="CR27">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lehne</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Rohrmeier</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Koelsch</surname>
<given-names>S</given-names>
</name>
</person-group>
<article-title>Tension-related activity in the orbitofrontal cortex and amygdala: an fMRI study with music</article-title>
<source>Soc Cogn Affect Neurosci</source>
<year>2014</year>
<volume>9</volume>
<fpage>1515</fpage>
<lpage>1523</lpage>
<pub-id pub-id-type="doi">10.1093/scan/nst141</pub-id>
<pub-id pub-id-type="pmid">23974947</pub-id>
</element-citation>
</ref>
<ref id="CR28">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lim</surname>
<given-names>I</given-names>
</name>
<name>
<surname>van Wegen</surname>
<given-names>E</given-names>
</name>
<name>
<surname>de Goede</surname>
<given-names>C</given-names>
</name>
<etal></etal>
</person-group>
<article-title>Effects of external rhythmical cueing on gait in patients with Parkinson’s disease: a systematic review</article-title>
<source>Clin Rehabil</source>
<year>2005</year>
<volume>19</volume>
<fpage>695</fpage>
<lpage>713</lpage>
<pub-id pub-id-type="doi">10.1191/0269215505cr906oa</pub-id>
<pub-id pub-id-type="pmid">16250189</pub-id>
</element-citation>
</ref>
<ref id="CR29">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Masataka</surname>
<given-names>N</given-names>
</name>
</person-group>
<article-title>Preference for consonance over dissonance by hearing newborns of deaf parents and of hearing parents</article-title>
<source>Dev Sci</source>
<year>2006</year>
<volume>9</volume>
<fpage>46</fpage>
<lpage>50</lpage>
<pub-id pub-id-type="doi">10.1111/j.1467-7687.2005.00462.x</pub-id>
<pub-id pub-id-type="pmid">16445395</pub-id>
</element-citation>
</ref>
<ref id="CR30">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Maslennikova</surname>
<given-names>AV</given-names>
</name>
<name>
<surname>Varlamov</surname>
<given-names>AA</given-names>
</name>
<name>
<surname>Strelets</surname>
<given-names>VB</given-names>
</name>
</person-group>
<article-title>Evoked changes in EEG band power on perception of consonant and dissonant chords</article-title>
<source>Neurosci Behav Physiol</source>
<year>2013</year>
<volume>43</volume>
<fpage>670</fpage>
<lpage>673</lpage>
<pub-id pub-id-type="doi">10.1007/s11055-013-9790-4</pub-id>
</element-citation>
</ref>
<ref id="CR31">
<mixed-citation publication-type="other">Mckinney MF, Tramo MJ, Delgutte B (2001) Neural correl music dissonance Infer colliculus. Physiological psychophysical bases auditory function In: Breebaart DJ, Houtsma AJM, Kohlrausch A, Prijs VF, Schoonhoven R, (eds) Neural correlates of musical dissonance in the inferior colliculus, pp 83–89</mixed-citation>
</ref>
<ref id="CR32">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Minati</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Rosazza</surname>
<given-names>C</given-names>
</name>
<name>
<surname>D’Incerti</surname>
<given-names>L</given-names>
</name>
<etal></etal>
</person-group>
<article-title>Functional MRI/event-related potential study of sensory consonance and dissonance in musicians and nonmusicians</article-title>
<source>Neuroreport</source>
<year>2009</year>
<volume>20</volume>
<fpage>87</fpage>
<lpage>92</lpage>
<pub-id pub-id-type="doi">10.1097/WNR.0b013e32831af235</pub-id>
<pub-id pub-id-type="pmid">19033878</pub-id>
</element-citation>
</ref>
<ref id="CR33">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Phillips</surname>
<given-names>DP</given-names>
</name>
<name>
<surname>Hall</surname>
<given-names>SE</given-names>
</name>
<name>
<surname>Boehnke</surname>
<given-names>SE</given-names>
</name>
</person-group>
<article-title>Central auditory onset responses, and temporal asymmetries in auditory perception</article-title>
<source>Hear Res</source>
<year>2002</year>
<volume>167</volume>
<fpage>192</fpage>
<lpage>205</lpage>
<pub-id pub-id-type="doi">10.1016/S0378-5955(02)00393-3</pub-id>
<pub-id pub-id-type="pmid">12117542</pub-id>
</element-citation>
</ref>
<ref id="CR34">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Repp</surname>
<given-names>BH</given-names>
</name>
<name>
<surname>Su</surname>
<given-names>YH</given-names>
</name>
</person-group>
<article-title>Sensorimotor synchronization: a review of recent research (2006-2012)</article-title>
<source>Psychon Bull Rev</source>
<year>2013</year>
<volume>20</volume>
<fpage>403</fpage>
<lpage>452</lpage>
<pub-id pub-id-type="doi">10.3758/s13423-012-0371-2</pub-id>
<pub-id pub-id-type="pmid">23397235</pub-id>
</element-citation>
</ref>
<ref id="CR35">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rodger</surname>
<given-names>MWM</given-names>
</name>
<name>
<surname>Craig</surname>
<given-names>CM</given-names>
</name>
</person-group>
<article-title>Timing movements to interval durations specified by discrete or continuous sounds</article-title>
<source>Exp Brain Res</source>
<year>2011</year>
<volume>214</volume>
<fpage>393</fpage>
<lpage>402</lpage>
<pub-id pub-id-type="doi">10.1007/s00221-011-2837-2</pub-id>
<pub-id pub-id-type="pmid">21858501</pub-id>
</element-citation>
</ref>
<ref id="CR36">
<mixed-citation publication-type="other">Rodger MWM, Craig CM (2013)Moving with Beats and Loops : the Structure of Auditory Events and Sensorimotor Timing. Proc 10th International Symposium Computer Music Multidiscipline Research Marseille, Friday Oct 15–18, 1–13</mixed-citation>
</ref>
<ref id="CR37">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rodger</surname>
<given-names>MWM</given-names>
</name>
<name>
<surname>Young</surname>
<given-names>WR</given-names>
</name>
<name>
<surname>Craig</surname>
<given-names>CM</given-names>
</name>
</person-group>
<article-title>Synthesis of walking sounds for alleviating gait disturbances in Parkinson’s disease</article-title>
<source>IEEE Trans Neural Syst Rehabil Eng</source>
<year>2013</year>
<volume>22</volume>
<issue>3</issue>
<fpage>543</fpage>
<lpage>548</lpage>
<pub-id pub-id-type="doi">10.1109/TNSRE.2013.2285410</pub-id>
<pub-id pub-id-type="pmid">24235275</pub-id>
</element-citation>
</ref>
<ref id="CR38">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sammler</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Grigutsch</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Fritz</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Koelsch</surname>
<given-names>S</given-names>
</name>
</person-group>
<article-title>Music and emotion: electrophysiological correlates of the processing of pleasant and unpleasant music</article-title>
<source>Psychophysiology</source>
<year>2007</year>
<volume>44</volume>
<fpage>293</fpage>
<lpage>304</lpage>
<pub-id pub-id-type="doi">10.1111/j.1469-8986.2007.00497.x</pub-id>
<pub-id pub-id-type="pmid">17343712</pub-id>
</element-citation>
</ref>
<ref id="CR39">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Satoh</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Kuzuhara</surname>
<given-names>S</given-names>
</name>
</person-group>
<article-title>Training in mental singing while walking improves gait disturbance in Parkinson’s disease patients</article-title>
<source>Eur Neurol</source>
<year>2008</year>
<volume>60</volume>
<fpage>237</fpage>
<lpage>243</lpage>
<pub-id pub-id-type="doi">10.1159/000151699</pub-id>
<pub-id pub-id-type="pmid">18756088</pub-id>
</element-citation>
</ref>
<ref id="CR40">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Schwartz</surname>
<given-names>DA</given-names>
</name>
<name>
<surname>Howe</surname>
<given-names>CQ</given-names>
</name>
<name>
<surname>Purves</surname>
<given-names>D</given-names>
</name>
</person-group>
<article-title>The statistical structure of human speech sounds predicts musical universals</article-title>
<source>J Neurosci</source>
<year>2003</year>
<volume>23</volume>
<fpage>7160</fpage>
<lpage>7168</lpage>
<pub-id pub-id-type="pmid">12904476</pub-id>
</element-citation>
</ref>
<ref id="CR41">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sievers</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Polansky</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Casey</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Wheatley</surname>
<given-names>T</given-names>
</name>
</person-group>
<article-title>Music and movement share a dynamic structure that supports universal expressions of emotion</article-title>
<source>Proc Natl Acad Sci USA</source>
<year>2013</year>
<volume>110</volume>
<fpage>70</fpage>
<lpage>75</lpage>
<pub-id pub-id-type="doi">10.1073/pnas.1209023110</pub-id>
<pub-id pub-id-type="pmid">23248314</pub-id>
</element-citation>
</ref>
<ref id="CR42">
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Sorce</surname>
<given-names>R</given-names>
</name>
</person-group>
<source>Music Theory for the Music Professional</source>
<year>1995</year>
<publisher-loc>New York</publisher-loc>
<publisher-name>Ardsley House</publisher-name>
</element-citation>
</ref>
<ref id="CR43">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Styns</surname>
<given-names>F</given-names>
</name>
<name>
<surname>van Noorden</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Moelants</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Leman</surname>
<given-names>M</given-names>
</name>
</person-group>
<article-title>Walking on music</article-title>
<source>Hum Mov Sci</source>
<year>2007</year>
<volume>26</volume>
<fpage>769</fpage>
<lpage>785</lpage>
<pub-id pub-id-type="doi">10.1016/j.humov.2007.07.007</pub-id>
<pub-id pub-id-type="pmid">17910985</pub-id>
</element-citation>
</ref>
<ref id="CR44">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Thaut</surname>
<given-names>M</given-names>
</name>
</person-group>
<article-title>Music versus metronome timekeeper in a rhythmic motor task</article-title>
<source>Int J arts Med</source>
<year>1997</year>
<volume>5</volume>
<fpage>4</fpage>
<lpage>12</lpage>
</element-citation>
</ref>
<ref id="CR45">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Thaut</surname>
<given-names>MH</given-names>
</name>
<name>
<surname>Abiru</surname>
<given-names>M</given-names>
</name>
</person-group>
<article-title>Rhythmic auditory stimulation in rehabilitation of movement disorders: a review of current research</article-title>
<source>Music Percept</source>
<year>2010</year>
<volume>27</volume>
<fpage>263</fpage>
<lpage>269</lpage>
<pub-id pub-id-type="doi">10.1525/mp.2010.27.4.263</pub-id>
</element-citation>
</ref>
<ref id="CR46">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tierney</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Kraus</surname>
<given-names>N</given-names>
</name>
</person-group>
<article-title>The ability to move to a beat is linked to the consistency of neural responses to sound</article-title>
<source>J Neurosci</source>
<year>2013</year>
<volume>33</volume>
<fpage>14981</fpage>
<lpage>14988</lpage>
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.0612-13.2013</pub-id>
<pub-id pub-id-type="pmid">24048827</pub-id>
</element-citation>
</ref>
<ref id="CR47">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tillmann</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Janata</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Bharucha</surname>
<given-names>JJ</given-names>
</name>
</person-group>
<article-title>Activation of the inferior frontal cortex in musical priming</article-title>
<source>Ann NY Acad Sci</source>
<year>2003</year>
<volume>999</volume>
<fpage>209</fpage>
<lpage>211</lpage>
<pub-id pub-id-type="doi">10.1196/annals.1284.031</pub-id>
<pub-id pub-id-type="pmid">14681143</pub-id>
</element-citation>
</ref>
<ref id="CR48">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Trainor</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Tsang</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Cheung</surname>
<given-names>V</given-names>
</name>
</person-group>
<article-title>Preference for sensory consonance in 2- and 4-month-old infants</article-title>
<source>Music Percept</source>
<year>2002</year>
<volume>20</volume>
<fpage>187</fpage>
<lpage>194</lpage>
<pub-id pub-id-type="doi">10.1525/mp.2002.20.2.187</pub-id>
</element-citation>
</ref>
<ref id="CR49">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tramo</surname>
<given-names>MJ</given-names>
</name>
<name>
<surname>Cariani</surname>
<given-names>PA</given-names>
</name>
<name>
<surname>Delgutte</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Braida</surname>
<given-names>LD</given-names>
</name>
</person-group>
<article-title>Neurobiological foundations for the theory of harmony in western tonal music</article-title>
<source>Ann NY Acad Sci</source>
<year>2001</year>
<volume>930</volume>
<fpage>92</fpage>
<lpage>116</lpage>
<pub-id pub-id-type="doi">10.1111/j.1749-6632.2001.tb05727.x</pub-id>
<pub-id pub-id-type="pmid">11458869</pub-id>
</element-citation>
</ref>
<ref id="CR50">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Vos</surname>
<given-names>PG</given-names>
</name>
<name>
<surname>Troost</surname>
<given-names>JIMM</given-names>
</name>
</person-group>
<article-title>Ascending and descending melodic intervals : statistical findings and their perceptual relevance</article-title>
<source>Music Percept</source>
<year>1989</year>
<volume>6</volume>
<fpage>383</fpage>
<lpage>396</lpage>
<pub-id pub-id-type="doi">10.2307/40285439</pub-id>
</element-citation>
</ref>
<ref id="CR51">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wing</surname>
<given-names>AM</given-names>
</name>
<name>
<surname>Kristofferson</surname>
<given-names>AB</given-names>
</name>
</person-group>
<article-title>The timing of interresponse intervals</article-title>
<source>Percept Psychophys</source>
<year>1973</year>
<volume>13</volume>
<fpage>455</fpage>
<lpage>460</lpage>
<pub-id pub-id-type="doi">10.3758/BF03205802</pub-id>
</element-citation>
</ref>
<ref id="CR52">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wittwer</surname>
<given-names>JE</given-names>
</name>
<name>
<surname>Webster</surname>
<given-names>KE</given-names>
</name>
<name>
<surname>Hill</surname>
<given-names>K</given-names>
</name>
</person-group>
<article-title>Music and metronome cues produce different effects on gait spatiotemporal measures but not gait variability in healthy older adults</article-title>
<source>Gait Posture</source>
<year>2013</year>
<volume>37</volume>
<fpage>219</fpage>
<lpage>222</lpage>
<pub-id pub-id-type="doi">10.1016/j.gaitpost.2012.07.006</pub-id>
<pub-id pub-id-type="pmid">22871238</pub-id>
</element-citation>
</ref>
<ref id="CR53">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Young</surname>
<given-names>WR</given-names>
</name>
<name>
<surname>Rodger</surname>
<given-names>MWM</given-names>
</name>
<name>
<surname>Craig</surname>
<given-names>CM</given-names>
</name>
</person-group>
<article-title>Perceiving and re-enacting spatio-temporal characteristics of walking sounds</article-title>
<source>J Exp Psychol Hum Percept Perform</source>
<year>2013</year>
<volume>39</volume>
<fpage>464</fpage>
<lpage>476</lpage>
<pub-id pub-id-type="doi">10.1037/a0029402</pub-id>
<pub-id pub-id-type="pmid">22866760</pub-id>
</element-citation>
</ref>
<ref id="CR54">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Young</surname>
<given-names>WR</given-names>
</name>
<name>
<surname>Rodger</surname>
<given-names>MWM</given-names>
</name>
<name>
<surname>Craig</surname>
<given-names>CM</given-names>
</name>
</person-group>
<article-title>Auditory observation of stepping actions can cue both spatial and temporal components of gait in Parkinson's disease patients</article-title>
<source>Neuropsychologia</source>
<year>2014</year>
<volume>57</volume>
<fpage>140</fpage>
<lpage>153</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2014.03.009</pub-id>
<pub-id pub-id-type="pmid">24680722</pub-id>
</element-citation>
</ref>
<ref id="CR55">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zentner</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Eerola</surname>
<given-names>T</given-names>
</name>
</person-group>
<article-title>Rhythmic engagement with music in infancy</article-title>
<source>Proc Natl Acad Sci USA</source>
<year>2010</year>
<volume>107</volume>
<fpage>5768</fpage>
<lpage>5773</lpage>
<pub-id pub-id-type="doi">10.1073/pnas.1000121107</pub-id>
<pub-id pub-id-type="pmid">20231438</pub-id>
</element-citation>
</ref>
<ref id="CR56">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zentner</surname>
<given-names>MR</given-names>
</name>
<name>
<surname>Kagan</surname>
<given-names>J</given-names>
</name>
</person-group>
<article-title>Infants’ perception of consonance and dissonance in music</article-title>
<source>Infant Behav Dev</source>
<year>1998</year>
<volume>21</volume>
<fpage>483</fpage>
<lpage>492</lpage>
<pub-id pub-id-type="doi">10.1016/S0163-6383(98)90021-2</pub-id>
</element-citation>
</ref>
</ref-list>
</back>
</pmc>
<affiliations>
<list>
<country>
<li>Italie</li>
<li>Pays-Bas</li>
</country>
</list>
<tree>
<noCountry>
<name sortKey="Craig, Cathy M" sort="Craig, Cathy M" uniqKey="Craig C" first="Cathy M." last="Craig">Cathy M. Craig</name>
<name sortKey="Rodger, Matthew W M" sort="Rodger, Matthew W M" uniqKey="Rodger M" first="Matthew W. M." last="Rodger">Matthew W. M. Rodger</name>
</noCountry>
<country name="Italie">
<noRegion>
<name sortKey="Komeilipoor, Naeem" sort="Komeilipoor, Naeem" uniqKey="Komeilipoor N" first="Naeem" last="Komeilipoor">Naeem Komeilipoor</name>
</noRegion>
<name sortKey="Cesari, Paola" sort="Cesari, Paola" uniqKey="Cesari P" first="Paola" last="Cesari">Paola Cesari</name>
</country>
<country name="Pays-Bas">
<noRegion>
<name sortKey="Komeilipoor, Naeem" sort="Komeilipoor, Naeem" uniqKey="Komeilipoor N" first="Naeem" last="Komeilipoor">Naeem Komeilipoor</name>
</noRegion>
</country>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Wicri/Musique/explor/MozartV1/Data/Pmc/Checkpoint
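# Extract record 000072 (this record's internal identifier) from the biblio.hfd
# database, pretty-print the SXML, and page through it (tool behavior inferred
# from the Dilib command names):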
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000072 | SxmlIndent | more

Or

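# Same extraction, addressing the checkpoint data through $EXPLOR_AREA directly: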
HfdSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd -nk 000072 | SxmlIndent | more

To add a link to this page within the Wicri network

{{Explor lien
   |wiki=    Wicri/Musique
   |area=    MozartV1
   |flux=    Pmc
   |étape=   Checkpoint
   |type=    RBID
   |clé=     PMC:4369290
   |texte=   (Dis-)Harmony in movement: effects of musical dissonance on movement timing and form
}}
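When placed on a page of the Wicri/Musique wiki, this template is expected to render a link back to the present record (RBID PMC:4369290) in the MozartV1 exploration area.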

To generate wiki pages

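# Look up the record by its PubMed identifier in the RBID index, fetch the full
# entry from biblio.hfd, and convert it into a wiki page for the MozartV1 area
# (pipeline semantics inferred from the Dilib command names):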
HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/RBID.i   -Sk "pubmed:25725774" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd   \
       | NlmPubMed2Wicri -a MozartV1 

Wicri

This area was generated with Dilib version V0.6.20.
Data generation: Sun Apr 10 15:06:14 2016. Site generation: Tue Feb 7 15:40:35 2023