Exploration server on haptic devices

Warning: this site is under development!
Warning: this site is generated by automatic means from raw corpora.
The information is therefore not validated.

Musicians are more consistent: Gestural cross-modal mappings of pitch, loudness and tempo in real-time

Internal identifier: 003210 (Ncbi/Merge); previous: 003209; next: 003211


Authors: Mats B. Küssner; Dan Tidhar; Helen M. Prior; Daniel Leech-Wilkinson

Source:

RBID: PMC:4112934

Abstract

Cross-modal mappings of auditory stimuli reveal valuable insights into how humans make sense of sound and music. Whereas researchers have investigated cross-modal mappings of sound features varied in isolation within paradigms such as speeded classification and forced-choice matching tasks, investigations of representations of concurrently varied sound features (e.g., pitch, loudness and tempo) with overt gestures—accounting for the intrinsic link between movement and sound—are scant. To explore the role of bodily gestures in cross-modal mappings of auditory stimuli we asked 64 musically trained and untrained participants to represent pure tones—continually sounding and concurrently varied in pitch, loudness and tempo—with gestures while the sound stimuli were played. We hypothesized musical training to lead to more consistent mappings between pitch and height, loudness and distance/height, and tempo and speed of hand movement and muscular energy. Our results corroborate previously reported pitch vs. height (higher pitch leading to higher elevation in space) and tempo vs. speed (increasing tempo leading to increasing speed of hand movement) associations, but also reveal novel findings pertaining to musical training which influenced consistency of pitch mappings, annulling a commonly observed bias for convex (i.e., rising–falling) pitch contours. Moreover, we reveal effects of interactions between musical parameters on cross-modal mappings (e.g., pitch and loudness on speed of hand movement), highlighting the importance of studying auditory stimuli concurrently varied in different musical parameters. Results are discussed in light of cross-modal cognition, with particular emphasis on studies within (embodied) music cognition. Implications for theoretical refinements and potential clinical applications are provided.


URL: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4112934
DOI: 10.3389/fpsyg.2014.00789
PubMed: 25120506
PubMed Central: 4112934
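These identifiers are enough to retrieve the underlying article record programmatically. A minimal sketch in Python, assuming network access and using the public NCBI E-utilities efetch endpoint (not part of this exploration server):

    from urllib.parse import urlencode
    from urllib.request import urlopen

    EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"

    # PMC identifier taken from the record above.
    params = urlencode({"db": "pmc", "id": "4112934"})
    with urlopen(f"{EUTILS}?{params}") as response:
        jats_xml = response.read().decode("utf-8")  # JATS full-text XML

    print(jats_xml[:200])  # first characters of the article XML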


The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Musicians are more consistent: Gestural cross-modal mappings of pitch, loudness and tempo in real-time</title>
<author>
<name sortKey="Kussner, Mats B" sort="Kussner, Mats B" uniqKey="Kussner M" first="Mats B." last="Küssner">Mats B. Küssner</name>
</author>
<author>
<name sortKey="Tidhar, Dan" sort="Tidhar, Dan" uniqKey="Tidhar D" first="Dan" last="Tidhar">Dan Tidhar</name>
</author>
<author>
<name sortKey="Prior, Helen M" sort="Prior, Helen M" uniqKey="Prior H" first="Helen M." last="Prior">Helen M. Prior</name>
</author>
<author>
<name sortKey="Leech Wilkinson, Daniel" sort="Leech Wilkinson, Daniel" uniqKey="Leech Wilkinson D" first="Daniel" last="Leech-Wilkinson">Daniel Leech-Wilkinson</name>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">25120506</idno>
<idno type="pmc">4112934</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4112934</idno>
<idno type="RBID">PMC:4112934</idno>
<idno type="doi">10.3389/fpsyg.2014.00789</idno>
<date when="2014">2014</date>
<idno type="wicri:Area/Pmc/Corpus">001F13</idno>
<idno type="wicri:Area/Pmc/Curation">001F13</idno>
<idno type="wicri:Area/Pmc/Checkpoint">000A86</idno>
<idno type="wicri:Area/Ncbi/Merge">003210</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Musicians are more consistent: Gestural cross-modal mappings of pitch, loudness and tempo in real-time</title>
<author>
<name sortKey="Kussner, Mats B" sort="Kussner, Mats B" uniqKey="Kussner M" first="Mats B." last="Küssner">Mats B. Küssner</name>
</author>
<author>
<name sortKey="Tidhar, Dan" sort="Tidhar, Dan" uniqKey="Tidhar D" first="Dan" last="Tidhar">Dan Tidhar</name>
</author>
<author>
<name sortKey="Prior, Helen M" sort="Prior, Helen M" uniqKey="Prior H" first="Helen M." last="Prior">Helen M. Prior</name>
</author>
<author>
<name sortKey="Leech Wilkinson, Daniel" sort="Leech Wilkinson, Daniel" uniqKey="Leech Wilkinson D" first="Daniel" last="Leech-Wilkinson">Daniel Leech-Wilkinson</name>
</author>
</analytic>
<series>
<title level="j">Frontiers in Psychology</title>
<idno type="eISSN">1664-1078</idno>
<imprint>
<date when="2014">2014</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>Cross-modal mappings of auditory stimuli reveal valuable insights into how humans make sense of sound and music. Whereas researchers have investigated cross-modal mappings of sound features varied in isolation within paradigms such as speeded classification and forced-choice matching tasks, investigations of representations of concurrently varied sound features (e.g., pitch, loudness and tempo) with overt gestures—accounting for the intrinsic link between movement and sound—are scant. To explore the role of bodily gestures in cross-modal mappings of auditory stimuli we asked 64 musically trained and untrained participants to represent pure tones—continually sounding and concurrently varied in pitch, loudness and tempo—with gestures while the sound stimuli were played. We hypothesized musical training to lead to more consistent mappings between pitch and height, loudness and distance/height, and tempo and speed of hand movement and muscular energy. Our results corroborate previously reported pitch vs. height (higher pitch leading to higher elevation in space) and tempo vs. speed (increasing tempo leading to increasing speed of hand movement) associations, but also reveal novel findings pertaining to musical training which influenced consistency of pitch mappings, annulling a commonly observed bias for convex (i.e., rising–falling) pitch contours. Moreover, we reveal effects of interactions between musical parameters on cross-modal mappings (e.g., pitch and loudness on speed of hand movement), highlighting the importance of studying auditory stimuli concurrently varied in different musical parameters. Results are discussed in light of cross-modal cognition, with particular emphasis on studies within (embodied) music cognition. Implications for theoretical refinements and potential clinical applications are provided.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Athanasopoulos, G" uniqKey="Athanasopoulos G">G. Athanasopoulos</name>
</author>
<author>
<name sortKey="Moran, N" uniqKey="Moran N">N. Moran</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ben Artzi, E" uniqKey="Ben Artzi E">E. Ben-Artzi</name>
</author>
<author>
<name sortKey="Marks, L E" uniqKey="Marks L">L. E. Marks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bernstein, I H" uniqKey="Bernstein I">I. H. Bernstein</name>
</author>
<author>
<name sortKey="Edelstein, B A" uniqKey="Edelstein B">B. A. Edelstein</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Boersma, P" uniqKey="Boersma P">P. Boersma</name>
</author>
<author>
<name sortKey="Weenink, D" uniqKey="Weenink D">D. Weenink</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bregman, A S" uniqKey="Bregman A">A. S. Bregman</name>
</author>
<author>
<name sortKey="Steiger, H" uniqKey="Steiger H">H. Steiger</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cabrera, D" uniqKey="Cabrera D">D. Cabrera</name>
</author>
<author>
<name sortKey="Morimoto, M" uniqKey="Morimoto M">M. Morimoto</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Caramiaux, B" uniqKey="Caramiaux B">B. Caramiaux</name>
</author>
<author>
<name sortKey="Bevilacqua, F" uniqKey="Bevilacqua F">F. Bevilacqua</name>
</author>
<author>
<name sortKey="Bianco, T" uniqKey="Bianco T">T. Bianco</name>
</author>
<author>
<name sortKey="Schnell, N" uniqKey="Schnell N">N. Schnell</name>
</author>
<author>
<name sortKey="Houix, O" uniqKey="Houix O">O. Houix</name>
</author>
<author>
<name sortKey="Susini, P" uniqKey="Susini P">P. Susini</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Casasanto, D" uniqKey="Casasanto D">D. Casasanto</name>
</author>
<author>
<name sortKey="Phillips, W" uniqKey="Phillips W">W. Phillips</name>
</author>
<author>
<name sortKey="Boroditsky, L" uniqKey="Boroditsky L">L. Boroditsky</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chiou, R" uniqKey="Chiou R">R. Chiou</name>
</author>
<author>
<name sortKey="Rich, A N" uniqKey="Rich A">A. N. Rich</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Collier, W G" uniqKey="Collier W">W. G. Collier</name>
</author>
<author>
<name sortKey="Hubbard, T L" uniqKey="Hubbard T">T. L. Hubbard</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="De Dreu, M" uniqKey="De Dreu M">M. De Dreu</name>
</author>
<author>
<name sortKey="Van Der Wilk, A" uniqKey="Van Der Wilk A">A. Van Der Wilk</name>
</author>
<author>
<name sortKey="Poppe, E" uniqKey="Poppe E">E. Poppe</name>
</author>
<author>
<name sortKey="Kwakkel, G" uniqKey="Kwakkel G">G. Kwakkel</name>
</author>
<author>
<name sortKey="Van Wegen, E" uniqKey="Van Wegen E">E. Van Wegen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Deroy, O" uniqKey="Deroy O">O. Deroy</name>
</author>
<author>
<name sortKey="Auvray, M" uniqKey="Auvray M">M. Auvray</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dolscheid, S" uniqKey="Dolscheid S">S. Dolscheid</name>
</author>
<author>
<name sortKey="Hunnius, S" uniqKey="Hunnius S">S. Hunnius</name>
</author>
<author>
<name sortKey="Casasanto, D" uniqKey="Casasanto D">D. Casasanto</name>
</author>
<author>
<name sortKey="Majid, A" uniqKey="Majid A">A. Majid</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dolscheid, S" uniqKey="Dolscheid S">S. Dolscheid</name>
</author>
<author>
<name sortKey="Shayan, S" uniqKey="Shayan S">S. Shayan</name>
</author>
<author>
<name sortKey="Majid, A" uniqKey="Majid A">A. Majid</name>
</author>
<author>
<name sortKey="Casasanto, D" uniqKey="Casasanto D">D. Casasanto</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Eitan, Z" uniqKey="Eitan Z">Z. Eitan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Eitan, Z" uniqKey="Eitan Z">Z. Eitan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Eitan, Z" uniqKey="Eitan Z">Z. Eitan</name>
</author>
<author>
<name sortKey="Granot, R Y" uniqKey="Granot R">R. Y. Granot</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Eitan, Z" uniqKey="Eitan Z">Z. Eitan</name>
</author>
<author>
<name sortKey="Granot, R Y" uniqKey="Granot R">R. Y. Granot</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Eitan, Z" uniqKey="Eitan Z">Z. Eitan</name>
</author>
<author>
<name sortKey="Schupak, A" uniqKey="Schupak A">A. Schupak</name>
</author>
<author>
<name sortKey="Gotler, A" uniqKey="Gotler A">A. Gotler</name>
</author>
<author>
<name sortKey="Marks, L" uniqKey="Marks L">L. Marks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Eitan, Z" uniqKey="Eitan Z">Z. Eitan</name>
</author>
<author>
<name sortKey="Schupak, A" uniqKey="Schupak A">A. Schupak</name>
</author>
<author>
<name sortKey="Marks, L E" uniqKey="Marks L">L. E. Marks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Eitan, Z" uniqKey="Eitan Z">Z. Eitan</name>
</author>
<author>
<name sortKey="Timmers, R" uniqKey="Timmers R">R. Timmers</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ernst, M O" uniqKey="Ernst M">M. O. Ernst</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Evans, K K" uniqKey="Evans K">K. K. Evans</name>
</author>
<author>
<name sortKey="Treisman, A" uniqKey="Treisman A">A. Treisman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fry, B" uniqKey="Fry B">B. Fry</name>
</author>
<author>
<name sortKey="Reas, C" uniqKey="Reas C">C. Reas</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gallace, A" uniqKey="Gallace A">A. Gallace</name>
</author>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C. Spence</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="God Y, R I" uniqKey="God Y R">R. I. Godøy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="God Y, R I" uniqKey="God Y R">R. I. Godøy</name>
</author>
<author>
<name sortKey="Haga, E" uniqKey="Haga E">E. Haga</name>
</author>
<author>
<name sortKey="Jensenius, A R" uniqKey="Jensenius A">A. R. Jensenius</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Huron, D" uniqKey="Huron D">D. Huron</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Johnson, M" uniqKey="Johnson M">M. Johnson</name>
</author>
<author>
<name sortKey="Larson, S" uniqKey="Larson S">S. Larson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kestenberg Amighi, J" uniqKey="Kestenberg Amighi J">J. Kestenberg-Amighi</name>
</author>
<author>
<name sortKey="Loman, S" uniqKey="Loman S">S. Loman</name>
</author>
<author>
<name sortKey="Lewis, P" uniqKey="Lewis P">P. Lewis</name>
</author>
<author>
<name sortKey="Sossin, K M" uniqKey="Sossin K">K. M. Sossin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kohn, D" uniqKey="Kohn D">D. Kohn</name>
</author>
<author>
<name sortKey="Eitan, Z" uniqKey="Eitan Z">Z. Eitan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kohn, D" uniqKey="Kohn D">D. Kohn</name>
</author>
<author>
<name sortKey="Eitan, Z" uniqKey="Eitan Z">Z. Eitan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kozak, M" uniqKey="Kozak M">M. Kozak</name>
</author>
<author>
<name sortKey="Nymoen, K" uniqKey="Nymoen K">K. Nymoen</name>
</author>
<author>
<name sortKey="God Y, R I" uniqKey="God Y R">R. I. Godøy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kussner, M B" uniqKey="Kussner M">M. B. Küssner</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kussner, M B" uniqKey="Kussner M">M. B. Küssner</name>
</author>
<author>
<name sortKey="Leech Wilkinson, D" uniqKey="Leech Wilkinson D">D. Leech-Wilkinson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Leech Wilkinson, D" uniqKey="Leech Wilkinson D">D. Leech-Wilkinson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lega, C" uniqKey="Lega C">C. Lega</name>
</author>
<author>
<name sortKey="Cattaneo, Z" uniqKey="Cattaneo Z">Z. Cattaneo</name>
</author>
<author>
<name sortKey="Merabet, L B" uniqKey="Merabet L">L. B. Merabet</name>
</author>
<author>
<name sortKey="Vecchi, T" uniqKey="Vecchi T">T. Vecchi</name>
</author>
<author>
<name sortKey="Cucchi, S" uniqKey="Cucchi S">S. Cucchi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lewkowicz, D J" uniqKey="Lewkowicz D">D. J. Lewkowicz</name>
</author>
<author>
<name sortKey="Turkewitz, G" uniqKey="Turkewitz G">G. Turkewitz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lidji, P" uniqKey="Lidji P">P. Lidji</name>
</author>
<author>
<name sortKey="Kolinsky, R" uniqKey="Kolinsky R">R. Kolinsky</name>
</author>
<author>
<name sortKey="Lochy, A" uniqKey="Lochy A">A. Lochy</name>
</author>
<author>
<name sortKey="Morais, J" uniqKey="Morais J">J. Morais</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lipscomb, S D" uniqKey="Lipscomb S">S. D. Lipscomb</name>
</author>
<author>
<name sortKey="Kim, E M" uniqKey="Kim E">E. M. Kim</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ludwig, V U" uniqKey="Ludwig V">V. U. Ludwig</name>
</author>
<author>
<name sortKey="Adachi, I" uniqKey="Adachi I">I. Adachi</name>
</author>
<author>
<name sortKey="Matsuzawa, T" uniqKey="Matsuzawa T">T. Matsuzawa</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Maeda, F" uniqKey="Maeda F">F. Maeda</name>
</author>
<author>
<name sortKey="Kanai, R" uniqKey="Kanai R">R. Kanai</name>
</author>
<author>
<name sortKey="Shimojo, S" uniqKey="Shimojo S">S. Shimojo</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Maes, P J" uniqKey="Maes P">P.-J. Maes</name>
</author>
<author>
<name sortKey="Leman, M" uniqKey="Leman M">M. Leman</name>
</author>
<author>
<name sortKey="Palmer, C" uniqKey="Palmer C">C. Palmer</name>
</author>
<author>
<name sortKey="Wanderley, M M" uniqKey="Wanderley M">M. M. Wanderley</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Marks, L E" uniqKey="Marks L">L. E. Marks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Martino, G" uniqKey="Martino G">G. Martino</name>
</author>
<author>
<name sortKey="Marks, L E" uniqKey="Marks L">L. E. Marks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Melara, R D" uniqKey="Melara R">R. D. Melara</name>
</author>
<author>
<name sortKey="O Brien, T P" uniqKey="O Brien T">T. P. O'Brien</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Micheyl, C" uniqKey="Micheyl C">C. Micheyl</name>
</author>
<author>
<name sortKey="Delhommeau, K" uniqKey="Delhommeau K">K. Delhommeau</name>
</author>
<author>
<name sortKey="Perrot, X" uniqKey="Perrot X">X. Perrot</name>
</author>
<author>
<name sortKey="Oxenham, A J" uniqKey="Oxenham A">A. J. Oxenham</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Miller, A" uniqKey="Miller A">A. Miller</name>
</author>
<author>
<name sortKey="Werner, H" uniqKey="Werner H">H. Werner</name>
</author>
<author>
<name sortKey="Wapner, S" uniqKey="Wapner S">S. Wapner</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Miller, J" uniqKey="Miller J">J. Miller</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mondloch, C J" uniqKey="Mondloch C">C. J. Mondloch</name>
</author>
<author>
<name sortKey="Maurer, D" uniqKey="Maurer D">D. Maurer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mossbridge, J A" uniqKey="Mossbridge J">J. A. Mossbridge</name>
</author>
<author>
<name sortKey="Grabowecky, M" uniqKey="Grabowecky M">M. Grabowecky</name>
</author>
<author>
<name sortKey="Suzuki, S" uniqKey="Suzuki S">S. Suzuki</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mudd, S A" uniqKey="Mudd S">S. A. Mudd</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Neuhoff, J G" uniqKey="Neuhoff J">J. G. Neuhoff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nymoen, K" uniqKey="Nymoen K">K. Nymoen</name>
</author>
<author>
<name sortKey="Caramiaux, B" uniqKey="Caramiaux B">B. Caramiaux</name>
</author>
<author>
<name sortKey="Kozak, M" uniqKey="Kozak M">M. Kozak</name>
</author>
<author>
<name sortKey="Torresen, J" uniqKey="Torresen J">J. Torresen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nymoen, K" uniqKey="Nymoen K">K. Nymoen</name>
</author>
<author>
<name sortKey="God Y, R I" uniqKey="God Y R">R. I. Godøy</name>
</author>
<author>
<name sortKey="Jensenius, A R" uniqKey="Jensenius A">A. R. Jensenius</name>
</author>
<author>
<name sortKey="Torresen, J" uniqKey="Torresen J">J. Torresen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Patching, G R" uniqKey="Patching G">G. R. Patching</name>
</author>
<author>
<name sortKey="Quinlan, P T" uniqKey="Quinlan P">P. T. Quinlan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pedley, P E" uniqKey="Pedley P">P. E. Pedley</name>
</author>
<author>
<name sortKey="Harper, R S" uniqKey="Harper R">R. S. Harper</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pratt, C C" uniqKey="Pratt C">C. C. Pratt</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rochester, L" uniqKey="Rochester L">L. Rochester</name>
</author>
<author>
<name sortKey="Baker, K" uniqKey="Baker K">K. Baker</name>
</author>
<author>
<name sortKey="Hetherington, V" uniqKey="Hetherington V">V. Hetherington</name>
</author>
<author>
<name sortKey="Jones, D" uniqKey="Jones D">D. Jones</name>
</author>
<author>
<name sortKey="Willems, A M" uniqKey="Willems A">A.-M. Willems</name>
</author>
<author>
<name sortKey="Kwakkel, G" uniqKey="Kwakkel G">G. Kwakkel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Roffler, S K" uniqKey="Roffler S">S. K. Roffler</name>
</author>
<author>
<name sortKey="Butler, R A" uniqKey="Butler R">R. A. Butler</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rusconi, E" uniqKey="Rusconi E">E. Rusconi</name>
</author>
<author>
<name sortKey="Kwan, B" uniqKey="Kwan B">B. Kwan</name>
</author>
<author>
<name sortKey="Giordano, B L" uniqKey="Giordano B">B. L. Giordano</name>
</author>
<author>
<name sortKey="Umilta, C" uniqKey="Umilta C">C. Umiltà</name>
</author>
<author>
<name sortKey="Butterworth, B" uniqKey="Butterworth B">B. Butterworth</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schubert, E" uniqKey="Schubert E">E. Schubert</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Smalley, D" uniqKey="Smalley D">D. Smalley</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C. Spence</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C. Spence</name>
</author>
<author>
<name sortKey="Deroy, O" uniqKey="Deroy O">O. Deroy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stern, D N" uniqKey="Stern D">D. N. Stern</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stewart, L" uniqKey="Stewart L">L. Stewart</name>
</author>
<author>
<name sortKey="Verdonschot, R G" uniqKey="Verdonschot R">R. G. Verdonschot</name>
</author>
<author>
<name sortKey="Nasralla, P" uniqKey="Nasralla P">P. Nasralla</name>
</author>
<author>
<name sortKey="Lanipekun, J" uniqKey="Lanipekun J">J. Lanipekun</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stumpf, C" uniqKey="Stumpf C">C. Stumpf</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Suzuki, Y" uniqKey="Suzuki Y">Y. Suzuki</name>
</author>
<author>
<name sortKey="Takeshima, H" uniqKey="Takeshima H">H. Takeshima</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tan, S L" uniqKey="Tan S">S.-L. Tan</name>
</author>
<author>
<name sortKey="Cohen, A J" uniqKey="Cohen A">A. J. Cohen</name>
</author>
<author>
<name sortKey="Lipscomb, S D" uniqKey="Lipscomb S">S. D. Lipscomb</name>
</author>
<author>
<name sortKey="Kendall, R A" uniqKey="Kendall R">R. A. Kendall</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Taylor, J E" uniqKey="Taylor J">J. E. Taylor</name>
</author>
<author>
<name sortKey="Witt, J" uniqKey="Witt J">J. Witt</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Todd, N P M" uniqKey="Todd N">N. P. M. Todd</name>
</author>
<author>
<name sortKey="Cody, F W J" uniqKey="Cody F">F. W. J. Cody</name>
</author>
<author>
<name sortKey="Banks, J R" uniqKey="Banks J">J. R. Banks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Trimble, O C" uniqKey="Trimble O">O. C. Trimble</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Dyck, E" uniqKey="Van Dyck E">E. Van Dyck</name>
</author>
<author>
<name sortKey="Moelants, D" uniqKey="Moelants D">D. Moelants</name>
</author>
<author>
<name sortKey="Demey, M" uniqKey="Demey M">M. Demey</name>
</author>
<author>
<name sortKey="Deweppe, A" uniqKey="Deweppe A">A. Deweppe</name>
</author>
<author>
<name sortKey="Coussement, P" uniqKey="Coussement P">P. Coussement</name>
</author>
<author>
<name sortKey="Leman, M" uniqKey="Leman M">M. Leman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Wijck, F" uniqKey="Van Wijck F">F. Van Wijck</name>
</author>
<author>
<name sortKey="Knox, D" uniqKey="Knox D">D. Knox</name>
</author>
<author>
<name sortKey="Dodds, C" uniqKey="Dodds C">C. Dodds</name>
</author>
<author>
<name sortKey="Cassidy, G" uniqKey="Cassidy G">G. Cassidy</name>
</author>
<author>
<name sortKey="Alexander, G" uniqKey="Alexander G">G. Alexander</name>
</author>
<author>
<name sortKey="Macdonald, R" uniqKey="Macdonald R">R. Macdonald</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vines, B W" uniqKey="Vines B">B. W. Vines</name>
</author>
<author>
<name sortKey="Krumhansl, C L" uniqKey="Krumhansl C">C. L. Krumhansl</name>
</author>
<author>
<name sortKey="Wanderley, M M" uniqKey="Wanderley M">M. M. Wanderley</name>
</author>
<author>
<name sortKey="Levitin, D J" uniqKey="Levitin D">D. J. Levitin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wagner, K" uniqKey="Wagner K">K. Wagner</name>
</author>
<author>
<name sortKey="Dobkins, K R" uniqKey="Dobkins K">K. R. Dobkins</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Walker, P" uniqKey="Walker P">P. Walker</name>
</author>
<author>
<name sortKey="Bremner, J G" uniqKey="Bremner J">J. G. Bremner</name>
</author>
<author>
<name sortKey="Mason, U" uniqKey="Mason U">U. Mason</name>
</author>
<author>
<name sortKey="Spring, J" uniqKey="Spring J">J. Spring</name>
</author>
<author>
<name sortKey="Mattock, K" uniqKey="Mattock K">K. Mattock</name>
</author>
<author>
<name sortKey="Slater, A" uniqKey="Slater A">A. Slater</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Walker, P" uniqKey="Walker P">P. Walker</name>
</author>
<author>
<name sortKey="Smith, S" uniqKey="Smith S">S. Smith</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Walker, R" uniqKey="Walker R">R. Walker</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Widmann, A" uniqKey="Widmann A">A. Widmann</name>
</author>
<author>
<name sortKey="Kujala, T" uniqKey="Kujala T">T. Kujala</name>
</author>
<author>
<name sortKey="Tervaniemi, M" uniqKey="Tervaniemi M">M. Tervaniemi</name>
</author>
<author>
<name sortKey="Kujala, A" uniqKey="Kujala A">A. Kujala</name>
</author>
<author>
<name sortKey="Schroger, E" uniqKey="Schroger E">E. Schröger</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
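The TEI record above can be mined directly for its identifiers and author names. A minimal sketch, assuming tei_xml holds the complete <TEI>...</TEI> element copied from this page, using only Python's standard library:

    import xml.etree.ElementTree as ET

    tei = ET.fromstring(tei_xml)  # tei_xml: the <TEI> element shown above

    # Every <idno> in the record, keyed by its type attribute
    # (pmid, pmc, doi, RBID, wicri:Area/..., eISSN).
    idnos = {idno.get("type"): idno.text for idno in tei.iter("idno")}
    assert idnos["doi"] == "10.3389/fpsyg.2014.00789"
    assert idnos["pmid"] == "25120506"

    # Author display names from the title statement.
    authors = [n.text for n in tei.findall(".//titleStmt/author/name")]
    # -> ['Mats B. Küssner', 'Dan Tidhar', 'Helen M. Prior',
    #     'Daniel Leech-Wilkinson']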
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Front Psychol</journal-id>
<journal-id journal-id-type="iso-abbrev">Front Psychol</journal-id>
<journal-id journal-id-type="publisher-id">Front. Psychol.</journal-id>
<journal-title-group>
<journal-title>Frontiers in Psychology</journal-title>
</journal-title-group>
<issn pub-type="epub">1664-1078</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">25120506</article-id>
<article-id pub-id-type="pmc">4112934</article-id>
<article-id pub-id-type="doi">10.3389/fpsyg.2014.00789</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Psychology</subject>
<subj-group>
<subject>Original Research Article</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Musicians are more consistent: Gestural cross-modal mappings of pitch, loudness and tempo in real-time</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Küssner</surname>
<given-names>Mats B.</given-names>
</name>
<xref ref-type="author-notes" rid="fn001">
<sup>*</sup>
</xref>
<uri xlink:type="simple" xlink:href="http://community.frontiersin.org/people/u/155000"></uri>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Tidhar</surname>
<given-names>Dan</given-names>
</name>
<uri xlink:type="simple" xlink:href="http://community.frontiersin.org/people/u/173516"></uri>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Prior</surname>
<given-names>Helen M.</given-names>
</name>
<uri xlink:type="simple" xlink:href="http://community.frontiersin.org/people/u/173502"></uri>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Leech-Wilkinson</surname>
<given-names>Daniel</given-names>
</name>
<uri xlink:type="simple" xlink:href="http://community.frontiersin.org/people/u/173406"></uri>
</contrib>
</contrib-group>
<aff>
<institution>Department of Music, King's College London</institution>
<country>London, UK</country>
</aff>
<author-notes>
<fn fn-type="edited-by">
<p>Edited by: Eckart Altenmüller, University of Music and Drama Hannover, Germany</p>
</fn>
<fn fn-type="edited-by">
<p>Reviewed by: Peter Cariani, Harvard Medical School, USA; Alfred Oliver Effenberg, Leibniz University Hannover, Germany</p>
</fn>
<corresp id="fn001">*Correspondence: Mats B. Küssner, Department of Music, King's College London, Strand, London WC2R 2LS, UK e-mail:
<email xlink:type="simple">mats.kussner@gmail.com</email>
</corresp>
<fn fn-type="other" id="fn002">
<p>This article was submitted to Auditory Cognitive Neuroscience, a section of the journal Frontiers in Psychology.</p>
</fn>
</author-notes>
<pub-date pub-type="epub">
<day>28</day>
<month>7</month>
<year>2014</year>
</pub-date>
<pub-date pub-type="collection">
<year>2014</year>
</pub-date>
<volume>5</volume>
<elocation-id>789</elocation-id>
<history>
<date date-type="received">
<day>02</day>
<month>5</month>
<year>2014</year>
</date>
<date date-type="accepted">
<day>04</day>
<month>7</month>
<year>2014</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright © 2014 Küssner, Tidhar, Prior and Leech-Wilkinson.</copyright-statement>
<copyright-year>2014</copyright-year>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/3.0/">
<license-p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</license-p>
</license>
</permissions>
<abstract>
<p>Cross-modal mappings of auditory stimuli reveal valuable insights into how humans make sense of sound and music. Whereas researchers have investigated cross-modal mappings of sound features varied in isolation within paradigms such as speeded classification and forced-choice matching tasks, investigations of representations of concurrently varied sound features (e.g., pitch, loudness and tempo) with overt gestures—accounting for the intrinsic link between movement and sound—are scant. To explore the role of bodily gestures in cross-modal mappings of auditory stimuli we asked 64 musically trained and untrained participants to represent pure tones—continually sounding and concurrently varied in pitch, loudness and tempo—with gestures while the sound stimuli were played. We hypothesized musical training to lead to more consistent mappings between pitch and height, loudness and distance/height, and tempo and speed of hand movement and muscular energy. Our results corroborate previously reported pitch vs. height (higher pitch leading to higher elevation in space) and tempo vs. speed (increasing tempo leading to increasing speed of hand movement) associations, but also reveal novel findings pertaining to musical training which influenced consistency of pitch mappings, annulling a commonly observed bias for convex (i.e., rising–falling) pitch contours. Moreover, we reveal effects of interactions between musical parameters on cross-modal mappings (e.g., pitch and loudness on speed of hand movement), highlighting the importance of studying auditory stimuli concurrently varied in different musical parameters. Results are discussed in light of cross-modal cognition, with particular emphasis on studies within (embodied) music cognition. Implications for theoretical refinements and potential clinical applications are provided.</p>
</abstract>
<kwd-group>
<kwd>cross-modal mappings</kwd>
<kwd>gesture</kwd>
<kwd>embodied music cognition</kwd>
<kwd>musical training</kwd>
<kwd>real-time mappings</kwd>
</kwd-group>
<counts>
<fig-count count="4"></fig-count>
<table-count count="4"></table-count>
<equation-count count="0"></equation-count>
<ref-count count="81"></ref-count>
<page-count count="15"></page-count>
<word-count count="12561"></word-count>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="introduction" id="s1">
<title>Introduction</title>
<sec>
<title>Origin and shaping of cross-modal correspondences</title>
<p>Research on cross-modal correspondences has shown that people readily map features of auditory stimuli such as pitch and loudness onto the visual or visuo-spatial domain (for reviews see e.g., Marks,
<xref rid="B44" ref-type="bibr">2004</xref>
; Spence,
<xref rid="B64" ref-type="bibr">2011</xref>
; Eitan,
<xref rid="B15" ref-type="bibr">2013a</xref>
). The most extensively studied cross-modal correspondence—that of pitch height (henceforth “pitch”) and spatial height—has produced robust effects revealing that higher (lower) pitch is associated with higher (lower) elevation in space (Pratt,
<xref rid="B58" ref-type="bibr">1930</xref>
; Trimble,
<xref rid="B73" ref-type="bibr">1934</xref>
; Miller et al.,
<xref rid="B48" ref-type="bibr">1958</xref>
; Pedley and Harper,
<xref rid="B57" ref-type="bibr">1959</xref>
; Mudd,
<xref rid="B52" ref-type="bibr">1963</xref>
; Roffler and Butler,
<xref rid="B60" ref-type="bibr">1968</xref>
; Bernstein and Edelstein,
<xref rid="B3" ref-type="bibr">1971</xref>
; Bregman and Steiger,
<xref rid="B5" ref-type="bibr">1980</xref>
; Melara and O'Brien,
<xref rid="B46" ref-type="bibr">1987</xref>
; Walker,
<xref rid="B80" ref-type="bibr">1987</xref>
; Miller,
<xref rid="B49" ref-type="bibr">1991</xref>
; Ben-Artzi and Marks,
<xref rid="B2" ref-type="bibr">1995</xref>
; Patching and Quinlan,
<xref rid="B56" ref-type="bibr">2002</xref>
; Casasanto et al.,
<xref rid="B8" ref-type="bibr">2003</xref>
; Widmann et al.,
<xref rid="B81" ref-type="bibr">2004</xref>
; Rusconi et al.,
<xref rid="B61" ref-type="bibr">2006</xref>
; Cabrera and Morimoto,
<xref rid="B6" ref-type="bibr">2007</xref>
; Mossbridge et al.,
<xref rid="B51" ref-type="bibr">2011</xref>
). It is unclear, however, what exactly the reason for this cross-modal correspondence is. Different causes of cross-modal mappings have been proposed, e.g., macro-level factors such as development, statistical learning, or culture more generally, and micro-level factors pertaining to experimental paradigms and stimuli selection.</p>
<p>With regard to the impact of culture, there is evidence that the kinds of mappings adults display are influenced by language (Dolscheid et al.,
<xref rid="B14" ref-type="bibr">2013</xref>
), emphasizing the importance of conceptual metaphor (Johnson and Larson,
<xref rid="B29" ref-type="bibr">2003</xref>
; Eitan and Timmers,
<xref rid="B21" ref-type="bibr">2010</xref>
), which had already been identified by Carl Stumpf as the key mechanism underlying spatial mappings of pitch (Stumpf,
<xref rid="B68" ref-type="bibr">1883</xref>
). Another cultural factor is musical training: trained individuals map auditory features more consistently than untrained individuals, but the kinds of mappings remain consistent across most Western individuals (Eitan and Granot,
<xref rid="B17" ref-type="bibr">2006</xref>
; Küssner and Leech-Wilkinson,
<xref rid="B35" ref-type="bibr">2014</xref>
). While culture, and particularly language, thus plays a pivotal role in
<italic>shaping</italic>
cross-modal correspondences, a growing body of research suggests that their
<italic>origin</italic>
is to be found elsewhere (but see also Deroy and Auvray,
<xref rid="B12" ref-type="bibr">2013</xref>
). For instance, studies with infants indicate that 3–4-month-olds show pitch vs. height and pitch vs. sharpness associations (Walker et al.,
<xref rid="B78" ref-type="bibr">2010</xref>
), 4-month-olds pitch vs. height and pitch vs. thickness associations (Dolscheid et al.,
<xref rid="B13" ref-type="bibr">2012</xref>
), and 3–4-week-olds loudness vs. brightness associations (Lewkowicz and Turkewitz,
<xref rid="B38" ref-type="bibr">1980</xref>
). Combined with evidence from audio-visual mappings in non-human mammals (Ludwig et al.,
<xref rid="B41" ref-type="bibr">2011</xref>
), this has led some scholars to conclude that cross-modal correspondences are innate, possibly based on a wide range of neural connections that are gradually lost due to synaptic pruning (Mondloch and Maurer,
<xref rid="B50" ref-type="bibr">2004</xref>
; Wagner and Dobkins,
<xref rid="B77" ref-type="bibr">2011</xref>
). Others have argued that cross-modal correspondences may be learned rapidly through external, non-linguistic stimulation (Ernst,
<xref rid="B22" ref-type="bibr">2007</xref>
; as discussed in Spence,
<xref rid="B64" ref-type="bibr">2011</xref>
) or may be acquired indirectly in cases where the occurrence of cross-modal pairings in the environment seems unlikely (Spence and Deroy,
<xref rid="B65" ref-type="bibr">2012</xref>
). Further evidence supporting the prelinguistic origin hypothesis comes from studies showing that cross-modal mappings are processed at an early, perceptual level (Maeda et al.,
<xref rid="B42" ref-type="bibr">2004</xref>
; Evans and Treisman,
<xref rid="B23" ref-type="bibr">2010</xref>
), unmediated by later, semantic processing (Martino and Marks,
<xref rid="B45" ref-type="bibr">1999</xref>
; but see also Chiou and Rich,
<xref rid="B9" ref-type="bibr">2012</xref>
). Finally, it is important to highlight the role of common neural substrates of cross-modal correspondences (Spence,
<xref rid="B64" ref-type="bibr">2011</xref>
), which might be best accounted for by neurocomputational models.</p>
</sec>
<sec>
<title>Complexity of audio-visuo-spatial correspondences</title>
<p>As implied above, cross-modal correspondences between auditory features and the visuo-spatial domain are manifold, sometimes referred to as one-to-many and many-to-one correspondences (Eitan,
<xref rid="B15" ref-type="bibr">2013a</xref>
). For instance, pitch has been associated with vertical height (Walker,
<xref rid="B80" ref-type="bibr">1987</xref>
), distance (Eitan and Granot,
<xref rid="B17" ref-type="bibr">2006</xref>
), speed (Walker and Smith,
<xref rid="B79" ref-type="bibr">1986</xref>
), size (Mondloch and Maurer,
<xref rid="B50" ref-type="bibr">2004</xref>
) and brightness (Collier and Hubbard,
<xref rid="B10" ref-type="bibr">1998</xref>
)—i.e., one-to-many—while the same associations have been found for loudness (Lewkowicz and Turkewitz,
<xref rid="B38" ref-type="bibr">1980</xref>
; Neuhoff,
<xref rid="B53" ref-type="bibr">2001</xref>
; Lipscomb and Kim,
<xref rid="B40" ref-type="bibr">2004</xref>
; Eitan et al.,
<xref rid="B20" ref-type="bibr">2008</xref>
; Kohn and Eitan,
<xref rid="B31" ref-type="bibr">2009</xref>
), rendering, for example, pitch/loudness vs. height a many-to-one correspondence. The full story is, however, more complex than that, as outlined in Eitan (
<xref rid="B15" ref-type="bibr">2013a</xref>
). First, the type of auditory stimuli, whether static or dynamic, can give rise to opposing results. For instance, static high and low pitches paired with small and large visual disks, respectively, have been shown to enhance performance in a speeded classification paradigm (Gallace and Spence,
<xref rid="B25" ref-type="bibr">2006</xref>
), providing evidence that high pitch is associated with small objects and low pitch with large objects. On the other hand, Eitan et al. (
<xref rid="B19" ref-type="bibr">2013</xref>
), using a similar paradigm, demonstrated that rising pitches paired with an increasing visual object and falling pitches paired with a decreasing visual object yielded significantly faster responses than rising pitches paired with a decreasing visual object and falling pitches with an increasing visual object. Secondly, manipulating several auditory features concurrently influences participants' cross-modal images of motion (Eitan and Granot,
<xref rid="B18" ref-type="bibr">2011</xref>
). For instance, an increase in tempo, usually associated with an increase in speed, did not lead to an increase in speed when loudness was concurrently decreasing. Similarly, a rise in pitch, usually associated with an increase in vertical position, led to a
<italic>decrease</italic>
in vertical position when loudness was concurrently decreasing. Since environmental sounds, but especially music, are very often varied
<italic>dynamically and concurrently</italic>
in pitch, loudness, tempo, timbre etc., investigating cross-modal correspondences of these features—which is frequently done by manipulating them in isolation, entailing obvious experimental advantages but also the even more obvious lack of ecological validity—requires approaches taking into consideration the multiple dynamic co-variations of sound features.</p>
</sec>
<sec>
<title>Cross-modal mappings of sound involving real or imagined bodily movements</title>
<p>Whereas most experimental paradigms to date have used speeded identification, speeded classification or forced-choice matching tasks, researchers have recently begun to apply paradigms involving real-time drawings (Küssner and Leech-Wilkinson,
<xref rid="B35" ref-type="bibr">2014</xref>
), gestures (Kozak et al.,
<xref rid="B33" ref-type="bibr">2002</xref>
) or imagined bodily movements (Eitan and Granot,
<xref rid="B17" ref-type="bibr">2006</xref>
), in order to delineate a more differentiated picture of cross-modal mappings. Asking participants to imagine the movements of a humanoid character in response to changes in a range of musical parameters, Eitan and Granot (
<xref rid="B17" ref-type="bibr">2006</xref>
) found that pitch is mapped onto all three spatial axes, including asymmetric pitch vs. height mappings such that decreasing pitch was more strongly associated with descending movements than increasing pitch with ascending movements. Similarly, the authors report two asymmetric mappings of loudness: (1) decreasing loudness was more strongly associated with spatial descent than increasing loudness with spatial ascent, and (2) increasing loudness was more strongly associated with accelerating movements than decreasing loudness with decelerating movements. What is more, results from a study investigating participants' perceptions of the congruency between vertical arm movements and changes in pitch and loudness revealed that concurrent rising–falling movements of one's arm and pitch or loudness gave rise to higher ratings than concurrent falling–rising movements (Kohn and Eitan,
<xref rid="B32" ref-type="bibr">2012</xref>
). These striking asymmetries might be part of a discrepancy between response-time or rating paradigms and those involving more extensive, overt bodily movements.</p>
<p>Only a few studies have investigated how changes in auditory stimuli are mapped onto real bodily movements. In an exploratory study, Godøy et al. (
<xref rid="B27" ref-type="bibr">2006</xref>
) asked participants to respond with hand gestures—captured with a pen on an electronic graphics tablet—to a set of auditory stimuli that comprised instrumental, electronic and environmental sounds and was classified according to a typology developed by Pierre Schaeffer (e.g., impulsive, continuous and iterative sounds). While the authors report a “fair amount of consistency in some of the responses” such as ascending movements for increasing pitch, they do stress the need for large-scale studies involving the investigation of free movements in three-dimensional space as well as of the influence of musical training. In a subsequent study from the same group, Nymoen et al. (
<xref rid="B54" ref-type="bibr">2011</xref>
) found strong associations between pitch and vertical movements, between loudness and speed, and between loudness and horizontal movements, when comparing people's gestural responses to pitched and non-pitched sounds, captured by moving a rod whose movements were supposed to represent sound-producing gestures. While the authors' argument for “a one-dimensional intrinsic relationship between pitch and vertical position” is conceivable in view of their findings, the lack of bidirectional pitch changes (e.g., rising–falling contour) within their auditory stimuli precludes conclusions about potential asymmetric mappings of pitch with bodily movements.</p>
<p>In a similar experiment, Caramiaux et al. (
<xref rid="B7" ref-type="bibr">2014</xref>
) compared hand gestures in response to action and non-action related sounds, confirming their hypothesis that the former would entail sound-producing gestures while the latter would result in gestures representing the sound's spectromorphology (Smalley,
<xref rid="B63" ref-type="bibr">1997</xref>
), i.e., the overall sonic shape. Comparing speed profiles between participants revealed that they were more similar for non-action- than action-related sounds. This shows—and is supported by analysis of interviews carried out with the participants—that once a particular action (e.g., crushing a metallic can) has been identified, the realization of the accompanying gesture is highly idiosyncratic. On the other hand, non-action-related sounds, which are particularly pertinent to the present study, gave rise to more consistent gestural responses.</p>
<p>One study has been carried out investigating free representational movements to sound, in which 5- and 8-year-old children were presented with auditory stimuli separately varied in pitch, loudness and tempo (Kohn and Eitan,
<xref rid="B31" ref-type="bibr">2009</xref>
). Three independent referees trained in Laban Movement Analysis rated the observed behavior—the sound being muted—according to the movement and direction along the x-, y- and z-axes, the muscular energy and the speed. Pitch was most strongly associated with the vertical axis, loudness with vertical axis and muscular energy, and tempo with speed and muscular energy. In terms of direction, changes in loudness and tempo gave rise to congruent movement patterns, that is, increasing loudness was represented with upward movement and higher muscular activity, whereas decreasing loudness was represented with downward movement and lower muscular activity. The direction of movement along the vertical axis in response to changes in pitch was congruent for increasing–decreasing pitch contours but not for decreasing–increasing contours. This finding is particularly relevant for the present study, as it highlights the asymmetric nature of bodily cross-modal mappings.</p>
</sec>
<sec>
<title>Aims and hypotheses</title>
<p>To sum up, there is currently a lack of studies investigating (a) how auditory stimuli concurrently varied in several sound features are mapped cross-modally and (b) how approaches involving gestural (i.e., bodily) responses affect cross-modal correspondences. To address this gap, and to provide a starting point for researchers to develop further testable hypotheses, the present exploratory study aims to identify how pitch, loudness and tempo are represented gesturally in real-time (i.e., occurring simultaneously with latencies <100 ms), and to what extent musical training influences those cross-modal mappings. Unlike studies investigating the influence of musicians' specializations on cross-modal mappings of sound—such as pianists' horizontal pitch mappings (Stewart et al.,
<xref rid="B67" ref-type="bibr">2013</xref>
; Lega et al.,
<xref rid="B37" ref-type="bibr">2014</xref>
, Exp. 2; Taylor and Witt,
<xref rid="B71" ref-type="bibr">2014</xref>
)—we are concerned with the influence of more generic musical skills (e.g., the ability to read music notation) acquired in contexts of formal music education, and thus aim to balance our trained participants' main musical activity more carefully than previous studies (Rusconi et al.,
<xref rid="B61" ref-type="bibr">2006</xref>
; Lidji et al.,
<xref rid="B39" ref-type="bibr">2007</xref>
). To our knowledge, this is the first controlled experiment studying adults' gestural responses to a set of pure tones systematically and concurrently varied in pitch, loudness and tempo. Based on the literature reviewed above, we hypothesize the following outcomes:</p>
<list list-type="order">
<list-item>
<p>Pitch is represented on the y-axis (higher elevation for higher pitches); rising–falling pitch contours (convex shapes) are expected to yield greater pitch vs. height associations than falling–rising pitch contours.</p>
</list-item>
<list-item>
<p>Loudness is represented with forward-backward movements along the z-axis and muscular energy (forward movement/more energy for louder sounds), as well as with spatial height when loudness is the only auditory feature being manipulated (higher elevation for louder sounds).</p>
</list-item>
<list-item>
<p>Tempo of pitch change in the auditory stimuli is represented by speed of the hand movements (faster movement for faster tempo) and muscular energy (more energy for faster tempo).</p>
</list-item>
<list-item>
<p>Musical training has an impact such that musically trained participants—due to their formalized engagement with musical parameters (e.g., through notation)—show generally more consistent mappings than musically untrained participants.</p>
</list-item>
</list>
</sec>
</sec>
<sec sec-type="materials|methods" id="s2">
<title>Materials and methods</title>
<sec>
<title>Participants</title>
<p>Sixty-four participants (32 female) took part in the experiment (age:
<italic>M</italic>
= 29.63 years,
<italic>SD</italic>
= 12.49 years, range: 18–74 years). Thirty-two participants (16 female) were classified as musically trained (age:
<italic>M</italic>
= 30.09 years,
<italic>SD</italic>
= 13.66 years, range: 18–74 years), and 32 (16 female) as musically untrained (age:
<italic>M</italic>
= 29.16 years,
<italic>SD</italic>
= 11.39 years, range: 18–67 years). All participants were required to be 18 years or over, right-handed, and must not have been diagnosed with any vision or hearing impairments (except those corrected to normal vision with glasses or contact lenses). To satisfy the “musically trained” category, participants must have played either a keyboard instrument, a string instrument, a wind/brass instrument or been a composer, must have had at least Grade 8 of the ABRSM system (
<ext-link ext-link-type="uri" xlink:href="http://gb.abrsm.org/en/home">http://gb.abrsm.org/en/home</ext-link>
) or an equivalent qualification, and must have spent at least 4 h per week on average playing their respective main instrument or composing. All musically trained participants were balanced by gender and main musical activity. Musically untrained participants must not have played any musical instrument or composed music for the past 6 years, must not have played any instrument for more than 2 years in total, and must not have exceeded Grade 1 ABRSM. Participants were recruited using a college-wide e-mail recruitment system including undergraduates, postgraduates and staff, as well as circulating a call for participants within music conservatoires. All criteria were clearly stated in the recruitment email and checked again with a questionnaire during the experiment. Exceptions included one trained participant who reported playing only 2 h on average per week and one untrained participant who reported engaging in musical activities (“electronics, drums, mixing”) for 7.5 h. Another musically untrained participant had played the guitar for 4 years in total but had stopped playing 14 years ago, and one untrained participant who had played drums for 1 year had only stopped 5 years ago. Since this study is concerned with differences arising from formal training, and none of the musically untrained participants had taken any formal music examination while all musically trained participants were at Grade 8 or above, it was decided to keep all participants for the analysis to ensure a balanced design and sufficient statistical power.</p>
</sec>
<sec>
<title>Stimuli</title>
<p>Stimuli (see Table
<xref ref-type="table" rid="T1">1</xref>
, Figure
<xref ref-type="fig" rid="F1">1</xref>
and Supplementary Material) were synthesized in SuperCollider (Version 3.5.1) and consisted of 21 continually sounding pure tones that varied in frequency, amplitude and tempo. All stimuli were 8 s long. For a pure tone stimulus, pitch height is the subjective quality that covaries with the frequency of the tone, other acoustic parameters held constant. Trough and peak pitches were B2 (123.47 Hz) and D4 (293.67 Hz), respectively, and all but three stimuli (Nos. 1–3) had rising–falling (Nos. 4–12) or falling–rising (Nos. 13–21) pitch contours. Constant amplitude meant 50% of the maximum, whereas stimuli linearly decreasing and increasing in amplitude showed the pattern 90% – 10% – 90% (reaching 10% after 4 s) and stimuli linearly increasing and decreasing in amplitude showed the pattern 10% – 90% – 10% (reaching 90% after 4 s). Given 100% full scale amplitude = 0 dB, 50% = −3.01 dB, 90% = −0.46 dB, and 10% = −10 dB. Stimuli changing in pitch reached the peak (trough) after 3 s, held it for 1 s, then moved in the opposite direction, reaching the trough (peak) after a further 3 s and holding it for the final second. The factors for change in tempo were −0.5 for decelerandi and 0.5 for accelerandi. Each decelerando/accelerando lasted 4 s.</p>
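To make these parameters concrete, here is a minimal numpy sketch (not the authors' SuperCollider code; the sample rate is assumed, and the glide is taken as linear in Hz since the text does not specify the interpolation) reconstructing stimulus No. 4, the rising–falling B2–D4–B2 tone at constant 50% amplitude:

    import numpy as np

    SR = 44100                   # sample rate in Hz (assumed; not stated in the text)
    B2, D4 = 123.47, 293.67      # trough and peak frequencies from the text
    t = np.arange(8 * SR) / SR   # 8 s of sample times

    # Piecewise-linear frequency contour: rise 0-3 s, hold 3-4 s,
    # fall 4-7 s, hold 7-8 s.
    freq = np.interp(t, [0, 3, 4, 7, 8], [B2, D4, D4, B2, B2])

    # Integrate instantaneous frequency to get phase, then synthesize at
    # 50% of full scale (10*log10(0.5) = -3.01 dB, matching the text).
    phase = 2 * np.pi * np.cumsum(freq) / SR
    stimulus = 0.5 * np.sin(phase)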
<table-wrap id="T1" position="float">
<label>Table 1</label>
<caption>
<p>
<bold>Overview of experimental sound stimuli</bold>
.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" rowspan="1" colspan="1">
<bold>No</bold>
.</th>
<th align="left" rowspan="1" colspan="1">
<bold>Frequency (note name)</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>Amplitude</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>Tempo</bold>
</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">1</td>
<td align="left" rowspan="1" colspan="1">Constant (D4)</td>
<td align="left" rowspan="1" colspan="1">Constant</td>
<td align="left" rowspan="1" colspan="1">Not applicable</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">2</td>
<td align="left" rowspan="1" colspan="1">Constant (D4)</td>
<td align="left" rowspan="1" colspan="1">Decreasing–Increasing</td>
<td align="left" rowspan="1" colspan="1">Not applicable</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">3</td>
<td align="left" rowspan="1" colspan="1">Constant (D4)</td>
<td align="left" rowspan="1" colspan="1">Increasing–Decreasing</td>
<td align="left" rowspan="1" colspan="1">Not applicable</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">4</td>
<td align="left" rowspan="1" colspan="1">Rising–Falling (B2–D4–B2)</td>
<td align="left" rowspan="1" colspan="1">Constant</td>
<td align="left" rowspan="1" colspan="1">Equal</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">5</td>
<td align="left" rowspan="1" colspan="1">Rising–Falling (B2–D4–B2)</td>
<td align="left" rowspan="1" colspan="1">Constant</td>
<td align="left" rowspan="1" colspan="1">Decelerando–Decelerando</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">6</td>
<td align="left" rowspan="1" colspan="1">Rising–Falling (B2–D4–B2)</td>
<td align="left" rowspan="1" colspan="1">Constant</td>
<td align="left" rowspan="1" colspan="1">Accelerando–Accelerando</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">7</td>
<td align="left" rowspan="1" colspan="1">Rising–Falling (B2–D4–B2)</td>
<td align="left" rowspan="1" colspan="1">Decreasing–Increasing</td>
<td align="left" rowspan="1" colspan="1">Equal</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">8</td>
<td align="left" rowspan="1" colspan="1">Rising–Falling (B2–D4–B2)</td>
<td align="left" rowspan="1" colspan="1">Decreasing–Increasing</td>
<td align="left" rowspan="1" colspan="1">Decelerando–Decelerando</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">9</td>
<td align="left" rowspan="1" colspan="1">Rising–Falling (B2–D4–B2)</td>
<td align="left" rowspan="1" colspan="1">Decreasing–Increasing</td>
<td align="left" rowspan="1" colspan="1">Accelerando–Accelerando</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">10</td>
<td align="left" rowspan="1" colspan="1">Rising–Falling (B2–D4–B2)</td>
<td align="left" rowspan="1" colspan="1">Increasing–Decreasing</td>
<td align="left" rowspan="1" colspan="1">Equal</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">11</td>
<td align="left" rowspan="1" colspan="1">Rising–Falling (B2–D4–B2)</td>
<td align="left" rowspan="1" colspan="1">Increasing–Decreasing</td>
<td align="left" rowspan="1" colspan="1">Decelerando–Decelerando</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">12</td>
<td align="left" rowspan="1" colspan="1">Rising–Falling (B2–D4–B2)</td>
<td align="left" rowspan="1" colspan="1">Increasing–Decreasing</td>
<td align="left" rowspan="1" colspan="1">Accelerando–Accelerando</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">13</td>
<td align="left" rowspan="1" colspan="1">Falling–Rising (D4–B2–D4)</td>
<td align="left" rowspan="1" colspan="1">Constant</td>
<td align="left" rowspan="1" colspan="1">Equal</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">14</td>
<td align="left" rowspan="1" colspan="1">Falling–Rising (D4–B2–D4)</td>
<td align="left" rowspan="1" colspan="1">Constant</td>
<td align="left" rowspan="1" colspan="1">Decelerando–Decelerando</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">15</td>
<td align="left" rowspan="1" colspan="1">Falling–Rising (D4–B2–D4)</td>
<td align="left" rowspan="1" colspan="1">Constant</td>
<td align="left" rowspan="1" colspan="1">Accelerando–Accelerando</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">16</td>
<td align="left" rowspan="1" colspan="1">Falling–Rising (D4–B2–D4)</td>
<td align="left" rowspan="1" colspan="1">Decreasing–Increasing</td>
<td align="left" rowspan="1" colspan="1">Equal</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">17</td>
<td align="left" rowspan="1" colspan="1">Falling–Rising (D4–B2–D4)</td>
<td align="left" rowspan="1" colspan="1">Decreasing–Increasing</td>
<td align="left" rowspan="1" colspan="1">Decelerando–Decelerando</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">18</td>
<td align="left" rowspan="1" colspan="1">Falling–Rising (D4–B2–D4)</td>
<td align="left" rowspan="1" colspan="1">Decreasing–Increasing</td>
<td align="left" rowspan="1" colspan="1">Accelerando–Accelerando</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">19</td>
<td align="left" rowspan="1" colspan="1">Falling–Rising (D4–B2–D4)</td>
<td align="left" rowspan="1" colspan="1">Increasing–Decreasing</td>
<td align="left" rowspan="1" colspan="1">Equal</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">20</td>
<td align="left" rowspan="1" colspan="1">Falling–Rising (D4–B2–D4)</td>
<td align="left" rowspan="1" colspan="1">Increasing–Decreasing</td>
<td align="left" rowspan="1" colspan="1">Decelerando–Decelerando</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">21</td>
<td align="left" rowspan="1" colspan="1">Falling–Rising (D4–B2–D4)</td>
<td align="left" rowspan="1" colspan="1">Increasing–Decreasing</td>
<td align="left" rowspan="1" colspan="1">Accelerando–Accelerando</td>
</tr>
</tbody>
</table>
</table-wrap>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption>
<p>
<bold>Overview of frequency and amplitude contours of experimental sound stimuli</bold>
. All x-axes represent time (length of stimuli: 8 s). Lowest/highest frequency: 123.47/293.67 Hz. Equal amplitude means 50% of the maximum, decreasing amplitude means 90–10% of the maximum and increasing amplitude means 10–90% of the maximum. Freq, log frequency (Hz); Amp, amplitude.</p>
</caption>
<graphic xlink:href="fpsyg-05-00789-g0001"></graphic>
</fig>
</sec>
<sec>
<title>Motion capture</title>
<p>A Microsoft
<sup>®</sup>
Kinect™ was used to capture participants' hand movements. Further technical details (e.g., spatial resolution) can be found at
<ext-link ext-link-type="uri" xlink:href="http://openkinect.org/wiki/Imaging_Information">http://openkinect.org/wiki/Imaging_Information</ext-link>
. The bespoke software for the purposes of this experiment was developed in Processing v1.2.1 (Fry and Reas,
<xref rid="B24" ref-type="bibr">2011</xref>
). The whole experimental session was recorded with two video cameras (Panasonic HDC-SD 700/800). Participants also held a Nintendo
<sup>®</sup>
Wii™ Remote Controller in the same hand that was performing the gestures. If the latter was shaken strongly enough for the acceleration threshold of 10 m/s
<sup>2</sup>
to be exceeded, the fast shaking hand movements were recorded by the software (see Section Data Analysis).</p>
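<p>The logging criterion can be illustrated with a short, hypothetical Python sketch (the original logging ran in the bespoke Processing software): a frame counts as a shaking event whenever the change in acceleration between consecutive Wii™ Remote samples exceeds 10 m/s². Treating that change as the difference of acceleration magnitudes is an assumption, as the axis handling is not specified.</p>
<preformat>
import numpy as np

THRESHOLD = 10.0  # m/s^2, acceleration-difference threshold from the experiment

def shaking_events(timestamps, accel_xyz):
    """Return the timestamps at which the frame-to-frame change in
    acceleration magnitude exceeds THRESHOLD.

    timestamps : (N,) array of seconds
    accel_xyz  : (N, 3) array of accelerometer readings in m/s^2
    """
    magnitude = np.linalg.norm(accel_xyz, axis=1)
    jump = np.abs(np.diff(magnitude))        # change between consecutive frames
    return timestamps[1:][jump > THRESHOLD]
</preformat>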
</sec>
<sec>
<title>Procedure</title>
<p>After signing the consent form, participants read detailed instructions and any remaining uncertainties were discussed with the experimenter. Participants were introduced to the Kinect™ and Wii™ Remote Controller technologies, made aware of the experimental space, and familiarized with the noise-canceling headphones to be worn during the experiment (Bose QuietComfort
<sup>®</sup>
15 Acoustic Noise Canceling
<sup>®</sup>
). The volume of the stimuli was set at a comfortable level by the experimenter and kept constant for all participants. The participants' task was to represent the sound stimuli with their right hand, in which they held the Wii™ Remote Controller; it was stressed that (a) there were no “right” or “wrong” responses, (b) responses should be consistent, such that if the same sound occurred twice participants should make the same movement, and (c) participants should try to represent gesturally all sound characteristics they were able to identify.</p>
<p>The whole experiment consisted of four parts, including musical excerpts and a real-time visualization on a screen in front of the participants (Küssner,
<xref rid="B34" ref-type="bibr">2014</xref>
). Here, we will only report the results of the first part, in which participants gestured in response to pure tones without seeing a visualization in front of them. After a short calibration procedure with the Kinect™ to identify and track the participants' right hand, a summary of the instructions appeared on the screen. Once participants were ready, they informed the experimenter, who was seated behind another screen and was not able to see their movements, and the first block—practice trials consisting of five pure tones (see Supplementary Material)—was started. If participants did not have any further questions after the practice trials (they could repeat the practice trials as often as they wished), the second block consisting of all 21 pure tones was started. The presentation order of stimuli within the blocks was randomized. Participants were presented with each stimulus twice consecutively. The first time, they were supposed to listen only: 2 s prior to the stimulus onset the instruction “Get ready to LISTEN. X stimuli left. [countdown]” appeared in the upper left corner of the screen, informing participants about the number of stimuli left in this block (X) and starting a short countdown. The second time, participants were supposed to represent the sound stimulus gesturally while it was played. The instruction “Get ready to GESTURE. [countdown]” appeared and participants were again prepared for the onset of the stimulus with a countdown. This procedure had been approved by the College Research Ethics Committee (REP-H/10/11-13).</p>
</sec>
<sec>
<title>Data analysis</title>
<p>The sound features—frequency in Hz and estimated loudness in sone, sampled at 20 Hz each—were extracted with Praat version 5.3.15 (Boersma and Weenink,
<xref rid="B4" ref-type="bibr">2012</xref>
). Frequency values were log-transformed to account for human perception of pitch, as is common practice in psychophysical experiments (e.g., see Micheyl et al.,
<xref rid="B47" ref-type="bibr">2006</xref>
). Both log-transformed frequency values and loudness values were then standardized (
<italic>M</italic>
= 0,
<italic>SD</italic>
= 1) per sound stimulus. The Kinect™ data—X, Y and Z coordinates sampled at ca. 15 Hz [mean frame length was 66.24 ms (
<italic>SD</italic>
= 4.95 ms) and median 68 ms]—were extracted together with their timestamps. All three spatial coordinates were then standardized (
<italic>M</italic>
= 0,
<italic>SD</italic>
= 1) per sound stimulus. Next, sound features were linearly interpolated to realign them with the movement data at the timestamps of the Kinect™ data, creating a matrix with six columns (timestamp, frequency, loudness, X, Y, Z) per stimulus.</p>
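<p>As a minimal sketch, the preprocessing chain described above could be implemented as follows in Python/NumPy (the function and array names are hypothetical; the original pipeline combined Praat output with bespoke scripts).</p>
<preformat>
import numpy as np

def zscore(x):
    # Standardize to M = 0, SD = 1 per stimulus.
    return (x - x.mean()) / x.std()

def align_features(sound_t, freq_hz, loud_sone, kinect_t, xyz):
    """Build the six-column matrix (timestamp, frequency, loudness, X, Y, Z)
    for one stimulus, resampled at the Kinect timestamps."""
    log_f = zscore(np.log(freq_hz))          # log-transform, then standardize
    loud = zscore(loud_sone)
    # Linear interpolation realigns the 20 Hz sound features
    # with the ~15 Hz movement frames.
    f_i = np.interp(kinect_t, sound_t, log_f)
    l_i = np.interp(kinect_t, sound_t, loud)
    pos = (xyz - xyz.mean(axis=0)) / xyz.std(axis=0)  # standardize X, Y, Z
    return np.column_stack([kinect_t, f_i, l_i, pos])
</preformat>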
<p>As an indicator of the degree of association between sound and movement features, Spearman's rho—a non-parametric correlation coefficient—was calculated. This measure has been suggested for time-dependent data by Schubert (
<xref rid="B62" ref-type="bibr">2002</xref>
), and has been used by various scholars for similar datasets (e.g., Vines et al.,
<xref rid="B76" ref-type="bibr">2006</xref>
; Nymoen et al.,
<xref rid="B55" ref-type="bibr">2013</xref>
; Küssner and Leech-Wilkinson,
<xref rid="B35" ref-type="bibr">2014</xref>
). It has been argued that caution is needed when interpreting the size of correlation coefficients derived from time-dependent data. For Spearman's rho the point is even more clear-cut: regardless of time-dependence, the absolute size of the coefficient is not directly interpretable, because the rank-based statistic carries no variance-explained interpretation. Thus, although the significance of a single Spearman's rho derived from a time-dependent dataset might not be meaningful, comparing several such coefficients can still be informative.</p>
<p>For the purpose of this analysis, global and local Spearman's correlation coefficients were computed. The number of data points for a local correlation was
<italic>N</italic>
= 119, and for a global correlation
<italic>N</italic>
= 2142. Only sound stimuli Nos. 4–21 were entered into the analysis (unless stated otherwise) since stimuli Nos. 1–3 contain constant features which cannot be entered into a correlation analysis. Note that the loudness was only genuinely equal in stimulus No. 1. Due to the equal-loudness-level contours for pure tones (Suzuki and Takeshima,
<xref rid="B69" ref-type="bibr">2004</xref>
) and the use of loudness measured in sone, stimuli Nos. 4–6 and 13–15, whose amplitude was constant, could be entered into the analysis because their perceived loudness varied marginally according to the pitch contour.</p>
<p>“Global” denotes the correlation between sound features of all stimuli of a single participant and their accompanying hand movements (e.g., global frequency–Y correlation coefficient of participant
<italic>k</italic>
). “Local” denotes the correlation between sound features of a particular stimulus of a single participant and their accompanying hand movements (e.g., local frequency–Y correlation coefficient of sound stimulus
<italic>s</italic>
of participant
<italic>k</italic>
).</p>
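<p>A brief sketch may make the local/global distinction concrete. Assuming one aligned six-column matrix per stimulus, as built above, scipy's spearmanr (standing in here for the original SPSS computation) yields a local coefficient per stimulus and a global coefficient over all analysed stimuli of a participant.</p>
<preformat>
import numpy as np
from scipy.stats import spearmanr

def local_rho(matrix, sound_col=1, move_col=4):
    # One stimulus (N = 119 frames): e.g., frequency (col 1) vs. Y (col 4).
    rho, _ = spearmanr(matrix[:, sound_col], matrix[:, move_col])
    return rho

def global_rho(matrices, sound_col=1, move_col=4):
    # All 18 analysed stimuli (Nos. 4-21) of one participant,
    # concatenated: 18 x 119 = 2142 data points.
    stacked = np.vstack(matrices)
    rho, _ = spearmanr(stacked[:, sound_col], stacked[:, move_col])
    return rho
</preformat>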
<p>The following analytical steps were applied to investigate gestural representations of pitch and loudness and carried out in IBM SPSS Statistics (Version 20). First, the absolute global correlation coefficients between frequency and loudness and, respectively, the three spatial axes X, Y and Z, were entered into two ANOVAs with the within-subjects factor “space” (X/Y/Z) to identify the three strongest correlations between each variable and each of the three axes (e.g., frequency and Y), which was then examined further in the subsequent steps of the analysis. Secondly, the original (rather than absolute) global correlation coefficients were examined to identify the direction of movement. Thirdly, the effects of interactions between musical parameters (pitch contour, loudness contour and tempo) on the size of the correlations were investigated by means of local correlation coefficients, resulting in ANOVAs with the between-subjects factor “training” (musically trained/musically untrained) and the within-subjects factors “pitch” (rising–falling/falling–rising), “loudness” (constant amplitude/decreasing–increasing/increasing–decreasing) and “tempo” (equal/decelerando–decelerando/accelerando–accelerando). All
<italic>post-hoc</italic>
pairwise comparisons were Sidak-corrected. Fourthly, to investigate whether muscular energy of the hand was associated with loudness and tempo variations in the stimuli, data from the Wii™ Remote Controller were collected when the difference in acceleration between the current and previous frame exceeded 10 m/s
<sup>2</sup>
. That is, when participants shook the Controller (henceforth “shaking event”) strongly enough, the software recorded a shaking event with a timestamp. Fifthly, to investigate whether the speed of the hand movement was associated with tempo variations in the stimuli, the mean velocity in response to each quarter of a sound stimulus was the dependent variable of an ANOVA with the within-subjects factors “half” (1st/2nd half of a stimulus), “quarter” (1st/2nd quarter of each half), “pitch” (up/down), “loudness” (constant amplitude/decreasing/increasing) and “tempo” (equal/decelerando/accelerando), and the between-subjects factor “training.” Whenever the assumption of sphericity was violated in repeated-measures ANOVAs, the degrees of freedom were adjusted using the Greenhouse-Geisser correction. Any follow-up
<italic>t</italic>
-tests were Bonferroni-corrected.</p>
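<p>The structure of the within-subjects part of this analysis can be sketched in Python with statsmodels' AnovaRM, standing in for SPSS (column names are hypothetical). Note that AnovaRM covers only the within-subjects factors and does not apply the Greenhouse-Geisser correction; the between-subjects factor “training” and the sphericity correction would require a mixed-design procedure, as in the original analysis.</p>
<preformat>
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format table with one aggregated local frequency-Y
# coefficient per participant x pitch x loudness x tempo cell (2 x 3 x 3 = 18
# cells, matching stimuli Nos. 4-21): columns subject, pitch, loudness,
# tempo, rho.
df = pd.read_csv("local_freqY_coefficients.csv")

aov = AnovaRM(df, depvar="rho", subject="subject",
              within=["pitch", "loudness", "tempo"]).fit()
print(aov)
</preformat>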
</sec>
</sec>
<sec sec-type="results" id="s3">
<title>Results</title>
<sec>
<title>Pitch</title>
<sec>
<title>Absolute global correlation analysis</title>
<p>There was a main effect of “space” [
<italic>F</italic>
<sub>(1.75,110.47)</sub>
= 192.87,
<italic>p</italic>
< 0.001, partial η
<sup>2</sup>
= 0.75], and all three Sidak-corrected pairwise comparisons revealed significant differences (all
<italic>p</italic>
< 0.001). Correlations with Y (
<italic>M</italic>
= 0.68, s.e.m. = 0.03) were greater than with X (
<italic>M</italic>
= 0.14, s.e.m. = 0.02) and with Z (
<italic>M</italic>
= 0.25, s.e.m. = 0.02), and correlations with Z were greater than with X. All 64 participants showed positive correlation coefficients, suggesting that they moved their hand upwards with increasing pitch and downwards with decreasing pitch. We thus shift our focus to the analysis of local correlations of frequency–Y.</p>
</sec>
<sec>
<title>Interactions between musical parameters—local correlations of frequency–Y</title>
<p>Results revealed main effects of “training” [
<italic>F</italic>
<sub>(1, 62)</sub>
= 18.64,
<italic>p</italic>
< 0.001, partial η
<sup>2</sup>
= 0.23], “pitch” [
<italic>F</italic>
<sub>(1, 62)</sub>
= 5.04,
<italic>p</italic>
= 0.028, partial η
<sup>2</sup>
= 0.08], “loudness” [
<italic>F</italic>
<sub>(2, 124)</sub>
= 3.44,
<italic>p</italic>
= 0.035, partial η
<sup>2</sup>
= 0.05], and “tempo” [
<italic>F</italic>
<sub>(1.55,96.34)</sub>
= 8.29,
<italic>p</italic>
= 0.001, partial η
<sup>2</sup>
= 0.12]. The positive association between pitch and height was larger for musically trained (
<italic>M</italic>
= 0.79, s.e.m. = 0.03) compared to untrained participants (
<italic>M</italic>
= 0.58, s.e.m. = 0.03); rising–falling pitch contours (
<italic>M</italic>
= 0.71, s.e.m. = 0.02) led to higher frequency-Y correlation coefficients than falling–rising pitch contours (
<italic>M</italic>
= 0.65, s.e.m. = 0.03); constant amplitude (
<italic>M</italic>
= 0.71, s.e.m. = 0.02) gave rise to higher frequency-Y correlation coefficients than decreasing–increasing loudness contours (
<italic>M</italic>
= 0.67, s.e.m. = 0.03,
<italic>p</italic>
= 0.042); and equal tempo (
<italic>M</italic>
= 0.73, s.e.m. = 0.03) compared to both “accelerando–accelerando” (
<italic>M</italic>
= 0.64, s.e.m. = 0.03,
<italic>p</italic>
= 0.004) and “decelerando–decelerando” (
<italic>M</italic>
= 0.68, s.e.m. = 0.03,
<italic>p</italic>
= 0.018) resulted in higher frequency–Y correlation coefficients. Primary response data—gestural trajectories along the y-axis in response to sound stimulus No. 4 (rising–falling pitch)—are shown for a subsample of 16 randomly chosen musically trained participants (Figure
<xref ref-type="fig" rid="F2">2</xref>
left) and 16 randomly chosen musically untrained participants (Figure
<xref ref-type="fig" rid="F2">2</xref>
right).</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption>
<p>
<bold>Gestural trajectories along the y-axis in response to sound stimulus rising and falling in pitch (No. 4) by a subsample of 16 randomly chosen musically trained participants (left) and 16 randomly chosen musically untrained participants (right)</bold>
.</p>
</caption>
<graphic xlink:href="fpsyg-05-00789-g0002"></graphic>
</fig>
<p>Several two- and three-way interactions were observed. There was a significant interaction effect between “pitch” and “training” [
<italic>F</italic>
<sub>(1, 62)</sub>
= 4.12,
<italic>p</italic>
= 0.047, partial η
<sup>2</sup>
= 0.06], revealing that the observed main effect of “pitch” is chiefly due to musically untrained participants' lower frequency–Y correlation coefficients when presented with falling–rising pitch contours (
<italic>M</italic>
= 0.52, s.e.m. = 0.04) compared to rising–falling pitch contours (
<italic>M</italic>
= 0.64, s.e.m. = 0.03),
<italic>t</italic>
<sub>(31)</sub>
= 2.32,
<italic>p</italic>
= 0.027,
<italic>r</italic>
= 0.38. In comparison, musically trained participants' frequency–Y correlation coefficients did not differ significantly [rising–falling pitch contours:
<italic>M</italic>
= 0.79, s.e.m. = 0.03; falling–rising pitch contours:
<italic>M</italic>
= 0.78, s.e.m. = 0.04;
<italic>t</italic>
<sub>(31)</sub>
= 0.28,
<italic>p</italic>
= 0.781,
<italic>r</italic>
= 0.05], see Figure
<xref ref-type="fig" rid="F3">3</xref>
.</p>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption>
<p>
<bold>Influence of interaction between musical training and pitch contour on local frequency–Y correlations</bold>
.
<sup>*</sup>
indicates
<italic>p</italic>
< 0.05, ns, not significant.</p>
</caption>
<graphic xlink:href="fpsyg-05-00789-g0003"></graphic>
</fig>
<p>There were also significant interaction effects between “tempo” and “training” [
<italic>F</italic>
<sub>(2, 124)</sub>
= 10.10,
<italic>p</italic>
< 0.001, partial η
<sup>2</sup>
= 0.14], between “pitch” and “tempo” [
<italic>F</italic>
<sub>(1.46,90.44)</sub>
= 20.00,
<italic>p</italic>
< 0.001, partial η
<sup>2</sup>
= 0.24], between “pitch” and “loudness” [
<italic>F</italic>
<sub>(1.73,107.24)</sub>
= 5.30,
<italic>p</italic>
= 0.009, partial η
<sup>2</sup>
= 0.08], between “pitch,” “tempo” and “loudness” [
<italic>F</italic>
<sub>(4, 248)</sub>
= 3.85,
<italic>p</italic>
= 0.005, partial η
<sup>2</sup>
= 0.06], and between “pitch,” “tempo” and “training” [
<italic>F</italic>
<sub>(1.46,90.44)</sub>
= 4.45,
<italic>p</italic>
= 0.024, partial η
<sup>2</sup>
= 0.07]. Running nine follow-up
<italic>t</italic>
-tests (alpha level: 0.0056) to compare frequency–Y correlation coefficients of rising–falling and falling–rising pitch contours across different loudness and tempo profiles, two significant effects were found. Combined with constant amplitude and the “accelerando–accelerando” pattern, the rising–falling pitch contour (
<italic>M</italic>
= 0.76, s.e.m. = 0.04) led to higher frequency–Y correlation coefficients than the falling–rising pitch contour [
<italic>M</italic>
= 0.59, s.e.m. = 0.05;
<italic>t</italic>
<sub>(63)</sub>
= 3.05,
<italic>p</italic>
= 0.003,
<italic>r</italic>
= 0.36]. Similarly, combined with increasing–decreasing amplitude and the “accelerando–accelerando” pattern, the rising–falling pitch contour (
<italic>M</italic>
= 0.81, s.e.m. = 0.03) gave rise to higher frequency–Y correlation coefficients than the falling–rising pitch contour [
<italic>M</italic>
= 0.46, s.e.m. = 0.06;
<italic>t</italic>
<sub>(63)</sub>
= 6.21,
<italic>p</italic>
< 0.001,
<italic>r</italic>
= 0.62]. Although not significant at the corrected alpha level [
<italic>t</italic>
<sub>(63)</sub>
= 2.75,
<italic>p</italic>
= 0.008,
<italic>r</italic>
= 0.33], the same trend was observed for stimuli with decreasing–increasing amplitude and the “accelerando–accelerando” pattern.</p>
<p>Further support for the observation that “accelerando–accelerando” patterns increase the difference in frequency–Y correlation coefficients between rising–falling and falling–rising pitch contours is provided by breaking down the interaction between “pitch,” “tempo” and “training.” This revealed that both musically trained [
<italic>t</italic>
<sub>(31)</sub>
= 2.82,
<italic>p</italic>
= 0.008,
<italic>r</italic>
= 0.45] and musically untrained participants [
<italic>t</italic>
<sub>(31)</sub>
= 5.10,
<italic>p</italic>
< 0.001,
<italic>r</italic>
= 0.68] showed higher frequency–Y correlation coefficients when “accelerando–accelerando” patterns were paired with rising–falling (trained:
<italic>M</italic>
= 0.84, s.e.m. = 0.02; untrained:
<italic>M</italic>
= 0.66, s.e.m. = 0.05) compared to falling–rising pitch contours (trained:
<italic>M</italic>
= 0.75, s.e.m. = 0.04; untrained:
<italic>M</italic>
= 0.31, s.e.m. = 0.06).</p>
</sec>
</sec>
<sec>
<title>Loudness</title>
<sec>
<title>Absolute global correlation analysis</title>
<p>Results revealed a main effect of “space” [
<italic>F</italic>
<sub>(2, 126)</sub>
= 108.49,
<italic>p</italic>
< 0.001, partial η
<sup>2</sup>
= 0.63]. Whereas correlations with Y (
<italic>M</italic>
= 0.33, s.e.m. = 0.01) were greater than with X (
<italic>M</italic>
= 0.10, s.e.m. = 0.01) and with Z (
<italic>M</italic>
= 0.14, s.e.m. = 0.01; both
<italic>p</italic>
< 0.001), correlations with Z did not differ from correlations with X (
<italic>p</italic>
= 0.170). Apart from one musically untrained participant (ρ = −0.001), all participants showed positive correlations between loudness and height, suggesting that they moved their arm upwards with increasing loudness and downwards with decreasing loudness. The question arises, however, whether participants indeed chose to represent loudness with the y-axis, or whether this is a spurious effect caused by interactions between pitch and loudness in the stimuli. Recall that stimuli Nos. 10–12 and 16–18 consist, respectively, of concurrently increasing–decreasing and decreasing–increasing pitch and loudness contours, whereas stimuli Nos. 7–9 and 19–21 consist of opposing pitch and loudness contours (see Figure
<xref ref-type="fig" rid="F1">1</xref>
). Thus, it is vital to consider the local correlations to identify whether the positive loudness–Y correlation values are in fact a side effect of the frequency–Y correlation coefficients. If so, there should be a significant interaction effect between “pitch” and “loudness,” resulting in negative loudness–Y correlations for stimuli whose pitch contour is rising–falling (falling–rising) while the loudness contour is concurrently decreasing–increasing (increasing–decreasing).</p>
</sec>
<sec>
<title>Interactions between musical parameters—local correlations of loudness–Y</title>
<p>Although there are main effects of “training,” “loudness” and “tempo,” as well as two-way interactions between “loudness” and “training,” “tempo” and “training,” “pitch” and “tempo,” and “loudness” and “tempo,” the main focus here is on a highly significant interaction between “pitch” and “loudness” [
<italic>F</italic>
<sub>(1.46,90.60)</sub>
= 481.88,
<italic>p</italic>
< 0.001, partial η
<sup>2</sup>
= 0.89]. Inspecting the interaction graph (see Figure
<xref ref-type="fig" rid="F4">4</xref>
), it becomes obvious that participants map pitch, not loudness, onto the y-axis. When rising–falling pitch contours are paired with decreasing–increasing loudness contours the loudness–Y correlation coefficients are negative (
<italic>M</italic>
= −0.45, s.e.m. = 0.03), and when paired with increasing–decreasing loudness contours they are positive (
<italic>M</italic>
= 0.71, s.e.m. = 0.03). Similarly, when falling–rising pitch contours are paired with increasing–decreasing loudness contours the loudness–Y correlation coefficients are negative (
<italic>M</italic>
= −0.36, s.e.m. = 0.04), and when paired with decreasing–increasing loudness contours they are positive (
<italic>M</italic>
= 0.65, s.e.m. = 0.04). Also the slight decrease of loudness–Y correlation coefficients from rising–falling (
<italic>M</italic>
= 0.75, s.e.m. = 0.03) to falling–rising pitch contours (
<italic>M</italic>
= 0.67, s.e.m. = 0.03) when the amplitude is equal fits into the picture, as it reflects the main effect of “pitch” for frequency–Y correlation coefficients.</p>
<fig id="F4" position="float">
<label>Figure 4</label>
<caption>
<p>
<bold>Spurious loudness vs. height association: influence of interaction between pitch and loudness contour on local loudness–Y correlations</bold>
.</p>
</caption>
<graphic xlink:href="fpsyg-05-00789-g0004"></graphic>
</fig>
<p>Given the clear results obtained from the interaction between “pitch” and “loudness,” the focus now shifts to stimuli without change in pitch, to investigate whether associations between loudness and height exist when loudness is the only auditory feature being manipulated.</p>
<p>Running a repeated-measures ANOVA on loudness–Y correlation coefficients of stimuli Nos. 2 and 3 with the within-subjects factor “loudness” and the between-subjects factor “training,” a main effect of “loudness” was observed,
<italic>F</italic>
<sub>(1, 62)</sub>
= 4.86,
<italic>p</italic>
= 0.031, partial η
<sup>2</sup>
= 0.07. The increasing–decreasing loudness contour (
<italic>M</italic>
= 0.36, s.e.m. = 0.05) gave rise to higher loudness–Y correlation coefficients compared to the decreasing–increasing loudness contour (
<italic>M</italic>
= 0.17, s.e.m. = 0.07).</p>
</sec>
<sec>
<title>Local correlations of loudness–Z for stimuli without change in pitch</title>
<p>Since the association between loudness and the z-axis for stimuli concurrently varied in pitch, loudness and tempo was too small to be interpreted meaningfully (mean absolute ρ = 0.14; see Section Absolute global correlation analysis), the focus is shifted to stimuli without change in pitch to investigate whether there was any association between loudness and distance from the body when loudness was the only auditory feature being manipulated. Results revealed main effects of “training” [
<italic>F</italic>
<sub>(1, 62)</sub>
= 6.86,
<italic>p</italic>
= 0.011, partial η
<sup>2</sup>
= 0.10] and “loudness” [
<italic>F</italic>
<sub>(1, 62)</sub>
= 23.65,
<italic>p</italic>
< 0.001, partial η
<sup>2</sup>
= 0.28]. Loudness–Z correlation coefficients were significantly larger for musically trained (
<italic>M</italic>
= 0.33, s.e.m. = 0.07) compared to untrained participants (
<italic>M</italic>
= 0.09, s.e.m. = 0.07), and significantly larger when the loudness was increasing–decreasing (
<italic>M</italic>
= 0.43, s.e.m. = 0.06) compared to decreasing–increasing (
<italic>M</italic>
= −0.01, s.e.m. = 0.07). This suggests that only musically trained participants associated loudness with the z-axis, and only if the loudness contour was increasing–decreasing.</p>
</sec>
<sec>
<title>Association between muscular energy (shaking events) and loudness</title>
<p>The question arises whether participants did represent loudness at all when musical parameters were varied concurrently, since we have just shown that participants used neither height (representation of pitch takes precedence) nor distance (correlation coefficients too low). According to our hypotheses we expected an association between loudness and muscular energy (operationalized as shaking hand movements). An ANOVA was run to investigate whether the number of shaking events significantly changed with increasing or decreasing loudness. Results revealed a main effect of “loudness” [
<italic>F</italic>
<sub>(1, 62)</sub>
= 11.02,
<italic>p</italic>
= 0.002, partial η
<sup>2</sup>
= 0.15], indicating that more shaking events occurred while the loudness was increasing (
<italic>M</italic>
= 30.83, s.e.m. = 9.24) than while it was decreasing (
<italic>M</italic>
= 16.72, s.e.m. = 5.57). The large variation in the data suggests substantial inter-individual differences.</p>
</sec>
</sec>
<sec>
<title>How pitch, loudness, tempo and interactions thereof influence the speed of hand movement when representing sound gesturally</title>
<p>Muscular energy was also hypothesized to be associated with tempo. Results of an ANOVA investigating whether the number of shaking events significantly changed with increasing or decreasing tempo were non-significant [
<italic>F</italic>
<sub>(1, 62)</sub>
= 1.53,
<italic>p</italic>
= 0.220, partial η
<sup>2</sup>
= 0.02]. Thus, our hypothesis pertaining to muscular energy and tempo is rejected and the focus is shifted to speed of hand movement.</p>
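<p>The dependent variable of the following analysis, mean speed per quarter of a stimulus, can be derived from the position frames roughly as follows: a sketch under the assumption that speed is the frame-to-frame Euclidean displacement divided by the frame interval (function names are hypothetical).</p>
<preformat>
import numpy as np

def quarter_speeds(timestamps, xyz, n_quarters=4):
    """Mean hand speed in each quarter of one 8-s stimulus.

    timestamps : (N,) seconds; xyz : (N, 3) positions.
    """
    step = np.diff(timestamps)
    dist = np.linalg.norm(np.diff(xyz, axis=0), axis=1)
    speed = dist / step                        # instantaneous speed per frame
    mid_t = (timestamps[:-1] + timestamps[1:]) / 2.0
    edges = np.linspace(timestamps[0], timestamps[-1], n_quarters + 1)
    bins = np.digitize(mid_t, edges[1:-1])     # assign each frame to a quarter
    return np.array([speed[bins == q].mean() for q in range(n_quarters)])
</preformat>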
<p>Only interaction effects of the ANOVA involving at least the factors “quarter” and either “pitch,” “loudness” or “tempo” will be reported here since the aim is to analyse how changes of speed across either half of a sound stimulus are affected by changes in pitch, loudness and tempo. There was an interaction between “quarter” and “tempo” [
<italic>F</italic>
<sub>(2, 124)</sub>
= 51.10,
<italic>p</italic>
< 0.001, partial η
<sup>2</sup>
= 0.45], indicating that when tempo was equal or decreasing across two quarters, the speed of hand movement decreased [equal:
<italic>t</italic>
<sub>(63)</sub>
= 5.73,
<italic>p</italic>
< 0.001,
<italic>r</italic>
= 0.59; decelerando:
<italic>t</italic>
<sub>(63)</sub>
= 9.17,
<italic>p</italic>
< 0.001,
<italic>r</italic>
= 0.76]. However, when tempo increased across two quarters, there was no significant change in speed of hand movement [
<italic>t</italic>
<sub>(63)</sub>
= −1.56,
<italic>p</italic>
= 0.123,
<italic>r</italic>
= 0.19].</p>
<p>Three- and four-way interaction effects further qualified the interaction between “quarter” and “tempo.” There was a significant interaction between “quarter,” “tempo” and “training” [
<italic>F</italic>
<sub>(2, 124)</sub>
= 8.58,
<italic>p</italic>
< 0.001, partial η
<sup>2</sup>
= 0.12], revealing that the non-significant result for the accelerando pattern is due to musically untrained participants' lack of increase in speed. While musically trained participants' increase in speed across two quarters is significant when tempo is accelerating [
<italic>t</italic>
<sub>(31)</sub>
= −2.54,
<italic>p</italic>
= 0.016,
<italic>r</italic>
= 0.42], there is no difference for untrained participants [
<italic>t</italic>
<sub>(31)</sub>
= 0.68,
<italic>p</italic>
= 0.50,
<italic>r</italic>
= 0.12], as shown in Table
<xref ref-type="table" rid="T2">2</xref>
.</p>
<table-wrap id="T2" position="float">
<label>Table 2</label>
<caption>
<p>
<bold>Mean speed of hand movement for the interaction quarter × tempo × training</bold>
.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" rowspan="1" colspan="1">
<bold>Musical training</bold>
</th>
<th align="center" colspan="6" rowspan="1">
<bold>Tempo</bold>
</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="center" colspan="2" rowspan="1">
<bold>Equal</bold>
</td>
<td align="center" colspan="2" rowspan="1">
<bold>Decelerando</bold>
</td>
<td align="center" colspan="2" rowspan="1">
<bold>Accelerando</bold>
</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="center" rowspan="1" colspan="1">
<bold>1st</bold>
</td>
<td align="center" rowspan="1" colspan="1">
<bold>2nd</bold>
</td>
<td align="center" rowspan="1" colspan="1">
<bold>1st</bold>
</td>
<td align="center" rowspan="1" colspan="1">
<bold>2nd</bold>
</td>
<td align="center" rowspan="1" colspan="1">
<bold>1st</bold>
</td>
<td align="center" rowspan="1" colspan="1">
<bold>2nd</bold>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Trained</td>
<td align="center" rowspan="1" colspan="1">0.28 (0.02)</td>
<td align="center" rowspan="1" colspan="1">0.23 (0.01)</td>
<td align="center" rowspan="1" colspan="1">0.30 (0.02)</td>
<td align="center" rowspan="1" colspan="1">0.21 (0.01)</td>
<td align="center" rowspan="1" colspan="1">0.23 (0.02)</td>
<td align="center" rowspan="1" colspan="1">0.25 (0.02)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Untrained</td>
<td align="center" rowspan="1" colspan="1">0.27 (0.03)</td>
<td align="center" rowspan="1" colspan="1">0.24 (0.03)</td>
<td align="center" rowspan="1" colspan="1">0.29 (0.03)</td>
<td align="center" rowspan="1" colspan="1">0.24 (0.03)</td>
<td align="center" rowspan="1" colspan="1">0.24 (0.02)</td>
<td align="center" rowspan="1" colspan="1">0.24 (0.02)</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p>
<italic>Values in brackets are standard errors of the mean. 1st and 2nd refer to the averaged speed of quarters 1 and 3 and quarters 2 and 4, respectively</italic>
.</p>
</table-wrap-foot>
</table-wrap>
<p>There was also an interaction between “half,” “quarter,” “pitch” and “tempo” [
<italic>F</italic>
<sub>(2, 124)</sub>
= 3.16,
<italic>p</italic>
= 0.046, partial η
<sup>2</sup>
= 0.05] which was broken down by running one ANOVA for each half. Whereas the first half revealed no significant interaction [
<italic>F</italic>
<sub>(2, 124)</sub>
= 1.40,
<italic>p</italic>
= 0.252], the second half showed a significant interaction between “quarter,” “pitch” and “tempo” [
<italic>F</italic>
<sub>(2, 124)</sub>
= 3.54,
<italic>p</italic>
= 0.032, partial η
<sup>2</sup>
= 0.05]. Comparing the speed of hand movement across the second half with follow-up
<italic>t</italic>
-tests (alpha level: 0.0083), it was revealed that (a) stimuli with equal tempo led to a decrease in speed when pitch was rising [
<italic>t</italic>
<sub>(63)</sub>
= 3.46,
<italic>p</italic>
= 0.001,
<italic>r</italic>
= 0.40], but not when pitch was falling [
<italic>t</italic>
<sub>(63)</sub>
= 2.03,
<italic>p</italic>
= 0.046,
<italic>r</italic>
= 0.25], (b) stimuli with decreasing tempo led to a decrease in speed when pitch was falling [
<italic>t</italic>
<sub>(63)</sub>
= 6.44,
<italic>p</italic>
< 0.001,
<italic>r</italic>
= 0.63] and when pitch was rising [
<italic>t</italic>
<sub>(63)</sub>
= 5.46,
<italic>p</italic>
< 0.001,
<italic>r</italic>
= 0.57], and (c) stimuli with increasing tempo led to an increase in speed only when pitch was rising [
<italic>t</italic>
<sub>(63)</sub>
= −3.20,
<italic>p</italic>
= 0.002,
<italic>r</italic>
= 0.37], but not when pitch was falling [
<italic>t</italic>
<sub>(63)</sub>
= −0.57,
<italic>p</italic>
= 0.572,
<italic>r</italic>
= 0.07], as shown in Table
<xref ref-type="table" rid="T3">3</xref>
.</p>
<table-wrap id="T3" position="float">
<label>Table 3</label>
<caption>
<p>
<bold>Mean speed of hand movement for the interaction quarter × pitch × tempo (second half of stimuli)</bold>
.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" rowspan="1" colspan="1">
<bold>Pitch</bold>
</th>
<th align="center" colspan="6" rowspan="1">
<bold>Tempo</bold>
</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="center" colspan="2" rowspan="1">
<bold>Equal</bold>
</td>
<td align="center" colspan="2" rowspan="1">
<bold>Decelerando</bold>
</td>
<td align="center" colspan="2" rowspan="1">
<bold>Accelerando</bold>
</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="center" rowspan="1" colspan="1">
<bold>1st</bold>
</td>
<td align="center" rowspan="1" colspan="1">
<bold>2nd</bold>
</td>
<td align="center" rowspan="1" colspan="1">
<bold>1st</bold>
</td>
<td align="center" rowspan="1" colspan="1">
<bold>2nd</bold>
</td>
<td align="center" rowspan="1" colspan="1">
<bold>1st</bold>
</td>
<td align="center" rowspan="1" colspan="1">
<bold>2nd</bold>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Rising</td>
<td align="center" rowspan="1" colspan="1">0.29 (0.02)</td>
<td align="center" rowspan="1" colspan="1">0.24 (0.02)</td>
<td align="center" rowspan="1" colspan="1">0.30 (0.02)</td>
<td align="center" rowspan="1" colspan="1">0.22 (0.02)</td>
<td align="center" rowspan="1" colspan="1">0.24 (0.01)</td>
<td align="center" rowspan="1" colspan="1">0.26 (0.02)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Falling</td>
<td align="center" rowspan="1" colspan="1">0.26 (0.02)</td>
<td align="center" rowspan="1" colspan="1">0.24 (0.01)</td>
<td align="center" rowspan="1" colspan="1">0.29 (0.02)</td>
<td align="center" rowspan="1" colspan="1">0.21 (0.01)</td>
<td align="center" rowspan="1" colspan="1">0.24 (0.02)</td>
<td align="center" rowspan="1" colspan="1">0.24 (0.01)</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p>
<italic>Values in brackets are standard errors of the mean. 1st and 2nd refer to the averaged speed of quarters 3 and 4, respectively</italic>
.</p>
</table-wrap-foot>
</table-wrap>
<p>There was a significant interaction between “half,” “quarter” and “loudness” [
<italic>F</italic>
<sub>(1.74,107.96)</sub>
= 7.73,
<italic>p</italic>
= 0.001, partial η
<sup>2</sup>
= 0.11]. For the first half, there was a significant interaction between “quarter” and “loudness” [
<italic>F</italic>
<sub>(2, 124)</sub>
= 6.68,
<italic>p</italic>
= 0.002, partial η
<sup>2</sup>
= 0.10], revealing that the speed of hand movement across the first half decreased when the amplitude was equal [
<italic>t</italic>
<sub>(63)</sub>
= 4.32,
<italic>p</italic>
< 0.001,
<italic>r</italic>
= 0.48] and when the amplitude was decreasing [
<italic>t</italic>
<sub>(63)</sub>
= 7.01,
<italic>p</italic>
< 0.001,
<italic>r</italic>
= 0.66]. No change in speed was observed when the amplitude was increasing [
<italic>t</italic>
<sub>(63)</sub>
= 2.34,
<italic>p</italic>
= 0.023,
<italic>r</italic>
= 0.28]. For the second half, there was also a significant interaction between “quarter” and “loudness” [
<italic>F</italic>
<sub>(1.56,96.62)</sub>
= 4.01,
<italic>p</italic>
= 0.030, partial η
<sup>2</sup>
= 0.06], confirming the pattern found in the first half: the speed of hand movement across the second half decreased when the amplitude was equal [
<italic>t</italic>
<sub>(63)</sub>
= 5.50,
<italic>p</italic>
< 0.001,
<italic>r</italic>
= 0.57] and when the amplitude was decreasing [
<italic>t</italic>
<sub>(63)</sub>
= 4.86,
<italic>p</italic>
< 0.001,
<italic>r</italic>
= 0.52]. No change in speed was observed when the amplitude was increasing [
<italic>t</italic>
<sub>(63)</sub>
= 2.03,
<italic>p</italic>
= 0.046,
<italic>r</italic>
= 0.25]. An overview can be seen in Table
<xref ref-type="table" rid="T4">4</xref>
.</p>
<table-wrap id="T4" position="float">
<label>Table 4</label>
<caption>
<p>
<bold>Mean speed of hand movement for the interaction half × quarter × loudness</bold>
.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" rowspan="1" colspan="1">
<bold>Half</bold>
</th>
<th align="center" colspan="6" rowspan="1">
<bold>Amplitude</bold>
</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="center" colspan="2" rowspan="1">
<bold>Equal</bold>
</td>
<td align="center" colspan="2" rowspan="1">
<bold>Decreasing</bold>
</td>
<td align="center" colspan="2" rowspan="1">
<bold>Increasing</bold>
</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="center" rowspan="1" colspan="1">
<bold>1st</bold>
</td>
<td align="center" rowspan="1" colspan="1">
<bold>2nd</bold>
</td>
<td align="center" rowspan="1" colspan="1">
<bold>1st</bold>
</td>
<td align="center" rowspan="1" colspan="1">
<bold>2nd</bold>
</td>
<td align="center" rowspan="1" colspan="1">
<bold>1st</bold>
</td>
<td align="center" rowspan="1" colspan="1">
<bold>2nd</bold>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">First</td>
<td align="center" rowspan="1" colspan="1">0.27 (0.02)</td>
<td align="center" rowspan="1" colspan="1">0.23 (0.01)</td>
<td align="center" rowspan="1" colspan="1">0.28 (0.02)</td>
<td align="center" rowspan="1" colspan="1">0.22 (0.02)</td>
<td align="center" rowspan="1" colspan="1">0.27 (0.01)</td>
<td align="center" rowspan="1" colspan="1">0.24 (0.02)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Second</td>
<td align="center" rowspan="1" colspan="1">0.26 (0.01)</td>
<td align="center" rowspan="1" colspan="1">0.23 (0.01)</td>
<td align="center" rowspan="1" colspan="1">0.28 (0.02)</td>
<td align="center" rowspan="1" colspan="1">0.23 (0.01)</td>
<td align="center" rowspan="1" colspan="1">0.27 (0.02)</td>
<td align="center" rowspan="1" colspan="1">0.25 (0.02)</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p>
<italic>Values in brackets are standard errors of the mean. 1st and 2nd refer to the averaged speed of quarters 1 and 2 (1st half) and quarters 3 and 4 (2nd half), respectively</italic>
.</p>
</table-wrap-foot>
</table-wrap>
</sec>
</sec>
<sec sec-type="discussion" id="s4">
<title>Discussion</title>
<sec>
<title>Summary of main findings</title>
<p>Asking 64 participants to represent gesturally a set of pure tones, we analyzed their representations of pitch, loudness and tempo, taking into account interactions between musical parameters within the sound stimuli. Both pitch and loudness were most strongly associated with the y-axis, though the loudness finding turned out to be a spurious effect caused by concurrent changes of pitch and loudness. All participants showed positive correlation coefficients between pitch and height, and this association was larger for musically trained than for untrained participants. Rising–falling pitch contours led to higher correlation coefficients than falling–rising pitch contours, mainly owing to musically untrained participants' lower values for the latter contour. This gap widened for trained and untrained participants alike when the concurrent tempo pattern consisted of accelerandi, regardless of the accompanying loudness patterns.</p>
<p>Notwithstanding the spurious loudness vs. height association for stimuli concurrently varied in pitch, loudness and tempo, those stimuli that only varied in loudness did reveal loudness vs. height associations: they were larger for increasing–decreasing compared to decreasing–increasing loudness contours, and musically trained participants showed higher values than untrained participants. The hypothesized association between loudness and z-axis was only found in stimuli that only varied in loudness, and only for musically trained participants when the loudness contour was increasing–decreasing. Muscular energy was found to be increasing (decreasing) when the loudness was increasing (decreasing), but showed no association with tempo.</p>
<p>Finally, speed of hand movement was associated with tempo and influenced by musical training (untrained participants did not increase speed of hand movement when tempo increased) and interactions with pitch (falling pitch prevented increase in speed when tempo increased) and loudness (increasing loudness prevented change in speed of hand movement).</p>
</sec>
<sec>
<title>Pitch</title>
<p>The strong association between pitch and height corroborates findings from previous studies applying a range of different paradigms such as motion imagery (Eitan and Granot,
<xref rid="B17" ref-type="bibr">2006</xref>
), drawings (Küssner and Leech-Wilkinson,
<xref rid="B35" ref-type="bibr">2014</xref>
), gestures (Nymoen et al.,
<xref rid="B55" ref-type="bibr">2013</xref>
) and forced choices (Walker,
<xref rid="B80" ref-type="bibr">1987</xref>
). Musically trained participants showing higher correlation coefficients than untrained participants is in line with previous studies, too (Walker,
<xref rid="B80" ref-type="bibr">1987</xref>
; Küssner and Leech-Wilkinson,
<xref rid="B35" ref-type="bibr">2014</xref>
), as is the finding that rising–falling pitch contours gave rise to higher correlation coefficients than falling–rising pitch contours (Kohn and Eitan,
<xref rid="B32" ref-type="bibr">2012</xref>
). However, we were able to show that the latter effect is heavily influenced by training, revealing that only untrained participants, but not trained participants, show more consistent associations for rising–falling pitch contours compared to falling–rising contours. What is more, this interaction was further mediated by the tempo pattern: Both musically trained and untrained participants showed higher values when pitch and tempo patterns were concurrently increasing in the first half of the stimuli (i.e., rising pitch and increase in tempo) and moving contrarily in the second half of the stimuli (i.e., falling pitch and increase in tempo) compared to when pitch and tempo patterns were moving contrarily in the first half of the stimuli (i.e., falling pitch and increase in tempo) and concurrently increasing in the second half of the stimuli (i.e., rising pitch and increase in tempo). There are at least three different factors interacting here. First, the gestural pitch vs. height representation of decreasing pitch paired with an increase in tempo is facilitated by the laws of gravity: an object falling toward the ground accelerates. Secondly, faster processing of congruent semantic correspondences such as increasing pitch and increasing tempo, which both represent increasing intensity, facilitates accelerated upward movements. The third factor needs more explanation. The type of the pitch contour (rising–falling vs. falling–rising) is evidently crucial for the resulting association between pitch and height. While the roles of natural laws and conceptual metaphors have been discussed before in the context of cross-modal mappings (Johnson and Larson,
<xref rid="B29" ref-type="bibr">2003</xref>
), the role of the pitch contour in embodied cross-modal mappings awaits further research. One mundane explanation could be participants' reluctance to move the hand into a higher start position: it is simply more comfortable to wait for the beginning of a trial with the arm hanging loosely beside the body.</p>
</sec>
<sec>
<title>Loudness</title>
<p>The exposure of the spurious loudness vs. height association in stimuli varied in several auditory features is perhaps not surprising for a musical culture largely based on pitch. When confronted with opposing pitch and loudness contours, participants chose to represent pitch, not loudness, on the y-axis. Importantly, this shows that pitch vs. height associations dominate loudness vs. height associations in a context of concurrently varied sound features, putting the results reported by Kohn and Eitan (
<xref rid="B31" ref-type="bibr">2009</xref>
)—that loudness vs. height associations of sound features varied in isolation are stronger than pitch vs. height associations—and the conclusion drawn by Eitan (
<xref rid="B15" ref-type="bibr">2013a</xref>
)—that the “hierarchy of musical parameters delineating musical space and motion may conflict with the parametric hierarchy assumed by many music theorists” (i.e., pitch and duration first, loudness secondary)—into perspective. Of course, this does not mean people do not display loudness vs. height mappings (Eitan et al.,
<xref rid="B20" ref-type="bibr">2008</xref>
). As shown for stimuli only varied in loudness (Nos. 2 and 3), there exists an association between loudness and the vertical axis, which is larger for increasing–decreasing than decreasing–increasing contours (see also Kohn and Eitan,
<xref rid="B32" ref-type="bibr">2012</xref>
) and larger for musically trained compared to untrained participants. But compared to other mappings such as pitch vs. height, this association turned out to be rather weak.</p>
<p>Similarly, the hypothesized association between loudness and the z-axis—relating to the distance of an object (Eitan and Granot,
<xref rid="B17" ref-type="bibr">2006</xref>
)—was almost non-existent for stimuli concurrently varied in pitch, loudness and tempo. One crucial difference between our experimental paradigm and that of Eitan and Granot—apart from the distinction between real and imagined movement—is possibly that movement in their study was that of an imagined humanoid character
<italic>in relation</italic>
to the stable position of the participant, whereas in the present study only one (real) person was involved. Even more importantly, moving forwards could be achieved either by moving only the arm forwards or by moving the whole body forwards. Thus, in both cases, though particularly in the latter, a real sense of distance was unlikely to be involved.</p>
<p>Nevertheless, the analysis of stimuli without changes in pitch (Nos. 2 and 3) revealed a very clear pattern: increasing–decreasing loudness contours—but not decreasing–increasing loudness contours—are represented by movements along the z-axis such that an increase (decrease) in loudness led participants to move forward (backward). And, as observed several times before, musically trained participants showed higher scores than untrained participants, whose mean correlation coefficient in fact suggests a complete absence of associations between loudness and the z-axis.</p>
<p>The analysis of muscular energy revealed that participants' number of shaking events increased when the loudness increased and decreased when the loudness decreased. This finding is in line with previous studies investigating children's movements in response to sound stimuli (Kohn and Eitan,
<xref rid="B31" ref-type="bibr">2009</xref>
) and adult participants who used pressure on a pen in a drawing experiment to represent loudness (Küssner and Leech-Wilkinson,
<xref rid="B35" ref-type="bibr">2014</xref>
). Further support for the notion that loudness is associated with human movement comes from Todd et al. (
<xref rid="B72" ref-type="bibr">2000</xref>
) who report that a loud bass drum might affect the vestibular system and hence a person's sense of motion, and from Van Dyck et al. (
<xref rid="B74" ref-type="bibr">2013</xref>
) who showed that people modify their body movements according to the level of the bass drum when moving to contemporary dance music. Note that muscular energy—conceptualized in the present study as very fast (shaking) hand movements—does not account for instances in which muscles might be tense without any hand movements involved. Thus, in future studies, electromyography might be used to encompass further instances in which muscular energy is involved.</p>
</sec>
<sec>
<title>Speed of hand movement</title>
<p>Although pitch had been associated with speed in adjective matching (Walker and Smith,
<xref rid="B79" ref-type="bibr">1986</xref>
) and rating tasks before (Eitan and Timmers,
<xref rid="B21" ref-type="bibr">2010</xref>
), no such association was found in the present study. Similarly, there was no clear association between loudness and speed—a result that might have been biased by the stimuli involved in this analysis. One third of them—i.e., the ones with equal tempo (Nos. 4, 7, 10, 13, 16, 19)—included 1 s of unchanged pitch at the end of each half of a stimulus. Previous research has indicated that musically trained participants continue drawing a horizontal line when presented with pitch unchanged over time, while untrained participants stop drawing for a moment and only continue when pitch changes again (Küssner and Leech-Wilkinson,
<xref rid="B35" ref-type="bibr">2014</xref>
). The absence of this effect in the interaction between “quarter,” “tempo” and “training” suggests, however, that gesturing sounds produces different results from drawing sounds. It is possible that participants stopped gesturing briefly when reaching these points, creating a “slowing down” bias at the end of each half. It is most likely for the same reason that the speed of hand movement decreases across two quarters of a stimulus when the tempo is equal (see Table
<xref ref-type="table" rid="T2">2</xref>
). This potential bias notwithstanding, the fact that the speed of hand movement decreased when the loudness decreased and that the speed did not change when the loudness increased suggests that loudness did have an influence. At least partly, then, this finding suggests a gap between imagined and real bodily cross-modal mappings. While Eitan and Granot (
<xref rid="B17" ref-type="bibr">2006</xref>
) found no association between decreasing loudness and decreasing speed in a rating task, the present study, as well as that of Kohn and Eitan (
<xref rid="B31" ref-type="bibr">2009</xref>
), provides evidence for such a correspondence.</p>
<p>The association between tempo and speed of hand movement is more straightforward. With increasing tempo participants increase the speed of their hand movements, and with decreasing tempo they slow down. Musical training, however, significantly influences this effect, such that untrained participants do not show an increase in speed of hand movement when the tempo is accelerating but only a decrease in speed when the tempo is decelerating. While differences between musically trained and untrained participants pertaining to imagined speed have been reported before for stimuli varied in inter-onset intervals and articulation (Eitan and Granot,
<xref rid="B17" ref-type="bibr">2006</xref>
), the present interaction effect between tempo and training presents a novel finding.</p>
<p>Crucially, pitch influences the association between tempo and speed too. While the direction of pitch has no influence on the association between decelerating tempo and decrease in speed, falling pitch inhibits increase in speed in response to accelerating tempo. Note that falling pitch—represented by a downward hand movement—paired with accelerating tempo manifests the prototypical
<italic>physical</italic>
prerequisites for accelerated movement: an object (here the hand) accelerating toward the ground. There is, however, no increase in speed, which could be explained by semantics taking precedence over gravity. If falling pitch is conceived of as LESS and accelerating tempo is conceived of as MORE, this might create a semantic conflict, preventing the speed of hand movement from increasing. Another explanation could be the sense of intensity that is felt when various musical parameters interact. When musical parameters are aligned (e.g., falling pitch and decreasing tempo), the resulting change in speed mirrors the feeling of intensity created by this alignment (e.g., a decrease in speed). When musical parameters are opposed, however, the resulting change in speed (if any) is much harder to predict, as it depends on the salience of the individual musical parameters which, in their sum, determine whether one feels the intensity as increasing, decreasing or ambiguous.</p>
<p>Taken together, these findings substantiate not only evidence of the association between tempo and speed in bodily cross-modal mappings (Kohn and Eitan,
<xref rid="B31" ref-type="bibr">2009</xref>
), but also provide new insights into how interactions of auditory features affect the resulting speed of the hand movement.</p>
</sec>
</sec>
<sec>
<title>General discussion</title>
<p>The findings from the present study provide further evidence that musical training is a factor influencing the consistency of cross-modal mappings. In line with previous research (Eitan and Granot,
<xref rid="B17" ref-type="bibr">2006</xref>
; Rusconi et al.,
<xref rid="B61" ref-type="bibr">2006</xref>
), both pitch—particularly falling–rising pitch—and loudness are mapped more consistently by musically trained participants. The extent to which sensorimotor skills play a role here remains to be tested (Küssner and Leech-Wilkinson,
<xref rid="B35" ref-type="bibr">2014</xref>
) and how auditory, tactile and motor perception interact when sound features are mapped cross-modally in real-time. As this might depend on the spatial features of a given instrument, it will be worthwhile to compare groups of different instrumentalists, such as pianists and clarinetists, in future experiments. What is more, musical notation might play a crucial role here too, and it would be valuable to compare the cross-modal mappings of musicians who use notation with those of musicians who do not.</p>
<p>One recurring finding of the present study is the preference for convex shapes (increasing–decreasing contours). Although this effect was hypothesized for pitch mappings based on previous findings (Kohn and Eitan,
<xref rid="B31" ref-type="bibr">2009</xref>
), its pervasiveness in other mappings (e.g., of loudness) and in more complex interactions between musical parameters suggests that convexity plays a prominent role in gestural cross-modal mappings. Drawing on findings from dance and movement therapy, Kestenberg-Amighi et al. (
<xref rid="B30" ref-type="bibr">1999</xref>
as discussed in Eitan,
<xref rid="B15" ref-type="bibr">2013a</xref>
) propose a general preference for inverted U-shape contours based on the natural tendency of the body—and its various functions, e.g., respiration, heart rate—to grow first before shrinking. Moreover, Kohn and Eitan (
<xref rid="B31" ref-type="bibr">2009</xref>
) remind us that “rise before fall” is also a commonly observed pattern in music that has been widely discussed in musicology. For instance, analysing a large database of Western folk songs, Huron (
<xref rid="B28" ref-type="bibr">1996</xref>
) showed that convex melodic shapes are much more common than any other melodic contour, and Leech-Wilkinson (
<xref rid="B36" ref-type="bibr">in press</xref>
) recently discussed the role of increasing and decreasing intensities (“feeling shapes”), drawing on Stern's psychoanalytic theory of Forms of Vitality (Stern,
<xref rid="B66" ref-type="bibr">2010</xref>
). Although speculative, this might reflect the fact that intensifying stimulus features are more salient than attenuating ones because they are more significant in the environment: an object accelerating poses a greater potential threat than an object that decelerates (see Neuhoff,
<xref rid="B53" ref-type="bibr">2001</xref>
, for a discussion of the adaptive value of changes in loudness). Thus, increasing stimulus properties in any sensory modality—higher, louder, brighter, warmer—imply the approach of a potentially harmful object, heightening an organism's attention and alertness.</p>
<p>There are a few limitations which need to be considered when interpreting the current dataset and designing future studies. Generally, one needs to be conscious of the nature of the cross-modal mappings measured experimentally—whether spontaneous or, as it were, mandatory—since apart from the paradigm itself, the instruction may crucially influence what is being measured (Rusconi et al.,
<xref rid="B61" ref-type="bibr">2006</xref>
). We chose the expression “represent sound gesturally” over instructions emphasizing a more communicative aspect of the gestures, e.g., “while listening to the music, move to it in an appropriate way, such that another child could recognize the music while watching your movements without sound” (Kohn and Eitan,
<xref rid="B31" ref-type="bibr">2009</xref>
) or, pertaining to sound drawings, asking participants to “represent the sound on paper in such a way that if another member of their community saw their marks they should be able to connect them with the sound” (Athanasopoulos and Moran,
<xref rid="B1" ref-type="bibr">2013</xref>
). Although such differences in instruction may seem negligible, the resulting drawings and gestures may give rise to different outcomes, particularly in a cross-cultural context, as discussed by Eitan (
<xref rid="B16" ref-type="bibr">2013b</xref>
).</p>
<p>What is more, the design of the stimuli needs attention. First, it should be acknowledged that tempo variations were not completely systematized, to avoid an exponential increase in the number of experimental stimuli: when the tempo changed, a stimulus contained either two decelerandi or two accelerandi, but never a mixture of the two. Secondly, when several auditory features were varied concurrently, the change of direction always happened at the same time, after 4 s. Needless to say, in musical performances there can be all sorts of overlaps (e.g., a slow crescendo over several rising–falling pitch glides including a short decelerando at the end), creating a complex interplay of increasing and decreasing intensities that our set of pure tones is unable to match. And while our stimuli could have been made more complex to come closer to real musical stimuli, they could also have included simpler variations—e.g., a single pitch ascent with concurrently varied loudness or tempo—to study the basic gestural mappings in more detail. Thus, there is scope for future studies to investigate both ends of the spectrum. Thirdly, and perhaps most crucially, when pitch, loudness and tempo are varied concurrently, the variations of the individual sound features might be differentially salient. That is, it matters whether the pitch range encompasses half an octave or four octaves, or whether the change in loudness occurs over 80% or 10% of the maximum amplitude. It is therefore not implausible that pitch—not loudness—was represented on the y-axis because it was perceptually more salient. Had the pitch range included only four semitones (or had it been in a different register) and had the change in loudness been made more extreme, it might well have resulted in loudness vs. height associations. Researchers thus need to take great care when designing auditory stimuli that are varied in several sound features.</p>
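<p>To illustrate the salience issue raised above, the following sketch synthesizes a pure tone with a convex pitch glide and a concurrent linear decrease in amplitude. It is a hypothetical illustration, not the procedure used to create the actual stimuli; the function name and all parameter values are assumptions chosen to mirror the ranges mentioned in the text.</p>
<preformat>
# Hypothetical sketch: a pure tone whose pitch rises and falls while the
# amplitude decreases, showing how the chosen ranges (half an octave vs.
# four octaves; 10% vs. 80% amplitude change) set each feature's salience.
import numpy as np

def glide_tone(f0=220.0, octaves=0.5, amp_change=0.8, duration=8.0, sr=44100):
    """Convex (rising-falling) pitch glide spanning `octaves`, with the
    amplitude falling by `amp_change` of its maximum over the stimulus."""
    t = np.linspace(0, duration, int(sr * duration), endpoint=False)
    contour = 1 - np.abs(2 * t / duration - 1)     # 0 -> 1 -> 0 triangle
    freq = f0 * 2 ** (octaves * contour)           # instantaneous frequency
    phase = 2 * np.pi * np.cumsum(freq) / sr       # integrate freq -> phase
    amp = 1 - amp_change * t / duration            # linear amplitude decrease
    return amp * np.sin(phase)

pitch_dominant = glide_tone(octaves=4.0, amp_change=0.1)        # pitch salient
loudness_dominant = glide_tone(octaves=4 / 12, amp_change=0.8)  # loudness salient
</preformat>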
<p>Finally, it should be pointed out that the findings presented here do not capture the unique ways in which individual participants might have represented sound gesturally, not only because the motion capture system employed is insensitive to fine-grained hand movements but also because participants might have used—consciously or subconsciously—other parts of their bodies to represent the sound. While the focus here was on averaged hand-movement responses, to gain insight into a largely under-researched field, the role of fine-grained movements of the hands, fingers and other body parts provides a fruitful path to explore in future studies.</p>
</sec>
<sec>
<title>Conclusion and implications</title>
<p>In the present study we investigated gestural representations of pitch, loudness and tempo, providing a solid empirical basis for future studies concerned with bodily cross-modal mappings. We were able to show that musical training plays an important role in shaping bodily cross-modal mappings, e.g., giving rise to more consistent mappings and annulling the commonly observed bias for convex shapes. Loudness vs. distance associations appear to be less relevant if participants are given the opportunity to link loudness to energy levels, which can be seen as the fundamental physical factor influencing amplitude (i.e., the displacement of air molecules). Moreover, concurrently varied musical parameters have a significant effect on the ways in which people represent sound gesturally: interactions between pitch and loudness affect how participants adjust the speed of their hand movement. Recent theoretical refinements of action-perception couplings in music perception provide an adequate framework in which such interaction effects may be investigated further (Maes et al.,
<xref rid="B43" ref-type="bibr">2014</xref>
). While it remains to be seen what the underlying mechanisms (e.g., perceptual, semantic) of these bodily cross-modal mappings are, the findings reported here may lend further support to recently developed concepts within embodied music cognition such as Godøy's (
<xref rid="B26" ref-type="bibr">2006</xref>
) “gestural-sonorous objects,” emphasizing the interconnection of motion and sound features in the mind of the listener. Facilitated by advances in multimedia technology (Tan et al.,
<xref rid="B70" ref-type="bibr">2013</xref>
) and the development of new musical instruments, the increasingly complex role of movement in creating and manipulating sounds and music challenges findings of cross-modal correspondences that have been obtained with traditional paradigms. Future studies need to address whether findings from bodily cross-modal mappings can be integrated wholly into current theoretical frameworks or whether “embodied cross-modal correspondences” might form a separate category worth studying in its own right. Besides theoretical implications, the outcome of the present study, as well as its low-cost motion capture devices, may be used in clinical settings where sounds and music are used to co-ordinate movement. For instance, music-based movement therapy has been found to be effective in treating Parkinson's disease (Rochester et al.,
<xref rid="B59" ref-type="bibr">2010</xref>
; De Dreu et al.,
<xref rid="B11" ref-type="bibr">2012</xref>
), and therapeutic approaches to stroke may benefit from musical activities, as shown in a study using the Wii™ Remote Controller to develop new forms of intervention (Van Wijck et al.,
<xref rid="B75" ref-type="bibr">2012</xref>
).</p>
<sec>
<title>Conflict of interest statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</sec>
</body>
<back>
<ack>
<p>This work was supported by King's College London and by the AHRC Research Center for Musical Performance as Creative Practice (grant number RC/AH/D502527/1).</p>
</ack>
<sec sec-type="supplementary-material" id="s5">
<title>Supplementary material</title>
<p>The Supplementary Material for this article can be found online at:
<ext-link ext-link-type="uri" xlink:href="http://www.frontiersin.org/journal/10.3389/fpsyg.2014.00789/abstract">http://www.frontiersin.org/journal/10.3389/fpsyg.2014.00789/abstract</ext-link>
</p>
<supplementary-material content-type="local-data" id="SM1">
<media xlink:href="DataSheet1.ZIP">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="SM2">
<media xlink:href="DataSheet2.ZIP">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
</sec>
<ref-list>
<title>References</title>
<ref id="B1">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Athanasopoulos</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Moran</surname>
<given-names>N.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<article-title>Cross-cultural representations of musical shape</article-title>
.
<source>Empir. Musicol. Rev</source>
.
<volume>8</volume>
,
<fpage>185</fpage>
<lpage>199</lpage>
</mixed-citation>
</ref>
<ref id="B2">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ben-Artzi</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Marks</surname>
<given-names>L. E.</given-names>
</name>
</person-group>
(
<year>1995</year>
).
<article-title>Visual-auditory interaction in speeded classification: Role of stimulus difference</article-title>
.
<source>Percept. Psychophys</source>
.
<volume>57</volume>
,
<fpage>1151</fpage>
<lpage>1162</lpage>
<pub-id pub-id-type="doi">10.3758/BF03208371</pub-id>
<pub-id pub-id-type="pmid">8539090</pub-id>
</mixed-citation>
</ref>
<ref id="B3">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bernstein</surname>
<given-names>I. H.</given-names>
</name>
<name>
<surname>Edelstein</surname>
<given-names>B. A.</given-names>
</name>
</person-group>
(
<year>1971</year>
).
<article-title>Effects of some variations in auditory input upon visual choice reaction time</article-title>
.
<source>J. Exp. Psychol</source>
.
<volume>87</volume>
,
<fpage>241</fpage>
<lpage>247</lpage>
<pub-id pub-id-type="doi">10.1037/h0030524</pub-id>
<pub-id pub-id-type="pmid">5542226</pub-id>
</mixed-citation>
</ref>
<ref id="B4">
<mixed-citation publication-type="other">
<person-group person-group-type="author">
<name>
<surname>Boersma</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Weenink</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<source>Praat: doing phonetics by computer</source>
. Version 5.3.15.</mixed-citation>
</ref>
<ref id="B5">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bregman</surname>
<given-names>A. S.</given-names>
</name>
<name>
<surname>Steiger</surname>
<given-names>H.</given-names>
</name>
</person-group>
(
<year>1980</year>
).
<article-title>Auditory streaming and vertical localization: Interdependence of “what” and “where” decisions in audition</article-title>
.
<source>Percept. Psychophys</source>
.
<volume>28</volume>
,
<fpage>539</fpage>
<lpage>546</lpage>
<pub-id pub-id-type="doi">10.3758/BF03198822</pub-id>
<pub-id pub-id-type="pmid">7208267</pub-id>
</mixed-citation>
</ref>
<ref id="B6">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cabrera</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Morimoto</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Influence of fundamental frequency and source elevation on the vertical localization of complex tones and complex tone pairs</article-title>
.
<source>J. Acoust. Soc. Am</source>
.
<volume>122</volume>
,
<fpage>478</fpage>
<lpage>488</lpage>
<pub-id pub-id-type="doi">10.1121/1.2736782</pub-id>
<pub-id pub-id-type="pmid">17614505</pub-id>
</mixed-citation>
</ref>
<ref id="B7">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Caramiaux</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Bevilacqua</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Bianco</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Schnell</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Houix</surname>
<given-names>O.</given-names>
</name>
<name>
<surname>Susini</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<year>2014</year>
).
<article-title>The role of sound source perception in gestural sound description</article-title>
.
<source>ACM Trans. Appl. Percept</source>
.
<volume>11</volume>
,
<fpage>1</fpage>
<pub-id pub-id-type="doi">10.1145/2536811</pub-id>
</mixed-citation>
</ref>
<ref id="B8">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Casasanto</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Phillips</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Boroditsky</surname>
<given-names>L.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Do we think about music in terms of space? Metaphoric representation of musical pitch</article-title>
, in
<source>25th Annual Conference of the Cognitive Science Society</source>
, eds
<person-group person-group-type="editor">
<name>
<surname>Alterman</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Kirsh</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<publisher-loc>Boston, MA</publisher-loc>
:
<publisher-name>Cognitive Science Society</publisher-name>
),
<fpage>1323</fpage>
</mixed-citation>
</ref>
<ref id="B9">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chiou</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Rich</surname>
<given-names>A. N.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Cross-modality correspondence between pitch and spatial location modulates attentional orienting</article-title>
.
<source>Perception</source>
<volume>41</volume>
,
<fpage>339</fpage>
<lpage>353</lpage>
<pub-id pub-id-type="doi">10.1068/p7161</pub-id>
<pub-id pub-id-type="pmid">22808586</pub-id>
</mixed-citation>
</ref>
<ref id="B10">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Collier</surname>
<given-names>W. G.</given-names>
</name>
<name>
<surname>Hubbard</surname>
<given-names>T. L.</given-names>
</name>
</person-group>
(
<year>1998</year>
).
<article-title>Judgments of happiness, brightness, speed and tempo change of auditory stimuli varying in pitch and tempo</article-title>
.
<source>Psychomusicol. Music Mind Brain</source>
<volume>17</volume>
,
<fpage>36</fpage>
<lpage>55</lpage>
<pub-id pub-id-type="doi">10.1037/h0094060</pub-id>
</mixed-citation>
</ref>
<ref id="B11">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>De Dreu</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Van Der Wilk</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Poppe</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Kwakkel</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Van Wegen</surname>
<given-names>E.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Rehabilitation, exercise therapy and music in patients with Parkinson's disease: a meta-analysis of the effects of music-based movement therapy on walking ability, balance and quality of life</article-title>
.
<source>Parkinsonism Relat. Disord</source>
.
<volume>18</volume>
,
<fpage>S114</fpage>
<lpage>S119</lpage>
<pub-id pub-id-type="doi">10.1016/S1353-8020(11)70036-0</pub-id>
<pub-id pub-id-type="pmid">22166406</pub-id>
</mixed-citation>
</ref>
<ref id="B12">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Deroy</surname>
<given-names>O.</given-names>
</name>
<name>
<surname>Auvray</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<article-title>A new Molyneux's problem: sounds, shapes and arbitrary crossmodal correspondences</article-title>
, in
<source>Second International Workshop The Shape of Things</source>
, eds
<person-group person-group-type="editor">
<name>
<surname>Kutz</surname>
<given-names>O.</given-names>
</name>
<name>
<surname>Bhatt</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Borgo</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Santos</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<publisher-loc>Rio de Janeiro</publisher-loc>
).</mixed-citation>
</ref>
<ref id="B13">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Dolscheid</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Hunnius</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Casasanto</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Majid</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>The sound of thickness: prelinguistic infants' associations of space and pitch</article-title>
, in
<source>34th Annual Meeting of the Cognitive Science Society</source>
, eds
<person-group person-group-type="editor">
<name>
<surname>Miyake</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Peebles</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Cooper</surname>
<given-names>R. P.</given-names>
</name>
</person-group>
(
<publisher-loc>Austin, TX</publisher-loc>
:
<publisher-name>Cognitive Science Society</publisher-name>
),
<fpage>306</fpage>
<lpage>311</lpage>
</mixed-citation>
</ref>
<ref id="B14">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dolscheid</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Shayan</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Majid</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Casasanto</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<article-title>The thickness of musical pitch: Psychophysical evidence for linguistic relativity</article-title>
.
<source>Psychol. Sci</source>
.
<volume>24</volume>
,
<fpage>613</fpage>
<lpage>621</lpage>
<pub-id pub-id-type="doi">10.1177/0956797612457374</pub-id>
<pub-id pub-id-type="pmid">23538914</pub-id>
</mixed-citation>
</ref>
<ref id="B15">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Eitan</surname>
<given-names>Z.</given-names>
</name>
</person-group>
(
<year>2013a</year>
).
<article-title>How pitch and loudness shape musical space and motion: new findings and persisting questions</article-title>
, in
<source>The Psychology of Music in Multimedia</source>
, eds
<person-group person-group-type="editor">
<name>
<surname>Tan</surname>
<given-names>S.-L.</given-names>
</name>
<name>
<surname>Cohen</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Lipscomb</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Kendall</surname>
<given-names>R.</given-names>
</name>
</person-group>
(
<publisher-loc>Oxford</publisher-loc>
:
<publisher-name>Oxford University Press</publisher-name>
),
<fpage>161</fpage>
<lpage>187</lpage>
</mixed-citation>
</ref>
<ref id="B16">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Eitan</surname>
<given-names>Z.</given-names>
</name>
</person-group>
(
<year>2013b</year>
).
<article-title>Musical objects, cross-domain correspondences, and cultural choice: commentary on “Cross-cultural representations of musical shape” by George Athanasopoulos and Nikki Moran</article-title>
.
<source>Empir. Musicol. Rev</source>
.
<volume>8</volume>
,
<fpage>204</fpage>
<lpage>207</lpage>
</mixed-citation>
</ref>
<ref id="B17">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Eitan</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Granot</surname>
<given-names>R. Y.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>How music moves: musical parameters and listeners' images of motion</article-title>
.
<source>Music Percept</source>
.
<volume>23</volume>
,
<fpage>221</fpage>
<lpage>248</lpage>
<pub-id pub-id-type="doi">10.1525/mp.2006.23.3.221</pub-id>
</mixed-citation>
</ref>
<ref id="B18">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Eitan</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Granot</surname>
<given-names>R. Y.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Listeners' images of motion and the interaction of musical parameters</article-title>
, in
<source>10th Conference of the Society for Music Perception and Cognition (SMPC)</source>
(
<publisher-loc>Rochester, NY</publisher-loc>
).</mixed-citation>
</ref>
<ref id="B19">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Eitan</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Schupak</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Gotler</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Marks</surname>
<given-names>L.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<article-title>Lower pitch is larger, yet falling pitches shrink: Interaction of pitch change and size change in speeded discrimination</article-title>
.
<source>Exp. Psychol</source>
. [Epub ahead of print].
<pub-id pub-id-type="doi">10.1027/1618-3169/a000246</pub-id>
<pub-id pub-id-type="pmid">24351984</pub-id>
</mixed-citation>
</ref>
<ref id="B20">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Eitan</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Schupak</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Marks</surname>
<given-names>L. E.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Louder is higher: cross-modal interaction of loudness change and vertical motion in speeded classification</article-title>
, in
<source>10th International Conference on Music Perception and Cognition</source>
, eds
<person-group person-group-type="editor">
<name>
<surname>Miyazaki</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Adachi</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Hiraga</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Nakajima</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Tsuzaki</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<publisher-loc>Adelaide, SA</publisher-loc>
:
<publisher-name>Causal Productions</publisher-name>
),
<fpage>67</fpage>
<lpage>76</lpage>
(published as a CD-ROM).</mixed-citation>
</ref>
<ref id="B21">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Eitan</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Timmers</surname>
<given-names>R.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Beethoven's last piano sonata and those who follow crocodiles: cross-domain mappings of auditory pitch in a musical context</article-title>
.
<source>Cognition</source>
<volume>114</volume>
,
<fpage>405</fpage>
<lpage>422</lpage>
<pub-id pub-id-type="doi">10.1016/j.cognition.2009.10.013</pub-id>
<pub-id pub-id-type="pmid">20036356</pub-id>
</mixed-citation>
</ref>
<ref id="B22">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ernst</surname>
<given-names>M. O.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Learning to integrate arbitrary signals from vision and touch</article-title>
.
<source>J. Vis</source>
.
<volume>7</volume>
,
<fpage>1</fpage>
<lpage>14</lpage>
<pub-id pub-id-type="doi">10.1167/7.5.7</pub-id>
<pub-id pub-id-type="pmid">18217847</pub-id>
</mixed-citation>
</ref>
<ref id="B23">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Evans</surname>
<given-names>K. K.</given-names>
</name>
<name>
<surname>Treisman</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Natural cross-modal mappings between visual and auditory features</article-title>
.
<source>J. Vis</source>
.
<volume>10</volume>
,
<fpage>1</fpage>
<lpage>12</lpage>
<pub-id pub-id-type="doi">10.1167/10.1.6</pub-id>
<pub-id pub-id-type="pmid">20143899</pub-id>
</mixed-citation>
</ref>
<ref id="B24">
<mixed-citation publication-type="webpage">
<person-group person-group-type="author">
<name>
<surname>Fry</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Reas</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<source>Processing</source>
. [Online]. Available online at:
<ext-link ext-link-type="uri" xlink:href="http://processing.org">http://processing.org</ext-link>
(Accessed July 15, 2011).</mixed-citation>
</ref>
<ref id="B25">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gallace</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Spence</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Multisensory synesthetic interactions in the speeded classification of visual size</article-title>
.
<source>Atten. Percept. Psychophys</source>
.
<volume>68</volume>
,
<fpage>1191</fpage>
<lpage>1203</lpage>
<pub-id pub-id-type="doi">10.3758/BF03193720</pub-id>
<pub-id pub-id-type="pmid">17355042</pub-id>
</mixed-citation>
</ref>
<ref id="B26">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Godøy</surname>
<given-names>R. I.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Gestural-sonorous objects: Embodied extensions of Schaeffer's conceptual apparatus</article-title>
.
<source>Organ. Sound</source>
<volume>11</volume>
,
<fpage>149</fpage>
<lpage>157</lpage>
<pub-id pub-id-type="doi">10.1017/S1355771806001439</pub-id>
</mixed-citation>
</ref>
<ref id="B27">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Godøy</surname>
<given-names>R. I.</given-names>
</name>
<name>
<surname>Haga</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Jensenius</surname>
<given-names>A. R.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Exploring music-related gestures by sound-tracing: a preliminary study</article-title>
, in
<source>2nd ConGAS International Symposium on Gesture Interfaces for Multimedia Systems</source>
(
<publisher-loc>Leeds</publisher-loc>
).</mixed-citation>
</ref>
<ref id="B28">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Huron</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>1996</year>
).
<article-title>The melodic arch in Western folksongs</article-title>
.
<source>Comput. Musicol</source>
.
<volume>10</volume>
,
<fpage>3</fpage>
<lpage>23</lpage>
</mixed-citation>
</ref>
<ref id="B29">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Johnson</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Larson</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>“Something in the way she moves”-metaphors of musical motion</article-title>
.
<source>Metaphor Symbol</source>
<volume>18</volume>
,
<fpage>63</fpage>
<lpage>84</lpage>
<pub-id pub-id-type="doi">10.1207/S15327868MS1802_1</pub-id>
</mixed-citation>
</ref>
<ref id="B30">
<mixed-citation publication-type="book">
<person-group person-group-type="editor">
<name>
<surname>Kestenberg-Amighi</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Loman</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Lewis</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Sossin</surname>
<given-names>K. M.</given-names>
</name>
</person-group>
(eds.). (
<year>1999</year>
).
<source>The Meaning of Movement: Developmental and Clinical Perspectives of the Kestenberg Movement Profile</source>
.
<publisher-loc>New York, NY</publisher-loc>
:
<publisher-name>Brunner-Routledge</publisher-name>
</mixed-citation>
</ref>
<ref id="B31">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Kohn</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Eitan</surname>
<given-names>Z.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Musical parameters and children's movement responses</article-title>
, in
<source>7th Triennial Conference of the European Society for the Cognitive Sciences of Music</source>
, eds
<person-group person-group-type="editor">
<name>
<surname>Louhivuori</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Eerola</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Saarikallio</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Himberg</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Eerola</surname>
<given-names>P. S.</given-names>
</name>
</person-group>
(
<publisher-loc>Jyväskylä</publisher-loc>
).</mixed-citation>
</ref>
<ref id="B32">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Kohn</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Eitan</surname>
<given-names>Z.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Seeing sound moving: congruence of pitch and loudness with human movement and visual shape</article-title>
, in
<source>12th International Conference on Music Perception and Cognition/8th Triennial Conference of the European Society for the Cognitive Sciences of Music</source>
, eds
<person-group person-group-type="editor">
<name>
<surname>Cambouropoulos</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Tsougras</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Mavromatis</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Pastiadis</surname>
<given-names>K.</given-names>
</name>
</person-group>
(
<publisher-loc>Thessaloniki</publisher-loc>
:
<publisher-name>The School of Music Studies, Aristotle University of Thessaloniki</publisher-name>
),
<fpage>541</fpage>
</mixed-citation>
</ref>
<ref id="B33">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Kozak</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Nymoen</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Godøy</surname>
<given-names>R. I.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Effects of spectral features of sound on gesture type and timing</article-title>
, in
<source>Gesture and Sign Language in Human-Computer Interaction and Embodied Communication</source>
, eds
<person-group person-group-type="editor">
<name>
<surname>Efthimiou</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Kouroupetroglou</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Fotinea</surname>
<given-names>S.-E.</given-names>
</name>
</person-group>
(
<publisher-loc>Berlin</publisher-loc>
:
<publisher-name>Springer</publisher-name>
),
<fpage>69</fpage>
<lpage>80</lpage>
<pub-id pub-id-type="doi">10.1007/978-3-642-34182-3_7</pub-id>
</mixed-citation>
</ref>
<ref id="B34">
<mixed-citation publication-type="other">
<person-group person-group-type="author">
<name>
<surname>Küssner</surname>
<given-names>M. B.</given-names>
</name>
</person-group>
(
<year>2014</year>
).
<source>Shape, Drawing and Gesture: Cross-Modal Mappings of Sound and Music</source>
. Ph.D. thesis, King's College London, London.</mixed-citation>
</ref>
<ref id="B35">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Küssner</surname>
<given-names>M. B.</given-names>
</name>
<name>
<surname>Leech-Wilkinson</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2014</year>
).
<article-title>Investigating the influence of musical training on cross-modal correspondences and sensorimotor skills in a real-time drawing paradigm</article-title>
.
<source>Psychol. Music</source>
<volume>42</volume>
,
<fpage>448</fpage>
<lpage>469</lpage>
<pub-id pub-id-type="doi">10.1177/0305735613482022</pub-id>
</mixed-citation>
</ref>
<ref id="B36">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Leech-Wilkinson</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>in press</year>
).
<article-title>Shape and feeling</article-title>
, in
<source>Music and Shape</source>
, eds
<person-group person-group-type="editor">
<name>
<surname>Leech-Wilkinson</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Prior</surname>
<given-names>H. M.</given-names>
</name>
</person-group>
(
<publisher-loc>Oxford</publisher-loc>
:
<publisher-name>Oxford University Press</publisher-name>
).</mixed-citation>
</ref>
<ref id="B37">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lega</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Cattaneo</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Merabet</surname>
<given-names>L. B.</given-names>
</name>
<name>
<surname>Vecchi</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Cucchi</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2014</year>
).
<article-title>Pitch height modulates visual and haptic bisection performance in musicians</article-title>
.
<source>Front. Hum. Neurosci</source>
.
<volume>8</volume>
:
<issue>250</issue>
<pub-id pub-id-type="doi">10.3389/fnhum.2014.00250</pub-id>
</mixed-citation>
</ref>
<ref id="B38">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lewkowicz</surname>
<given-names>D. J.</given-names>
</name>
<name>
<surname>Turkewitz</surname>
<given-names>G.</given-names>
</name>
</person-group>
(
<year>1980</year>
).
<article-title>Cross-modal equivalence in early infancy: auditory-visual intensity matching</article-title>
.
<source>Dev. Psychol</source>
.
<volume>16</volume>
,
<fpage>597</fpage>
<lpage>607</lpage>
<pub-id pub-id-type="doi">10.1037/0012-1649.16.6.597</pub-id>
</mixed-citation>
</ref>
<ref id="B39">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lidji</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Kolinsky</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Lochy</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Morais</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Spatial associations for musical stimuli: a piano in the head?</article-title>
<source>J. Exp. Psychol. Hum. Percept. Perform</source>
.
<volume>33</volume>
,
<fpage>1189</fpage>
<lpage>1207</lpage>
<pub-id pub-id-type="doi">10.1037/0096-1523.33.5.1189</pub-id>
<pub-id pub-id-type="pmid">17924817</pub-id>
</mixed-citation>
</ref>
<ref id="B40">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Lipscomb</surname>
<given-names>S. D.</given-names>
</name>
<name>
<surname>Kim</surname>
<given-names>E. M.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Perceived match between visual parameters and auditory correlates: an experimental multimedia investigation</article-title>
, in
<source>8th International Conference on Music Perception and Cognition</source>
, eds
<person-group person-group-type="editor">
<name>
<surname>Lipscomb</surname>
<given-names>S. D.</given-names>
</name>
<name>
<surname>Ashley</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Gjerdingen</surname>
<given-names>R. O.</given-names>
</name>
<name>
<surname>Webster</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<publisher-loc>Adelaide, SA</publisher-loc>
:
<publisher-name>Causal Productions</publisher-name>
),
<fpage>72</fpage>
<lpage>75</lpage>
</mixed-citation>
</ref>
<ref id="B41">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ludwig</surname>
<given-names>V. U.</given-names>
</name>
<name>
<surname>Adachi</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Matsuzawa</surname>
<given-names>T.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Visuoauditory mappings between high luminance and high pitch are shared by chimpanzees (Pan troglodytes) and humans</article-title>
.
<source>Proc. Natl. Acad. Sci. U.S.A</source>
.
<volume>108</volume>
,
<fpage>20661</fpage>
<lpage>20665</lpage>
<pub-id pub-id-type="doi">10.1073/pnas.1112605108</pub-id>
<pub-id pub-id-type="pmid">22143791</pub-id>
</mixed-citation>
</ref>
<ref id="B42">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Maeda</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Kanai</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Shimojo</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Changing pitch induced visual motion illusion</article-title>
.
<source>Curr. Biol</source>
.
<volume>14</volume>
,
<fpage>R990</fpage>
<lpage>R991</lpage>
<pub-id pub-id-type="doi">10.1016/j.cub.2004.11.018</pub-id>
<pub-id pub-id-type="pmid">15589145</pub-id>
</mixed-citation>
</ref>
<ref id="B43">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Maes</surname>
<given-names>P.-J.</given-names>
</name>
<name>
<surname>Leman</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Palmer</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Wanderley</surname>
<given-names>M. M.</given-names>
</name>
</person-group>
(
<year>2014</year>
).
<article-title>Action-based effects on music perception</article-title>
.
<source>Front. Psychol</source>
.
<volume>4</volume>
:
<issue>1008</issue>
<pub-id pub-id-type="doi">10.3389/fpsyg.2013.01008</pub-id>
<pub-id pub-id-type="pmid">24454299</pub-id>
</mixed-citation>
</ref>
<ref id="B44">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Marks</surname>
<given-names>L. E.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Cross-modal interactions in speeded classification</article-title>
, in
<source>Handbook of Multisensory Processes</source>
, eds
<person-group person-group-type="editor">
<name>
<surname>Calvert</surname>
<given-names>G. A.</given-names>
</name>
<name>
<surname>Spence</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Stein</surname>
<given-names>B. E.</given-names>
</name>
</person-group>
(
<publisher-loc>Cambridge, MA</publisher-loc>
:
<publisher-name>MIT Press</publisher-name>
),
<fpage>85</fpage>
<lpage>105</lpage>
</mixed-citation>
</ref>
<ref id="B45">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Martino</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Marks</surname>
<given-names>L. E.</given-names>
</name>
</person-group>
(
<year>1999</year>
).
<article-title>Perceptual and linguistic interactions in speeded classification: tests of the semantic coding hypothesis</article-title>
.
<source>Perception</source>
<volume>28</volume>
,
<fpage>903</fpage>
<lpage>923</lpage>
<pub-id pub-id-type="doi">10.1068/p2866</pub-id>
<pub-id pub-id-type="pmid">10664781</pub-id>
</mixed-citation>
</ref>
<ref id="B46">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Melara</surname>
<given-names>R. D.</given-names>
</name>
<name>
<surname>O'Brien</surname>
<given-names>T. P.</given-names>
</name>
</person-group>
(
<year>1987</year>
).
<article-title>Interaction between synesthetically corresponding dimensions</article-title>
.
<source>J. Exp. Psychol. Gen</source>
.
<volume>116</volume>
,
<fpage>323</fpage>
<lpage>336</lpage>
<pub-id pub-id-type="doi">10.1037/0096-3445.116.4.323</pub-id>
<pub-id pub-id-type="pmid">2522534</pub-id>
</mixed-citation>
</ref>
<ref id="B47">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Micheyl</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Delhommeau</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Perrot</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Oxenham</surname>
<given-names>A. J.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Influence of musical and psychoacoustical training on pitch discrimination</article-title>
.
<source>Hear. Res</source>
.
<volume>219</volume>
,
<fpage>36</fpage>
<lpage>47</lpage>
<pub-id pub-id-type="doi">10.1016/j.heares.2006.05.004</pub-id>
<pub-id pub-id-type="pmid">16839723</pub-id>
</mixed-citation>
</ref>
<ref id="B48">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Miller</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Werner</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Wapner</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>1958</year>
).
<article-title>Studies in physiognomic perception: V. Effect of ascending and descending gliding tones on autokinetic motion</article-title>
.
<source>J. Psychol</source>
.
<volume>46</volume>
,
<fpage>101</fpage>
<lpage>105</lpage>
<pub-id pub-id-type="doi">10.1080/00223980.1958.9916273</pub-id>
</mixed-citation>
</ref>
<ref id="B49">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Miller</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>1991</year>
).
<article-title>Channel interaction and the redundant-targets effect in bimodal divided attention</article-title>
.
<source>J. Exp. Psychol. Hum. Percept. Perform</source>
.
<volume>17</volume>
,
<fpage>160</fpage>
<lpage>169</lpage>
<pub-id pub-id-type="doi">10.1037/0096-1523.17.1.160</pub-id>
<pub-id pub-id-type="pmid">1826309</pub-id>
</mixed-citation>
</ref>
<ref id="B50">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mondloch</surname>
<given-names>C. J.</given-names>
</name>
<name>
<surname>Maurer</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Do small white balls squeak? Pitch-object correspondences in young children</article-title>
.
<source>Cogn. Affect. Behav. Neurosci</source>
.
<volume>4</volume>
,
<fpage>133</fpage>
<lpage>136</lpage>
<pub-id pub-id-type="doi">10.3758/CABN.4.2.133</pub-id>
<pub-id pub-id-type="pmid">15460920</pub-id>
</mixed-citation>
</ref>
<ref id="B51">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mossbridge</surname>
<given-names>J. A.</given-names>
</name>
<name>
<surname>Grabowecky</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Suzuki</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Changes in auditory frequency guide visual–spatial attention</article-title>
.
<source>Cognition</source>
<volume>121</volume>
,
<fpage>133</fpage>
<lpage>139</lpage>
<pub-id pub-id-type="doi">10.1016/j.cognition.2011.06.003</pub-id>
<pub-id pub-id-type="pmid">21741633</pub-id>
</mixed-citation>
</ref>
<ref id="B52">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mudd</surname>
<given-names>S. A.</given-names>
</name>
</person-group>
(
<year>1963</year>
).
<article-title>Spatial stereotypes of four dimensions of pure tone</article-title>
.
<source>J. Exp. Psychol</source>
.
<volume>66</volume>
,
<fpage>347</fpage>
<lpage>352</lpage>
<pub-id pub-id-type="doi">10.1037/h0040045</pub-id>
<pub-id pub-id-type="pmid">14051851</pub-id>
</mixed-citation>
</ref>
<ref id="B53">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Neuhoff</surname>
<given-names>J. G.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>An adaptive bias in the perception of looming auditory motion</article-title>
.
<source>Ecol. Psychol</source>
.
<volume>13</volume>
,
<fpage>87</fpage>
<lpage>110</lpage>
<pub-id pub-id-type="doi">10.1207/S15326969ECO1302_2</pub-id>
</mixed-citation>
</ref>
<ref id="B54">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Nymoen</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Caramiaux</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Kozak</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Torresen</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Analyzing sound tracings—a multimodal approach to music information retrieval</article-title>
, in
<source>1st International ACM Workshop on Music Information Retrieval with User-Centered and Multimodal Strategies (MIRUM)</source>
(
<publisher-loc>New York, NY</publisher-loc>
:
<publisher-name>ACM</publisher-name>
),
<fpage>39</fpage>
<lpage>44</lpage>
</mixed-citation>
</ref>
<ref id="B55">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nymoen</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Godøy</surname>
<given-names>R. I.</given-names>
</name>
<name>
<surname>Jensenius</surname>
<given-names>A. R.</given-names>
</name>
<name>
<surname>Torresen</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<article-title>Analyzing correspondence between sound objects and body motion</article-title>
.
<source>ACM Trans. Appl. Percept</source>
.
<volume>10</volume>
,
<fpage>9</fpage>
<pub-id pub-id-type="doi">10.1145/2465780.2465783</pub-id>
</mixed-citation>
</ref>
<ref id="B56">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Patching</surname>
<given-names>G. R.</given-names>
</name>
<name>
<surname>Quinlan</surname>
<given-names>P. T.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>Garner and congruence effects in the speeded classification of bimodal signals</article-title>
.
<source>J. Exp. Psychol. Hum. Percept. Perform</source>
.
<volume>28</volume>
,
<fpage>755</fpage>
<lpage>775</lpage>
<pub-id pub-id-type="doi">10.1037/0096-1523.28.4.755</pub-id>
<pub-id pub-id-type="pmid">12190249</pub-id>
</mixed-citation>
</ref>
<ref id="B57">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pedley</surname>
<given-names>P. E.</given-names>
</name>
<name>
<surname>Harper</surname>
<given-names>R. S.</given-names>
</name>
</person-group>
(
<year>1959</year>
).
<article-title>Pitch and the vertical localization of sound</article-title>
.
<source>Am. J. Psychol</source>
.
<volume>72</volume>
,
<fpage>447</fpage>
<lpage>449</lpage>
<pub-id pub-id-type="doi">10.2307/1420051</pub-id>
</mixed-citation>
</ref>
<ref id="B58">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pratt</surname>
<given-names>C. C.</given-names>
</name>
</person-group>
(
<year>1930</year>
).
<article-title>The spatial character of high and low tones</article-title>
.
<source>J. Exp. Psychol</source>
.
<volume>13</volume>
,
<fpage>278</fpage>
<lpage>285</lpage>
<pub-id pub-id-type="doi">10.1037/h0072651</pub-id>
</mixed-citation>
</ref>
<ref id="B59">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rochester</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Baker</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Hetherington</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Jones</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Willems</surname>
<given-names>A.-M.</given-names>
</name>
<name>
<surname>Kwakkel</surname>
<given-names>G.</given-names>
</name>
<etal></etal>
</person-group>
(
<year>2010</year>
).
<article-title>Evidence for motor learning in Parkinson's disease: acquisition, automaticity and retention of cued gait performance after training with external rhythmical cues</article-title>
.
<source>Brain Res</source>
.
<volume>1319</volume>
,
<fpage>103</fpage>
<lpage>111</lpage>
<pub-id pub-id-type="doi">10.1016/j.brainres.2010.01.001</pub-id>
<pub-id pub-id-type="pmid">20064492</pub-id>
</mixed-citation>
</ref>
<ref id="B60">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Roffler</surname>
<given-names>S. K.</given-names>
</name>
<name>
<surname>Butler</surname>
<given-names>R. A.</given-names>
</name>
</person-group>
(
<year>1968</year>
).
<article-title>Localization of tonal stimuli in the vertical plane</article-title>
.
<source>J. Acoust. Soc. Am</source>
.
<volume>43</volume>
,
<fpage>1260</fpage>
<lpage>1266</lpage>
<pub-id pub-id-type="doi">10.1121/1.1910977</pub-id>
<pub-id pub-id-type="pmid">5659494</pub-id>
</mixed-citation>
</ref>
<ref id="B61">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rusconi</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Kwan</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Giordano</surname>
<given-names>B. L.</given-names>
</name>
<name>
<surname>Umiltà</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Butterworth</surname>
<given-names>B.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Spatial representation of pitch height: the SMARC effect</article-title>
.
<source>Cognition</source>
<volume>99</volume>
,
<fpage>113</fpage>
<lpage>129</lpage>
<pub-id pub-id-type="doi">10.1016/j.cognition.2005.01.004</pub-id>
<pub-id pub-id-type="pmid">15925355</pub-id>
</mixed-citation>
</ref>
<ref id="B62">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Schubert</surname>
<given-names>E.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>Correlation analysis of continuous emotional response to music: correcting for the effects of serial correlation</article-title>
.
<source>Music. Sci</source>
.
<volume>6</volume>
,
<fpage>213</fpage>
<lpage>236</lpage>
<pub-id pub-id-type="doi">10.1177/10298649020050S108</pub-id>
</mixed-citation>
</ref>
<ref id="B63">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Smalley</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>1997</year>
).
<article-title>Spectromorphology: explaining sound-shapes</article-title>
.
<source>Organised Sound</source>
<volume>2</volume>
,
<fpage>107</fpage>
<lpage>126</lpage>
<pub-id pub-id-type="doi">10.1017/S1355771897009059</pub-id>
</mixed-citation>
</ref>
<ref id="B64">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Spence</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Crossmodal correspondences: a tutorial review</article-title>
.
<source>Atten. Percept. Psychophys</source>
.
<volume>73</volume>
,
<fpage>971</fpage>
<lpage>995</lpage>
<pub-id pub-id-type="doi">10.3758/s13414-010-0073-7</pub-id>
<pub-id pub-id-type="pmid">21264748</pub-id>
</mixed-citation>
</ref>
<ref id="B65">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Spence</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Deroy</surname>
<given-names>O.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Crossmodal correspondences: innate or learned?</article-title>
<source>Iperception</source>
<volume>3</volume>
,
<fpage>316</fpage>
<lpage>318</lpage>
<pub-id pub-id-type="doi">10.1068/i0526ic</pub-id>
<pub-id pub-id-type="pmid">23145286</pub-id>
</mixed-citation>
</ref>
<ref id="B66">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Stern</surname>
<given-names>D. N.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<source>Forms of Vitality: Exploring Dynamic Experience in Psychology, The Arts, Psychotherapy, and Development</source>
.
<publisher-loc>Oxford</publisher-loc>
:
<publisher-name>Oxford University Press</publisher-name>
</mixed-citation>
</ref>
<ref id="B67">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Stewart</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Verdonschot</surname>
<given-names>R. G.</given-names>
</name>
<name>
<surname>Nasralla</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Lanipekun</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<article-title>Action-perception coupling in pianists: learned mappings or spatial musical association of response codes (SMARC) effect?</article-title>
<source>Q. J. Exp. Psychol</source>
.
<volume>66</volume>
,
<fpage>37</fpage>
<lpage>50</lpage>
<pub-id pub-id-type="doi">10.1080/17470218.2012.687385</pub-id>
<pub-id pub-id-type="pmid">22712516</pub-id>
</mixed-citation>
</ref>
<ref id="B68">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Stumpf</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>1883</year>
).
<source>Tonpsychologie</source>
.
<publisher-loc>Leipzig</publisher-loc>
:
<publisher-name>S. Hirzel</publisher-name>
</mixed-citation>
</ref>
<ref id="B69">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Suzuki</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Takeshima</surname>
<given-names>H.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Equal-loudness-level contours for pure tones</article-title>
.
<source>J. Acoust. Soc. Am</source>
.
<volume>116</volume>
,
<fpage>918</fpage>
<lpage>933</lpage>
<pub-id pub-id-type="doi">10.1121/1.1763601</pub-id>
<pub-id pub-id-type="pmid">15376658</pub-id>
</mixed-citation>
</ref>
<ref id="B70">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Tan</surname>
<given-names>S.-L.</given-names>
</name>
<name>
<surname>Cohen</surname>
<given-names>A. J.</given-names>
</name>
<name>
<surname>Lipscomb</surname>
<given-names>S. D.</given-names>
</name>
<name>
<surname>Kendall</surname>
<given-names>R. A.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<article-title>Future research directions for music and sound in multimedia</article-title>
, in
<source>The Psychology of Music in Multimedia</source>
, eds
<person-group person-group-type="editor">
<name>
<surname>Tan</surname>
<given-names>S.-L.</given-names>
</name>
<name>
<surname>Cohen</surname>
<given-names>A. J.</given-names>
</name>
<name>
<surname>Lipscomb</surname>
<given-names>S. D.</given-names>
</name>
<name>
<surname>Kendall</surname>
<given-names>R. A.</given-names>
</name>
</person-group>
(
<publisher-loc>Oxford</publisher-loc>
:
<publisher-name>Oxford University Press</publisher-name>
),
<fpage>391</fpage>
<lpage>406</lpage>
</mixed-citation>
</ref>
<ref id="B71">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Taylor</surname>
<given-names>J. E.</given-names>
</name>
<name>
<surname>Witt</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>2014</year>
).
<article-title>Listening to music primes space: pianists, but not novices, simulate heard actions</article-title>
.
<source>Psychol. Res</source>
. [Epub ahead of print].
<pub-id pub-id-type="doi">10.1007/s00426-014-0544-x</pub-id>
<pub-id pub-id-type="pmid">24510162</pub-id>
</mixed-citation>
</ref>
<ref id="B72">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Todd</surname>
<given-names>N. P. M.</given-names>
</name>
<name>
<surname>Cody</surname>
<given-names>F. W. J.</given-names>
</name>
<name>
<surname>Banks</surname>
<given-names>J. R.</given-names>
</name>
</person-group>
(
<year>2000</year>
).
<article-title>A saccular origin of frequency tuning in myogenic vestibular evoked potentials?: implications for human responses to loud sounds</article-title>
.
<source>Hear. Res</source>
.
<volume>141</volume>
,
<fpage>180</fpage>
<lpage>188</lpage>
<pub-id pub-id-type="doi">10.1016/S0378-5955(99)00222-1</pub-id>
<pub-id pub-id-type="pmid">10713506</pub-id>
</mixed-citation>
</ref>
<ref id="B73">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Trimble</surname>
<given-names>O. C.</given-names>
</name>
</person-group>
(
<year>1934</year>
).
<article-title>Localization of sound in the anterior-posterior and vertical dimensions of “auditory” space</article-title>
.
<source>Br. J. Psychol. Gen</source>
.
<volume>24</volume>
,
<fpage>320</fpage>
<lpage>334</lpage>
<pub-id pub-id-type="doi">10.1111/j.2044-8295.1934.tb00706.x</pub-id>
</mixed-citation>
</ref>
<ref id="B74">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Van Dyck</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Moelants</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Demey</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Deweppe</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Coussement</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Leman</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<article-title>The impact of the bass drum on human dance movement</article-title>
.
<source>Music Percept</source>
.
<volume>30</volume>
,
<fpage>349</fpage>
<lpage>359</lpage>
<pub-id pub-id-type="doi">10.1525/mp.2013.30.4.349</pub-id>
</mixed-citation>
</ref>
<ref id="B75">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Van Wijck</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Knox</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Dodds</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Cassidy</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Alexander</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Macdonald</surname>
<given-names>R.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Making music after stroke: using musical activities to enhance arm function</article-title>
.
<source>Ann. N.Y. Acad. Sci</source>
.
<volume>1252</volume>
,
<fpage>305</fpage>
<lpage>311</lpage>
<pub-id pub-id-type="doi">10.1111/j.1749-6632.2011.06403.x</pub-id>
<pub-id pub-id-type="pmid">22524372</pub-id>
</mixed-citation>
</ref>
<ref id="B76">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Vines</surname>
<given-names>B. W.</given-names>
</name>
<name>
<surname>Krumhansl</surname>
<given-names>C. L.</given-names>
</name>
<name>
<surname>Wanderley</surname>
<given-names>M. M.</given-names>
</name>
<name>
<surname>Levitin</surname>
<given-names>D. J.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Cross-modal interactions in the perception of musical performance</article-title>
.
<source>Cognition</source>
<volume>101</volume>
,
<fpage>80</fpage>
<lpage>113</lpage>
<pub-id pub-id-type="doi">10.1016/j.cognition.2005.09.003</pub-id>
<pub-id pub-id-type="pmid">16289067</pub-id>
</mixed-citation>
</ref>
<ref id="B77">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wagner</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Dobkins</surname>
<given-names>K. R.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Synaesthetic associations decrease during infancy</article-title>
.
<source>Psychol. Sci</source>
.
<volume>22</volume>
,
<fpage>1067</fpage>
<lpage>1072</lpage>
<pub-id pub-id-type="doi">10.1177/0956797611416250</pub-id>
<pub-id pub-id-type="pmid">21771964</pub-id>
</mixed-citation>
</ref>
<ref id="B78">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Walker</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Bremner</surname>
<given-names>J. G.</given-names>
</name>
<name>
<surname>Mason</surname>
<given-names>U.</given-names>
</name>
<name>
<surname>Spring</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Mattock</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Slater</surname>
<given-names>A.</given-names>
</name>
<etal></etal>
</person-group>
(
<year>2010</year>
).
<article-title>Preverbal infants' sensitivity to synaesthetic cross-modality correspondences</article-title>
.
<source>Psychol. Sci</source>
.
<volume>21</volume>
,
<fpage>21</fpage>
<lpage>25</lpage>
<pub-id pub-id-type="doi">10.1177/0956797609354734</pub-id>
<pub-id pub-id-type="pmid">20424017</pub-id>
</mixed-citation>
</ref>
<ref id="B79">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Walker</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Smith</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>1986</year>
).
<article-title>The basis of Stroop interference involving the multimodal correlates of auditory pitch</article-title>
.
<source>Perception</source>
<volume>15</volume>
,
<fpage>491</fpage>
<lpage>496</lpage>
<pub-id pub-id-type="doi">10.1068/p150491</pub-id>
<pub-id pub-id-type="pmid">3822735</pub-id>
</mixed-citation>
</ref>
<ref id="B80">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Walker</surname>
<given-names>R.</given-names>
</name>
</person-group>
(
<year>1987</year>
).
<article-title>The effects of culture, environment, age, and musical training on choices of visual metaphors for sound</article-title>
.
<source>Percept. Psychophys</source>
.
<volume>42</volume>
,
<fpage>491</fpage>
<lpage>502</lpage>
<pub-id pub-id-type="doi">10.3758/BF03209757</pub-id>
<pub-id pub-id-type="pmid">2447557</pub-id>
</mixed-citation>
</ref>
<ref id="B81">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Widmann</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Kujala</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Tervaniemi</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Kujala</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Schröger</surname>
<given-names>E.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>From symbols to sounds: visual symbolic information activates sound representations</article-title>
.
<source>Psychophysiology</source>
<volume>41</volume>
,
<fpage>709</fpage>
<lpage>715</lpage>
<pub-id pub-id-type="doi">10.1111/j.1469-8986.2004.00208.x</pub-id>
<pub-id pub-id-type="pmid">15318877</pub-id>
</mixed-citation>
</ref>
</ref-list>
</back>
</pmc>
<affiliations>
<list></list>
<tree>
<noCountry>
<name sortKey="Kussner, Mats B" sort="Kussner, Mats B" uniqKey="Kussner M" first="Mats B." last="Küssner">Mats B. Küssner</name>
<name sortKey="Leech Wilkinson, Daniel" sort="Leech Wilkinson, Daniel" uniqKey="Leech Wilkinson D" first="Daniel" last="Leech-Wilkinson">Daniel Leech-Wilkinson</name>
<name sortKey="Prior, Helen M" sort="Prior, Helen M" uniqKey="Prior H" first="Helen M." last="Prior">Helen M. Prior</name>
<name sortKey="Tidhar, Dan" sort="Tidhar, Dan" uniqKey="Tidhar D" first="Dan" last="Tidhar">Dan Tidhar</name>
</noCountry>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Ncbi/Merge
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 003210 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd -nk 003210 | SxmlIndent | more
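
As a variant (a minimal sketch that only combines the commands above with standard shell redirection; the output filename 003210.xml is arbitrary), the indented record can be written to a file instead of being paged:

HfdSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd -nk 003210 | SxmlIndent > 003210.xml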

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Ncbi
   |étape=   Merge
   |type=    RBID
   |clé=     PMC:4112934
   |texte=   Musicians are more consistent: Gestural cross-modal mappings of pitch, loudness and tempo in real-time
}}
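
Each parameter of this template mirrors metadata visible in the record itself: the host wiki (Ticri/CIDE), the exploration area (HapticV1), the data flux (Ncbi), the processing step (Merge), and the record's RBID key (PMC:4112934).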

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/RBID.i   -Sk "pubmed:25120506" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 
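
The same pipeline restated with a leading comment (the stage descriptions are inferred from the command itself, not from Dilib documentation):

# Resolve the PubMed identifier to the record via the RBID index, fetch the
# full record from the bibliographic base, and render it as wiki pages for
# the HapticV1 area.
HfdIndexSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/RBID.i -Sk "pubmed:25120506" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd \
       | NlmPubMed2Wicri -a HapticV1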

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024