Exploration server on haptic devices

Warning: this site is under development!
Warning: this site is generated computationally from raw corpora.
The information is therefore not validated.

Thresholds of Auditory-Motor Coupling Measured with a Simple Task in Musicians and Non-Musicians: Was the Sound Simultaneous to the Key Press?

Internal identifier: 002330 (Pmc/Curation); previous: 002329; next: 002331

Thresholds of Auditory-Motor Coupling Measured with a Simple Task in Musicians and Non-Musicians: Was the Sound Simultaneous to the Key Press?

Authors: Floris T. Van Vugt [France, Germany]; Barbara Tillmann [France]

Source:

RBID : PMC:3911931

Abstract

The human brain is able to predict the sensory effects of its actions. But how precise are these predictions? The present research proposes a tool to measure thresholds between a simple action (keystroke) and a resulting sound. On each trial, participants were required to press a key. Upon each keystroke, a woodblock sound was presented. In some trials, the sound came immediately with the downward keystroke; at other times, it was delayed by a varying amount of time. Participants were asked to verbally report whether the sound came immediately or was delayed. Participants' delay detection thresholds (in msec) were measured with a staircase-like procedure. We hypothesised that musicians would have a lower threshold than non-musicians. Comparing pianists and brass players, we furthermore hypothesised that, as a result of a sharper attack of the timbre of their instrument, pianists might have lower thresholds than brass players. Our results show that non-musicians exhibited higher thresholds for delay detection (180±104 ms) than the two groups of musicians (102±65 ms), but there were no differences between pianists and brass players. The variance in delay detection thresholds could be explained by variance in sensorimotor synchronisation capacities as well as variance in a purely auditory temporal irregularity detection measure. This suggests that the brain's capacity to generate temporal predictions of sensory consequences can be decomposed into general temporal prediction capacities together with auditory-motor coupling. These findings indicate that the brain has a relatively large window of integration within which an action and its resulting effect are judged as simultaneous. Furthermore, musical expertise may narrow this window down, potentially due to a more refined temporal prediction. 
This novel paradigm provides a simple test to estimate the temporal precision of auditory-motor action-effect coupling, and the paradigm can readily be incorporated in studies investigating both healthy and patient populations.
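The staircase-like procedure mentioned in the abstract can be illustrated with a minimal sketch. This is a generic 1-up/1-down adaptive rule under assumed parameters (starting delay, step size, number of reversals); the study's exact staircase rules are not specified here, so all names and values below are illustrative, not the authors' implementation.

```python
def staircase_threshold(detects, start=400.0, step=40.0, n_reversals=8):
    """Estimate a delay-detection threshold (in ms) with a 1-up/1-down staircase.

    detects(delay_ms) -> bool: True if the participant reports the sound
    as delayed at this action-sound delay (here a simulated observer).
    """
    delay = start
    direction = None
    reversals = []
    while len(reversals) < n_reversals:
        # Detected: make the task harder (shorter delay); missed: easier.
        new_dir = -1 if detects(delay) else +1
        # A change of direction is a reversal; record the delay at which it occurs.
        if direction is not None and new_dir != direction:
            reversals.append(delay)
        direction = new_dir
        delay = max(0.0, delay + new_dir * step)
    # The threshold estimate is the mean of the reversal points.
    return sum(reversals) / len(reversals)

# Simulated deterministic observer whose true threshold is 150 ms:
# the staircase oscillates between 120 and 160 ms and averages to 140 ms.
estimate = staircase_threshold(lambda d: d >= 150.0)
```

With a real participant, `detects` would be replaced by the verbal report on each trial; in practice the step size is often reduced after early reversals to refine the estimate.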


Url:
DOI: 10.1371/journal.pone.0087176
PubMed: 24498299
PubMed Central: 3911931

Links to previous steps (curation, corpus...)


Links to Exploration step

PMC:3911931

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Thresholds of Auditory-Motor Coupling Measured with a Simple Task in Musicians and Non-Musicians: Was the Sound Simultaneous to the Key Press?</title>
<author>
<name sortKey="Van Vugt, Floris T" sort="Van Vugt, Floris T" uniqKey="Van Vugt F" first="Floris T." last="Van Vugt">Floris T. Van Vugt</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<addr-line>Lyon Neuroscience Research Center, Auditory Cognition and Psychoacoustics Team, CNRS-UMR 5292, INSERM U1028, University Claude Bernard Lyon-1, Lyon, France</addr-line>
</nlm:aff>
<country xml:lang="fr">France</country>
<wicri:regionArea>Lyon Neuroscience Research Center, Auditory Cognition and Psychoacoustics Team, CNRS-UMR 5292, INSERM U1028, University Claude Bernard Lyon-1, Lyon</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<addr-line>Institute of Music Physiology and Musicians' Medicine, University of Music, Drama and Media, Hannover, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Institute of Music Physiology and Musicians' Medicine, University of Music, Drama and Media, Hannover</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Tillmann, Barbara" sort="Tillmann, Barbara" uniqKey="Tillmann B" first="Barbara" last="Tillmann">Barbara Tillmann</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<addr-line>Lyon Neuroscience Research Center, Auditory Cognition and Psychoacoustics Team, CNRS-UMR 5292, INSERM U1028, University Claude Bernard Lyon-1, Lyon, France</addr-line>
</nlm:aff>
<country xml:lang="fr">France</country>
<wicri:regionArea>Lyon Neuroscience Research Center, Auditory Cognition and Psychoacoustics Team, CNRS-UMR 5292, INSERM U1028, University Claude Bernard Lyon-1, Lyon</wicri:regionArea>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">24498299</idno>
<idno type="pmc">3911931</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3911931</idno>
<idno type="RBID">PMC:3911931</idno>
<idno type="doi">10.1371/journal.pone.0087176</idno>
<date when="2014">2014</date>
<idno type="wicri:Area/Pmc/Corpus">002330</idno>
<idno type="wicri:Area/Pmc/Curation">002330</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Thresholds of Auditory-Motor Coupling Measured with a Simple Task in Musicians and Non-Musicians: Was the Sound Simultaneous to the Key Press?</title>
<author>
<name sortKey="Van Vugt, Floris T" sort="Van Vugt, Floris T" uniqKey="Van Vugt F" first="Floris T." last="Van Vugt">Floris T. Van Vugt</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<addr-line>Lyon Neuroscience Research Center, Auditory Cognition and Psychoacoustics Team, CNRS-UMR 5292, INSERM U1028, University Claude Bernard Lyon-1, Lyon, France</addr-line>
</nlm:aff>
<country xml:lang="fr">France</country>
<wicri:regionArea>Lyon Neuroscience Research Center, Auditory Cognition and Psychoacoustics Team, CNRS-UMR 5292, INSERM U1028, University Claude Bernard Lyon-1, Lyon</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<addr-line>Institute of Music Physiology and Musicians' Medicine, University of Music, Drama and Media, Hannover, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Institute of Music Physiology and Musicians' Medicine, University of Music, Drama and Media, Hannover</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Tillmann, Barbara" sort="Tillmann, Barbara" uniqKey="Tillmann B" first="Barbara" last="Tillmann">Barbara Tillmann</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<addr-line>Lyon Neuroscience Research Center, Auditory Cognition and Psychoacoustics Team, CNRS-UMR 5292, INSERM U1028, University Claude Bernard Lyon-1, Lyon, France</addr-line>
</nlm:aff>
<country xml:lang="fr">France</country>
<wicri:regionArea>Lyon Neuroscience Research Center, Auditory Cognition and Psychoacoustics Team, CNRS-UMR 5292, INSERM U1028, University Claude Bernard Lyon-1, Lyon</wicri:regionArea>
</affiliation>
</author>
</analytic>
<series>
<title level="j">PLoS ONE</title>
<idno type="eISSN">1932-6203</idno>
<imprint>
<date when="2014">2014</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>The human brain is able to predict the sensory effects of its actions. But how precise are these predictions? The present research proposes a tool to measure thresholds between a simple action (keystroke) and a resulting sound. On each trial, participants were required to press a key. Upon each keystroke, a woodblock sound was presented. In some trials, the sound came immediately with the downward keystroke; at other times, it was delayed by a varying amount of time. Participants were asked to verbally report whether the sound came immediately or was delayed. Participants' delay detection thresholds (in msec) were measured with a staircase-like procedure. We hypothesised that musicians would have a lower threshold than non-musicians. Comparing pianists and brass players, we furthermore hypothesised that, as a result of a sharper attack of the timbre of their instrument, pianists might have lower thresholds than brass players. Our results show that non-musicians exhibited higher thresholds for delay detection (180±104 ms) than the two groups of musicians (102±65 ms), but there were no differences between pianists and brass players. The variance in delay detection thresholds could be explained by variance in sensorimotor synchronisation capacities as well as variance in a purely auditory temporal irregularity detection measure. This suggests that the brain's capacity to generate temporal predictions of sensory consequences can be decomposed into general temporal prediction capacities together with auditory-motor coupling. These findings indicate that the brain has a relatively large window of integration within which an action and its resulting effect are judged as simultaneous. Furthermore, musical expertise may narrow this window down, potentially due to a more refined temporal prediction. 
This novel paradigm provides a simple test to estimate the temporal precision of auditory-motor action-effect coupling, and the paradigm can readily be incorporated in studies investigating both healthy and patient populations.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Blakemore, Sj" uniqKey="Blakemore S">SJ Blakemore</name>
</author>
<author>
<name sortKey="Rees, G" uniqKey="Rees G">G Rees</name>
</author>
<author>
<name sortKey="Frith, Cd" uniqKey="Frith C">CD Frith</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Blakemore, Sj" uniqKey="Blakemore S">SJ Blakemore</name>
</author>
<author>
<name sortKey="Wolpert, Dm" uniqKey="Wolpert D">DM Wolpert</name>
</author>
<author>
<name sortKey="Frith, Cd" uniqKey="Frith C">CD Frith</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Eliades, Sj" uniqKey="Eliades S">SJ Eliades</name>
</author>
<author>
<name sortKey="Wang, X" uniqKey="Wang X">X Wang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Martikainen, Mh" uniqKey="Martikainen M">MH Martikainen</name>
</author>
<author>
<name sortKey="Kaneko, K" uniqKey="Kaneko K">K Kaneko</name>
</author>
<author>
<name sortKey="Hari, R" uniqKey="Hari R">R Hari</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Aliu, So" uniqKey="Aliu S">SO Aliu</name>
</author>
<author>
<name sortKey="Houde, Jf" uniqKey="Houde J">JF Houde</name>
</author>
<author>
<name sortKey="Nagarajan, Ss" uniqKey="Nagarajan S">SS Nagarajan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fujisaki, W" uniqKey="Fujisaki W">W Fujisaki</name>
</author>
<author>
<name sortKey="Shimojo, S" uniqKey="Shimojo S">S Shimojo</name>
</author>
<author>
<name sortKey="Kashino, M" uniqKey="Kashino M">M Kashino</name>
</author>
<author>
<name sortKey="Nishida, S" uniqKey="Nishida S">S Nishida</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kuling, Ia" uniqKey="Kuling I">IA Kuling</name>
</author>
<author>
<name sortKey="Van Eijk, Rlj" uniqKey="Van Eijk R">RLJ van Eijk</name>
</author>
<author>
<name sortKey="Juola, Jf" uniqKey="Juola J">JF Juola</name>
</author>
<author>
<name sortKey="Kohlrausch, A" uniqKey="Kohlrausch A">A Kohlrausch</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tanaka, A" uniqKey="Tanaka A">A Tanaka</name>
</author>
<author>
<name sortKey="Asakawa, K" uniqKey="Asakawa K">K Asakawa</name>
</author>
<author>
<name sortKey="Imai, H" uniqKey="Imai H">H Imai</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Yamamoto, S" uniqKey="Yamamoto S">S Yamamoto</name>
</author>
<author>
<name sortKey="Miyazaki, M" uniqKey="Miyazaki M">M Miyazaki</name>
</author>
<author>
<name sortKey="Iwano, T" uniqKey="Iwano T">T Iwano</name>
</author>
<author>
<name sortKey="Kitazawa, S" uniqKey="Kitazawa S">S Kitazawa</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Freeman, Ed" uniqKey="Freeman E">ED Freeman</name>
</author>
<author>
<name sortKey="Ipser, A" uniqKey="Ipser A">A Ipser</name>
</author>
<author>
<name sortKey="Palmbaha, A" uniqKey="Palmbaha A">A Palmbaha</name>
</author>
<author>
<name sortKey="Paunoiu, D" uniqKey="Paunoiu D">D Paunoiu</name>
</author>
<author>
<name sortKey="Brown, P" uniqKey="Brown P">P Brown</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Keetels, M" uniqKey="Keetels M">M Keetels</name>
</author>
<author>
<name sortKey="Vroomen, J" uniqKey="Vroomen J">J Vroomen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rohde, M" uniqKey="Rohde M">M Rohde</name>
</author>
<author>
<name sortKey="Ernst, Mo" uniqKey="Ernst M">MO Ernst</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sugano, Y" uniqKey="Sugano Y">Y Sugano</name>
</author>
<author>
<name sortKey="Keetels, M" uniqKey="Keetels M">M Keetels</name>
</author>
<author>
<name sortKey="Vroomen, J" uniqKey="Vroomen J">J Vroomen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Yamamoto, K" uniqKey="Yamamoto K">K Yamamoto</name>
</author>
<author>
<name sortKey="Kawabata, H" uniqKey="Kawabata H">H Kawabata</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Exner, S" uniqKey="Exner S">S Exner</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ben Artzi, E" uniqKey="Ben Artzi E">E Ben-Artzi</name>
</author>
<author>
<name sortKey="Fostick, L" uniqKey="Fostick L">L Fostick</name>
</author>
<author>
<name sortKey="Babkoff, H" uniqKey="Babkoff H">H Babkoff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fostick, L" uniqKey="Fostick L">L Fostick</name>
</author>
<author>
<name sortKey="Babkoff, H" uniqKey="Babkoff H">H Babkoff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Szymaszek, A" uniqKey="Szymaszek A">A Szymaszek</name>
</author>
<author>
<name sortKey="Szelag, E" uniqKey="Szelag E">E Szelag</name>
</author>
<author>
<name sortKey="Sliwowska, M" uniqKey="Sliwowska M">M Sliwowska</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hirsh, Ij" uniqKey="Hirsh I">IJ Hirsh</name>
</author>
<author>
<name sortKey="Sherrick, Ce" uniqKey="Sherrick C">CE Sherrick</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zampini, M" uniqKey="Zampini M">M Zampini</name>
</author>
<author>
<name sortKey="Shore, Di" uniqKey="Shore D">DI Shore</name>
</author>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C Spence</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Frissen, I" uniqKey="Frissen I">I Frissen</name>
</author>
<author>
<name sortKey="Ziat, M" uniqKey="Ziat M">M Ziat</name>
</author>
<author>
<name sortKey="Campion, G" uniqKey="Campion G">G Campion</name>
</author>
<author>
<name sortKey="Hayward, V" uniqKey="Hayward V">V Hayward</name>
</author>
<author>
<name sortKey="Guastavino, C" uniqKey="Guastavino C">C Guastavino</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Garcia Perez, Ma" uniqKey="Garcia Perez M">MA García-Pérez</name>
</author>
<author>
<name sortKey="Alcala Quintana, R" uniqKey="Alcala Quintana R">R Alcalá-Quintana</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Weiss, K" uniqKey="Weiss K">K Weiss</name>
</author>
<author>
<name sortKey="Scharlau, I" uniqKey="Scharlau I">I Scharlau</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vatakis, A" uniqKey="Vatakis A">A Vatakis</name>
</author>
<author>
<name sortKey="Navarra, J" uniqKey="Navarra J">J Navarra</name>
</author>
<author>
<name sortKey="Soto Faraco, S" uniqKey="Soto Faraco S">S Soto-Faraco</name>
</author>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C Spence</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Donohue, Se" uniqKey="Donohue S">SE Donohue</name>
</author>
<author>
<name sortKey="Woldorff, Mg" uniqKey="Woldorff M">MG Woldorff</name>
</author>
<author>
<name sortKey="Mitroff, Sr" uniqKey="Mitroff S">SR Mitroff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gates, A" uniqKey="Gates A">A Gates</name>
</author>
<author>
<name sortKey="Bradshaw, Jl" uniqKey="Bradshaw J">JL Bradshaw</name>
</author>
<author>
<name sortKey="Nettleton, Nc" uniqKey="Nettleton N">NC Nettleton</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pfordresher, P" uniqKey="Pfordresher P">P Pfordresher</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pfordresher, P" uniqKey="Pfordresher P">P Pfordresher</name>
</author>
<author>
<name sortKey="Palmer, C" uniqKey="Palmer C">C Palmer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stuart, A" uniqKey="Stuart A">A Stuart</name>
</author>
<author>
<name sortKey="Kalinowski, J" uniqKey="Kalinowski J">J Kalinowski</name>
</author>
<author>
<name sortKey="Rastatter, Mp" uniqKey="Rastatter M">MP Rastatter</name>
</author>
<author>
<name sortKey="Lynch, K" uniqKey="Lynch K">K Lynch</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Yates, Aj" uniqKey="Yates A">AJ Yates</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kaspar, K" uniqKey="Kaspar K">K Kaspar</name>
</author>
<author>
<name sortKey="Rubeling, H" uniqKey="Rubeling H">H Rübeling</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Swink, S" uniqKey="Swink S">S Swink</name>
</author>
<author>
<name sortKey="Stuart, A" uniqKey="Stuart A">A Stuart</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Repp, Bh" uniqKey="Repp B">BH Repp</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Repp, Bh" uniqKey="Repp B">BH Repp</name>
</author>
<author>
<name sortKey="Su, Y H" uniqKey="Su Y">Y-H Su</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tillmann, B" uniqKey="Tillmann B">B Tillmann</name>
</author>
<author>
<name sortKey="Stevens, C" uniqKey="Stevens C">C Stevens</name>
</author>
<author>
<name sortKey="Keller, Pe" uniqKey="Keller P">PE Keller</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wing, Am" uniqKey="Wing A">AM Wing</name>
</author>
<author>
<name sortKey="Kristofferson, Ab" uniqKey="Kristofferson A">AB Kristofferson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ehrle, N" uniqKey="Ehrle N">N Ehrlé</name>
</author>
<author>
<name sortKey="Samson, S" uniqKey="Samson S">S Samson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Yee, W" uniqKey="Yee W">W Yee</name>
</author>
<author>
<name sortKey="Holleran, S" uniqKey="Holleran S">S Holleran</name>
</author>
<author>
<name sortKey="Jones, Mr" uniqKey="Jones M">MR Jones</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Aschersleben, G" uniqKey="Aschersleben G">G Aschersleben</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Repp, Bh" uniqKey="Repp B">BH Repp</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hyde, Kl" uniqKey="Hyde K">KL Hyde</name>
</author>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I Peretz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Green, Dm" uniqKey="Green D">DM Green</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gu, X" uniqKey="Gu X">X Gu</name>
</author>
<author>
<name sortKey="Green, Dm" uniqKey="Green D">DM Green</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Saberi, K" uniqKey="Saberi K">K Saberi</name>
</author>
<author>
<name sortKey="Green, Dm" uniqKey="Green D">DM Green</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Leek, Mr" uniqKey="Leek M">MR Leek</name>
</author>
<author>
<name sortKey="Dubno, Jr" uniqKey="Dubno J">JR Dubno</name>
</author>
<author>
<name sortKey="He, N" uniqKey="He N">N He</name>
</author>
<author>
<name sortKey="Ahlstrom, Jb" uniqKey="Ahlstrom J">JB Ahlstrom</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Drewing, K" uniqKey="Drewing K">K Drewing</name>
</author>
<author>
<name sortKey="Stenneken, P" uniqKey="Stenneken P">P Stenneken</name>
</author>
<author>
<name sortKey="Cole, J" uniqKey="Cole J">J Cole</name>
</author>
<author>
<name sortKey="Prinz, W" uniqKey="Prinz W">W Prinz</name>
</author>
<author>
<name sortKey="Aschersleben, G" uniqKey="Aschersleben G">G Aschersleben</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Helmuth, Ll" uniqKey="Helmuth L">LL Helmuth</name>
</author>
<author>
<name sortKey="Ivry, Rb" uniqKey="Ivry R">RB Ivry</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Keele, Sw" uniqKey="Keele S">SW Keele</name>
</author>
<author>
<name sortKey="Pokorny, Ra" uniqKey="Pokorny R">RA Pokorny</name>
</author>
<author>
<name sortKey="Corcos, Dm" uniqKey="Corcos D">DM Corcos</name>
</author>
<author>
<name sortKey="Ivry, R" uniqKey="Ivry R">R Ivry</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bakeman, R" uniqKey="Bakeman R">R Bakeman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Friston, K" uniqKey="Friston K">K Friston</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stevenson, Ra" uniqKey="Stevenson R">RA Stevenson</name>
</author>
<author>
<name sortKey="Wallace, Mt" uniqKey="Wallace M">MT Wallace</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kraus, N" uniqKey="Kraus N">N Kraus</name>
</author>
<author>
<name sortKey="Chandrasekaran, B" uniqKey="Chandrasekaran B">B Chandrasekaran</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gaser, C" uniqKey="Gaser C">C Gaser</name>
</author>
<author>
<name sortKey="Schlaug, G" uniqKey="Schlaug G">G Schlaug</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Herholz, Sc" uniqKey="Herholz S">SC Herholz</name>
</author>
<author>
<name sortKey="Zatorre, Rj" uniqKey="Zatorre R">RJ Zatorre</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Alais, D" uniqKey="Alais D">D Alais</name>
</author>
<author>
<name sortKey="Cass, J" uniqKey="Cass J">J Cass</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Powers, Ar" uniqKey="Powers A">AR Powers</name>
</author>
<author>
<name sortKey="Hillock, Ar" uniqKey="Hillock A">AR Hillock</name>
</author>
<author>
<name sortKey="Wallace, Mt" uniqKey="Wallace M">MT Wallace</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Haggard, P" uniqKey="Haggard P">P Haggard</name>
</author>
<author>
<name sortKey="Clark, S" uniqKey="Clark S">S Clark</name>
</author>
<author>
<name sortKey="Kalogeras, J" uniqKey="Kalogeras J">J Kalogeras</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">PLoS One</journal-id>
<journal-id journal-id-type="iso-abbrev">PLoS ONE</journal-id>
<journal-id journal-id-type="publisher-id">plos</journal-id>
<journal-id journal-id-type="pmc">plosone</journal-id>
<journal-title-group>
<journal-title>PLoS ONE</journal-title>
</journal-title-group>
<issn pub-type="epub">1932-6203</issn>
<publisher>
<publisher-name>Public Library of Science</publisher-name>
<publisher-loc>San Francisco, USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">24498299</article-id>
<article-id pub-id-type="pmc">3911931</article-id>
<article-id pub-id-type="publisher-id">PONE-D-13-42681</article-id>
<article-id pub-id-type="doi">10.1371/journal.pone.0087176</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Research Article</subject>
</subj-group>
<subj-group subj-group-type="Discipline-v2">
<subject>Biology</subject>
<subj-group>
<subject>Neuroscience</subject>
<subj-group>
<subject>Cognitive Neuroscience</subject>
<subj-group>
<subject>Motor Reactions</subject>
</subj-group>
</subj-group>
<subj-group>
<subject>Neurophysiology</subject>
<subj-group>
<subject>Motor Systems</subject>
</subj-group>
</subj-group>
<subj-group>
<subject>Sensory Perception</subject>
<subj-group>
<subject>Psychoacoustics</subject>
<subject>Psychophysics</subject>
</subj-group>
</subj-group>
<subj-group>
<subject>Sensory Systems</subject>
<subj-group>
<subject>Auditory System</subject>
</subj-group>
</subj-group>
<subj-group>
<subject>Behavioral Neuroscience</subject>
<subject>Motor Systems</subject>
</subj-group>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Thresholds of Auditory-Motor Coupling Measured with a Simple Task in Musicians and Non-Musicians: Was the Sound Simultaneous to the Key Press?</article-title>
<alt-title alt-title-type="running-head">Keystroke-Sound Simultaneity Thresholds</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>van Vugt</surname>
<given-names>Floris T.</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<xref ref-type="corresp" rid="cor1">
<sup>*</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Tillmann</surname>
<given-names>Barbara</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
</contrib>
</contrib-group>
<aff id="aff1">
<label>1</label>
<addr-line>Lyon Neuroscience Research Center, Auditory Cognition and Psychoacoustics Team, CNRS-UMR 5292, INSERM U1028, University Claude Bernard Lyon-1, Lyon, France</addr-line>
</aff>
<aff id="aff2">
<label>2</label>
<addr-line>Institute of Music Physiology and Musicians' Medicine, University of Music, Drama and Media, Hannover, Germany</addr-line>
</aff>
<contrib-group>
<contrib contrib-type="editor">
<name>
<surname>Larson</surname>
<given-names>Charles R.</given-names>
</name>
<role>Editor</role>
<xref ref-type="aff" rid="edit1"></xref>
</contrib>
</contrib-group>
<aff id="edit1">
<addr-line>Northwestern University, United States of America</addr-line>
</aff>
<author-notes>
<corresp id="cor1">* E-mail:
<email>f.t.vanvugt@gmail.com</email>
</corresp>
<fn fn-type="conflict">
<p>
<bold>Competing Interests: </bold>
The authors have declared that no competing interests exist.</p>
</fn>
<fn fn-type="con">
<p>Conceived and designed the experiments: FTVV BT. Performed the experiments: FTVV. Analyzed the data: FTVV. Wrote the paper: FTVV BT.</p>
</fn>
</author-notes>
<pub-date pub-type="collection">
<year>2014</year>
</pub-date>
<pub-date pub-type="epub">
<day>3</day>
<month>2</month>
<year>2014</year>
</pub-date>
<volume>9</volume>
<issue>2</issue>
<elocation-id>e87176</elocation-id>
<history>
<date date-type="received">
<day>18</day>
<month>10</month>
<year>2013</year>
</date>
<date date-type="accepted">
<day>19</day>
<month>12</month>
<year>2013</year>
</date>
</history>
<permissions>
<copyright-year>2014</copyright-year>
<copyright-holder>van Vugt, Tillmann</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<license-p>This is an open-access article distributed under the terms of the
<ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution License</ext-link>
, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.</license-p>
</license>
</permissions>
<abstract>
<p>The human brain is able to predict the sensory effects of its actions. But how precise are these predictions? The present research proposes a tool to measure thresholds between a simple action (keystroke) and a resulting sound. On each trial, participants were required to press a key. Upon each keystroke, a woodblock sound was presented. In some trials, the sound came immediately with the downward keystroke; at other times, it was delayed by a varying amount of time. Participants were asked to verbally report whether the sound came immediately or was delayed. Participants' delay detection thresholds (in msec) were measured with a staircase-like procedure. We hypothesised that musicians would have a lower threshold than non-musicians. Comparing pianists and brass players, we furthermore hypothesised that, as a result of a sharper attack of the timbre of their instrument, pianists might have lower thresholds than brass players. Our results show that non-musicians exhibited higher thresholds for delay detection (180±104 ms) than the two groups of musicians (102±65 ms), but there were no differences between pianists and brass players. The variance in delay detection thresholds could be explained by variance in sensorimotor synchronisation capacities as well as variance in a purely auditory temporal irregularity detection measure. This suggests that the brain's capacity to generate temporal predictions of sensory consequences can be decomposed into general temporal prediction capacities together with auditory-motor coupling. These findings indicate that the brain has a relatively large window of integration within which an action and its resulting effect are judged as simultaneous. Furthermore, musical expertise may narrow this window down, potentially due to a more refined temporal prediction. 
This novel paradigm provides a simple test to estimate the temporal precision of auditory-motor action-effect coupling, and the paradigm can readily be incorporated in studies investigating both healthy and patient populations.</p>
</abstract>
<funding-group>
<funding-statement>This work was supported by the EBRAMUS, European Brain and Music ITN Grant (ITN MC FP7, GA 238157). The team “Auditory cognition and psychoacoustics” is part of the LabEx CeLyA (“Centre Lyonnais d'Acoustique”, ANR-10-LABX-60). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.</funding-statement>
</funding-group>
<counts>
<page-count count="8"></page-count>
</counts>
</article-meta>
</front>
<body>
<sec id="s1">
<title>Introduction</title>
<p>Many motor actions have sensory consequences. For example, we see our hands displace when we move them, and our steps make sounds. The human brain is able to predict the sensory effects of its actions
<xref rid="pone.0087176-Blakemore1" ref-type="bibr">[1]</xref>
<xref rid="pone.0087176-Eliades1" ref-type="bibr">[3]</xref>
. These predictions are crucial for distinguishing between sensory information that is generated by oneself and sensory information coming from outside. In particular, self-produced sensory effects are suppressed in comparison with externally produced effects
<xref rid="pone.0087176-Martikainen1" ref-type="bibr">[4]</xref>
.</p>
<p>The brain is able to predict not only
<italic>what</italic>
sensory event will follow its action, but also
<italic>when</italic>
it is supposed to occur. This is evident from the observation that self-produced sensory effects are no longer suppressed when they are delayed by several hundreds of milliseconds
<xref rid="pone.0087176-Aliu1" ref-type="bibr">[5]</xref>
. Furthermore, the temporal prediction is not fixed, but adaptive to the situation. For example, the point of subjective synchrony (PSS) between various sensory events can be recalibrated, even to the extent that the physical order of events can be inverted
<xref rid="pone.0087176-Fujisaki1" ref-type="bibr">[6]</xref>
– <xref rid="pone.0087176-Yamamoto1" ref-type="bibr">[9]</xref>
. Lesions may affect subjective synchrony as shown by the intriguing case of a man who hears people speak before their lips move
<xref rid="pone.0087176-Freeman1" ref-type="bibr">[10]</xref>
. Synchrony can also be recalibrated between sensory and (active) motor events
<xref rid="pone.0087176-Keetels1" ref-type="bibr">[11]</xref>
<xref rid="pone.0087176-Yamamoto2" ref-type="bibr">[14]</xref>
.</p>
<p>But how precise are these predictions and the perceived synchrony between pairs of sensory events, or between motor and sensory events? Common experimental paradigms to measure this precision ask participants either to judge whether two stimuli are simultaneous (simultaneity judgement task, SJ) or to report the order of two stimuli (temporal order judgement task, TOJ). In the TOJ task, precision is measured as the just-noticeable difference (JND) between the two potential orderings. Asynchrony detection thresholds vary according to the sensory modalities that are tested. For instance, humans can distinguish two auditory clicks presented to the same ear when they are separated by 2 msec, but at least 60 msec are needed to distinguish them binaurally
<xref rid="pone.0087176-Exner1" ref-type="bibr">[15]</xref>
. Typical thresholds for TOJ between two auditory stimuli are inter-stimulus-intervals (ISIs) of 20 to 60 msec, probably depending on the stimulus type
<xref rid="pone.0087176-BenArtzi1" ref-type="bibr">[16]</xref>
– <xref rid="pone.0087176-Szymaszek1" ref-type="bibr">[18]</xref>
. Thresholds for TOJ between auditory (tone) and visual (flash) stimuli are typically between 25 and 50 msec
<xref rid="pone.0087176-Hirsh1" ref-type="bibr">[19]</xref>
,
<xref rid="pone.0087176-Zampini1" ref-type="bibr">[20]</xref>
. Auditory-haptic JNDs are usually around 100 msec, and haptic-haptic JNDs are around 50 msec
<xref rid="pone.0087176-Frissen1" ref-type="bibr">[21]</xref>
. Although the SJ and TOJ tasks often give different results, thresholds for the SJ task tend to be smaller than those for the TOJ task
<xref rid="pone.0087176-GarcaPrez1" ref-type="bibr">[22]</xref>
,
<xref rid="pone.0087176-Weiss1" ref-type="bibr">[23]</xref>
. This led to the dominant view that the SJ and TOJ tasks probably measure different underlying processes
<xref rid="pone.0087176-Vatakis1" ref-type="bibr">[24]</xref>
. Furthermore, training plays a role in shaping sensitivities, as is shown by video game players having smaller thresholds for audio-visual simultaneity judgements than non-players
<xref rid="pone.0087176-Donohue1" ref-type="bibr">[25]</xref>
.</p>
<p>It remains unclear how sensitive participants are to the synchrony between events that they actively produce (such as keystrokes) and their sensory consequences (such as tones). Previously, this question has been studied by investigating the effects of altering the sensory feedback of a produced action. For instance, musicians' timing performance was measured when they played on a piano that emitted the played sounds with a delay. Large delays (such as 200 msec) are noticeable and disrupt the fluidity of performance
<xref rid="pone.0087176-Gates1" ref-type="bibr">[26]</xref>
– <xref rid="pone.0087176-Pfordresher2" ref-type="bibr">[28]</xref>
. Speakers' fluency is similarly affected when auditory feedback is delayed
<xref rid="pone.0087176-Stuart1" ref-type="bibr">[29]</xref>
– <xref rid="pone.0087176-Swink1" ref-type="bibr">[32]</xref>
. In order to be able to assess quantitatively whether disruptions in auditory feedback are noticeable and to investigate the effect of training and expertise, there is a need for an experimental paradigm that can establish thresholds for action-effect synchrony judgements.</p>
<p>The present research proposes a new tool to measure delay detection thresholds between a simple action and an emitted sound. In this task, participants are asked on each trial to press a key. Either immediately or after a predetermined duration has elapsed, a sound is presented through participants' headphones. Our aim was to measure the thresholds for detecting a delay between the keystroke and the sound, and to investigate the effect of expertise. In addition, our aim was to establish how this action-effect synchrony sensitivity relates to other auditory and auditory-motor capacities. To this end, our participants also performed, firstly, an auditory temporal deviant detection task, and secondly, a sensorimotor synchronisation task. That is, we measured how well they could synchronise their movements to an external stimulus
<xref rid="pone.0087176-Repp1" ref-type="bibr">[33]</xref>
,
<xref rid="pone.0087176-Repp2" ref-type="bibr">[34]</xref>
. For this, we used a variation of the synchronisation-continuation tapping paradigm
<xref rid="pone.0087176-Tillmann1" ref-type="bibr">[35]</xref>
,
<xref rid="pone.0087176-Wing1" ref-type="bibr">[36]</xref>
. All tasks were performed by non-musicians and by pianist and brass player musicians. It has previously been reported that musicians outperform non-musicians in auditory discrimination
<xref rid="pone.0087176-Ehrl1" ref-type="bibr">[37]</xref>
,
<xref rid="pone.0087176-Yee1" ref-type="bibr">[38]</xref>
, and that they tap closer to the beat and more precisely
<xref rid="pone.0087176-Aschersleben1" ref-type="bibr">[39]</xref>
,
<xref rid="pone.0087176-Repp3" ref-type="bibr">[40]</xref>
. We further hypothesised that the relation between finger movements and sounds for the musicians' main instrument might influence the delay detection thresholds too: when pianists strike a key the sound is instantaneous, whereas brass players' sound onset is determined by their respiration. Also, the piano sound has a sharper onset than the brass sound. As a result, we expected that pianists would have lower thresholds than brass players. We had also considered singers as an alternative to the brass players, but found that they tend to have a large amount of piano training as their secondary instrument, which would be a confound for the comparison with pianists.</p>
</sec>
<sec sec-type="materials|methods" id="s2">
<title>Materials and Methods</title>
<sec id="s2a">
<title>Ethics statement</title>
<p>The experiment was approved by the ethics committee of the University of Music, Drama and Media and was in line with the Declaration of Helsinki. Participants provided written informed consent.</p>
</sec>
<sec id="s2b">
<title>Participants</title>
<p>We recruited two groups of musician participants from the student pool at the Hanover Music University and from among young professionals. We furthermore recruited non-musicians in the same age range.
<xref ref-type="table" rid="pone-0087176-t001">Table 1</xref>
lists biographical and questionnaire data of each group. Participants reported no hearing impairment or neurological disorder, were aged between 18 and 40 years and right-handed. The musician participants were recruited in two groups: one group whose primary instrument was the piano (or who were professional pianists) and another group with a main instrument from the brass family (e.g., trumpet, trombone or tuba). A further criterion for inclusion in the non-musician group was having received less than 1 year of musical training (apart from obligatory courses in primary or secondary schools).</p>
<table-wrap id="pone-0087176-t001" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0087176.t001</object-id>
<label>Table 1</label>
<caption>
<title>Basic information about the three groups of participants.</title>
</caption>
<alternatives>
<graphic id="pone-0087176-t001-1" xlink:href="pone.0087176.t001"></graphic>
<table frame="hsides" rules="groups">
<colgroup span="1">
<col align="left" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
</colgroup>
<thead>
<tr>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Pianists</td>
<td align="left" rowspan="1" colspan="1">Brass</td>
<td align="left" rowspan="1" colspan="1">Nonmusicians</td>
<td align="left" rowspan="1" colspan="1"></td>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">N</td>
<td align="left" rowspan="1" colspan="1">20</td>
<td align="left" rowspan="1" colspan="1">18</td>
<td align="left" rowspan="1" colspan="1">18</td>
<td align="left" rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Gender (female/male)</td>
<td align="left" rowspan="1" colspan="1">10/10</td>
<td align="left" rowspan="1" colspan="1">7/11</td>
<td align="left" rowspan="1" colspan="1">8/10</td>
<td align="left" rowspan="1" colspan="1">χ
<sup>2</sup>
(2) = .47, p = .79</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Age (years)</td>
<td align="left" rowspan="1" colspan="1">26.1 (5.7)</td>
<td align="left" rowspan="1" colspan="1">24.9 (3.5)</td>
<td align="left" rowspan="1" colspan="1">26.2 (4.7)</td>
<td align="left" rowspan="1" colspan="1">Kruskal-Wallis χ
<sup>2</sup>
(2) = 1.07, p = .59</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Handedness (Handedness Quotient in %)</td>
<td align="left" rowspan="1" colspan="1">73.4 (19.9)</td>
<td align="left" rowspan="1" colspan="1">75.3 (16.5)</td>
<td align="left" rowspan="1" colspan="1">78.1 (20.0)</td>
<td align="left" rowspan="1" colspan="1">Kruskal-Wallis χ
<sup>2</sup>
(2) = 1.10, p = .58</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Capable of touch typing (number of participants in each of the following categories: 10 fingers/less than 10 fingers/none)</td>
<td align="left" rowspan="1" colspan="1">2/14/4</td>
<td align="left" rowspan="1" colspan="1">7/10/1</td>
<td align="left" rowspan="1" colspan="1">5/10/3</td>
<td align="left" rowspan="1" colspan="1">Kruskal-Wallis χ
<sup>2</sup>
(2) = 4.67, p = .10</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Video game use in hours per week (number of participants in each of the following categories: none/<1 h/1–7 h/>7 h)</td>
<td align="left" rowspan="1" colspan="1">16/3/1/0</td>
<td align="left" rowspan="1" colspan="1">10/7/1/0</td>
<td align="left" rowspan="1" colspan="1">13/1/3/1</td>
<td align="left" rowspan="1" colspan="1">Kruskal-Wallis χ
<sup>2</sup>
(2) = 2.09, p = .35</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Use of computer keyboards in hours per day (number of participants in each of the following categories: <1 h/1–2 h/>2 h)</td>
<td align="left" rowspan="1" colspan="1">10/8/2</td>
<td align="left" rowspan="1" colspan="1">7/9/2</td>
<td align="left" rowspan="1" colspan="1">5/2/11</td>
<td align="left" rowspan="1" colspan="1">Kruskal-Wallis χ
<sup>2</sup>
(2) = 7.84, p = .019*</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Capacity in using computer keyboards (self-rated 1–10)</td>
<td align="left" rowspan="1" colspan="1">6.3 (1.6)</td>
<td align="left" rowspan="1" colspan="1">6.6 (1.9)</td>
<td align="left" rowspan="1" colspan="1">6.7 (2.1)</td>
<td align="left" rowspan="1" colspan="1">Kruskal-Wallis χ
<sup>2</sup>
(2) = 0.87, p = .65</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Use of text messaging on cell phone in hours per week (number of participants in each of the following categories: none/<1 h/1–7 h/>7 h)</td>
<td align="left" rowspan="1" colspan="1">7/9/4</td>
<td align="left" rowspan="1" colspan="1">4/12/2</td>
<td align="left" rowspan="1" colspan="1">10/4/4</td>
<td align="left" rowspan="1" colspan="1">Kruskal-Wallis χ
<sup>2</sup>
(2) = 1.42, p = .49</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Capacity in using text messaging (self-rated 1–10)</td>
<td align="left" rowspan="1" colspan="1">7.3 (2.0)</td>
<td align="left" rowspan="1" colspan="1">6.8 (1.9)</td>
<td align="left" rowspan="1" colspan="1">6.2 (2.3)</td>
<td align="left" rowspan="1" colspan="1">Kruskal-Wallis χ
<sup>2</sup>
(2) = 2.41, p = .30</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Age of onset of musical training (years)</td>
<td align="left" rowspan="1" colspan="1">6.65 (2.2)</td>
<td align="left" rowspan="1" colspan="1">9.78 (3.1)</td>
<td align="left" rowspan="1" colspan="1">NA</td>
<td align="left" rowspan="1" colspan="1">t(30.5) = −3.56, p = .001**</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Accumulated practice time on principal instrument (×1,000 hours)</td>
<td align="left" rowspan="1" colspan="1">22.6 (10.5)</td>
<td align="left" rowspan="1" colspan="1">13.1 (8.1)</td>
<td align="left" rowspan="1" colspan="1">NA</td>
<td align="left" rowspan="1" colspan="1">t(35.3) = 3.15, p = .003**</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Years of musical practice</td>
<td align="left" rowspan="1" colspan="1">19.5 (5.6)</td>
<td align="left" rowspan="1" colspan="1">15.1 (3.6)</td>
<td align="left" rowspan="1" colspan="1">NA</td>
<td align="left" rowspan="1" colspan="1">t(32.6) = 2.90, p = .007**</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Current daily practice time (hours)</td>
<td align="left" rowspan="1" colspan="1">3.7 (2.2)</td>
<td align="left" rowspan="1" colspan="1">3.3 (1.8)</td>
<td align="left" rowspan="1" colspan="1">NA</td>
<td align="left" rowspan="1" colspan="1">t(35.6) = 0.68 p = .50</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Absolute hearing (yes/no; self-reported)</td>
<td align="left" rowspan="1" colspan="1">7/13</td>
<td align="left" rowspan="1" colspan="1">0/19</td>
<td align="left" rowspan="1" colspan="1">NA</td>
<td align="left" rowspan="1" colspan="1">Fisher Exact Test p = .009**</td>
</tr>
</tbody>
</table>
</alternatives>
<table-wrap-foot>
<fn id="nt101">
<label></label>
<p>Data is reported as mean (SD) unless otherwise specified. Uncorrected significance is indicated: *p<.05, **p<.01, ***p<.001.</p>
</fn>
</table-wrap-foot>
</table-wrap>
<p>Among the brass players, 13 had received piano instruction in the form of obligatory courses at the conservatory or in their childhood. For the entire brass group, the lifetime accumulated piano practice time was 1.1 (SD 1.8) thousand hours over an average total of 6.9 (SD 6.1) years.</p>
<p>Participants filled out a questionnaire with basic information such as age, handedness (according to the Edinburgh Handedness Inventory), and instrumental practice prior to their participation. Questionnaire results are reported in
<xref ref-type="table" rid="pone-0087176-t001">Table 1</xref>
. We found that basic biographical parameters did not differ except for computer keyboard use (Kruskal-Wallis χ
<sup>2</sup>
(2) = 7.84, p = .019 uncorr.). This effect indicated that the non-musician group reported spending more time per day using a computer keyboard than the pianists [Mann-Whitney U = 98.0, Z = 2.55, p = 0.01, r = 0.07] or the brass players [Mann-Whitney U = 96.5, Z = 2.20, p = 0.03, r = 0.06]. The two musician groups did not differ in their use of computer keyboards [Mann-Whitney U = 161.0, Z = 0.61, p = 0.54, r = 0.02].</p>
</sec>
<sec id="s2c">
<title>Materials</title>
<sec id="s2c1">
<title>Keystroke-sound delay detection task</title>
<p>We used a USB keypad (Hama Slimline Keypad SK110) that interfaced through the HID protocol with a python script. This script detected keystroke onsets and played a woodblock wave sound (duration: 63 msec) through headphones (Shure SRH440) after a predetermined duration. The woodblock sound was chosen because it has a relatively sharp onset while still being pleasant to hear.</p>
</sec>
<sec id="s2c2">
<title>Anisochrony detection</title>
<p>We used a python-pygame graphical user interface that presented the sounds (using pyAudio) and the instructions. Instructions were given orally as well. The five-tone sequences were generated as follows (adapted from
<xref rid="pone.0087176-Ehrl1" ref-type="bibr">[37]</xref>
,
<xref rid="pone.0087176-Hyde1" ref-type="bibr">[41]</xref>
). The base sequence consisted of five isochronous sine wave tones of 100 ms presented with an inter-onset-interval (IOI) of 350 ms. In some trials, the fourth tone was delayed by a certain amount but the fifth tone was always on time
<xref rid="pone.0087176-Ehrl1" ref-type="bibr">[37]</xref>
,
<xref rid="pone.0087176-Hyde1" ref-type="bibr">[41]</xref>
. That is, when the tone was delayed by an amount
<italic>d</italic>
, the third interval was longer by
<italic>d</italic>
msec and the fourth interval was shorter by
<italic>d</italic>
msec.</p>
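The onset scheme described above can be sketched as follows (a minimal illustration; the function name and parameter defaults are ours, not taken from the original stimulus scripts):

```python
def anisochrony_onsets(delay_ms, ioi_ms=350, n_tones=5):
    """Onset times (ms) of the five-tone sequence: the fourth tone is
    shifted by `delay_ms` while the fifth stays on the isochronous grid,
    so the third interval grows by d and the fourth shrinks by d."""
    onsets = [i * ioi_ms for i in range(n_tones)]
    onsets[3] += delay_ms  # delay only the fourth tone
    return onsets
```

For example, with d = 40 ms the four inter-onset intervals become 350, 350, 390 and 310 ms.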
</sec>
<sec id="s2c3">
<title>Synchronisation-Continuation Tapping</title>
<p>The synchronisation stimulus was generated offline as follows and saved to a wave file. First, we presented 4 finger snap sounds with an inter-onset-interval of 300 msec. Then 30 instances of the woodblock sound (the same as used during the learning part of the experiment) followed with an inter-onset-interval (IOI) of 600 msec. This was followed by a silence of 30*600 msec (the equivalent of 30 more taps). Finally, a high-pitched gong sound signalled the end of the trial. The sounds were played using a custom-developed python script, which also communicated through a HID-USB interface with the button box to register the responses.</p>
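The onset schedule of this stimulus can be sketched as follows. This is an illustration only: the gap between the last finger snap and the first woodblock is assumed here to be one woodblock IOI, which the text does not specify, and the original wave file may differ.

```python
def sync_stimulus_onsets(snap_ioi=300, n_snaps=4, wb_ioi=600,
                         n_woodblocks=30, n_silent=30):
    """Onset schedule (ms): finger snaps, woodblock metronome, a silent
    continuation period, then the end gong.  The snap-to-woodblock gap of
    one woodblock IOI is an assumption, not from the original script."""
    snaps = [i * snap_ioi for i in range(n_snaps)]
    woodblocks = [snaps[-1] + wb_ioi + i * wb_ioi for i in range(n_woodblocks)]
    # the gong sounds after the silent period equivalent to 30 more taps
    gong = woodblocks[-1] + wb_ioi + n_silent * wb_ioi
    return snaps, woodblocks, gong
```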
<p>Participants' finger taps were recorded using a custom tapping surface containing a (piezo-based) contact sensor that communicated with the computer through the serial interface and was captured in a python program that also presented the stimuli using pyAudio.</p>
</sec>
</sec>
<sec id="s2d">
<title>Procedure</title>
<sec id="s2d1">
<title>Keystroke-sound delay detection task</title>
<p>In the delay detection task, we measured participants' sensitivity to delays between motor (keystroke) and auditory (tone) events. That is, we established from which delay onwards participants noticed that the tone came after the keystroke instead of immediately. On each trial, the participant pressed the “zero” key on the keypad at a time of her/his choosing and heard a tone. This tone was played either at the same time as the keystroke or after a delay. The participants responded verbally whether or not they had the feeling that the tone was delayed. Their responses were entered in the computer by the experimenter. Crucially, participants were instructed to leave their finger on the key (instead of lifting it prior to the keystroke) so as to reduce the tactile timing information. Furthermore, they were required to keep their eyes closed during the keystroke.</p>
<p>We used the Maximum Likelihood Procedure (MLP) algorithm
<xref rid="pone.0087176-Green1" ref-type="bibr">[42]</xref>
<xref rid="pone.0087176-Leek1" ref-type="bibr">[45]</xref>
to establish the threshold for the detection of the asynchrony between movement (keypress) and the tone. The algorithm is designed to adaptively select the stimulus level (tone delay) on each trial so as to converge to the participants' threshold. For each block, the algorithm outputs an estimate for the participant's threshold.</p>
<p>Briefly, the MLP algorithm works as follows. Participants' probability of responding “delayed” to a particular stimulus (i.e. keystroke-sound delay) is modelled by sigmoid psychometric curves that take the stimulus level (amount of delay in msec) as a variable. The equation for the psychometric curves was p(response delayed) = a+(1−a)*(1/(1+exp(−k*(x−m)))), where a is the false alarm rate (see below), k is a parameter controlling the slope, m is the midpoint of the psychometric curve (in msec) and x is the amount of delay (in msec). A set of candidate psychometric curves is maintained in parallel and, for each curve, the likelihood of the set of the participant's responses is calculated. The psychometric curve that makes the participant's responses maximally likely is used to determine the stimulus level (the delay between the keystroke and the sound) on the next trial. We used 600 candidate psychometric curves with midpoints linearly spread between 0 and 600 ms delay and combined these with five false alarm rates (0%, 10%, 20%, 30%, 40%). Hence, a total of 3000 candidate psychometric curves were used.</p>
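The likelihood step of this procedure can be sketched as follows. This is a simplified illustration: the slope parameter k is fixed to an arbitrary value here (the paper does not report it), and the authors' full python implementation is the authoritative reference.

```python
import math

def p_delayed(x, m, a, k=0.05):
    """Psychometric curve from the text: probability of a 'delayed'
    response at delay x (ms), with midpoint m (ms), false-alarm rate a,
    and slope parameter k (illustrative value, not from the paper)."""
    return a + (1 - a) / (1 + math.exp(-k * (x - m)))

def max_likelihood_curve(trials, midpoints, fa_rates):
    """Return the (midpoint, false-alarm rate) pair whose curve makes the
    observed responses maximally likely.  `trials` is a list of
    (delay_ms, responded_delayed) tuples."""
    def log_lik(m, a):
        ll = 0.0
        for x, delayed in trials:
            p = p_delayed(x, m, a)
            ll += math.log(p if delayed else 1 - p)
        return ll
    return max(((m, a) for m in midpoints for a in fa_rates),
               key=lambda curve: log_lik(*curve))
```

In the experiment, the candidate grid would be 600 midpoints (0–600 ms) crossed with the five false alarm rates; the midpoint of the winning curve then determines the next stimulus level.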
<p>Participants first performed 4 trials (2 with no delay and 2 with a delay of 600 ms) to make clear the difference between the sound coming immediately and being delayed. Participants received accuracy feedback about their answers during these practice trials. Next, they performed a block of 10 trials, starting at a 600 ms keystroke-sound delay and then using the MLP to determine the stimulus levels of the following trials. If the procedure was clear, we continued with 3 experimental blocks of 36 trials each. Each experimental block contained 6 catch trials, in which the delay was always 0 msec (regardless of the delay suggested by the MLP algorithm). The function of catch trials is to prevent participants from always responding “delayed” (which would cause the MLP algorithm to converge to a zero threshold). Catch trials were inserted randomly with the following constraints: the first 12 trials contained 2 catch trials and the next 24 trials contained 4 catch trials.</p>
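One way to draw catch-trial positions under these constraints is sketched below (the original script may implement this differently):

```python
import random

def catch_trial_positions(rng=random):
    """Place 6 catch trials in a 36-trial block: 2 at random positions
    among the first 12 trials and 4 among the following 24."""
    first = rng.sample(range(12), 2)
    rest = rng.sample(range(12, 36), 4)
    return sorted(first + rest)
```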
<p>The maximum likelihood procedure was implemented in python. We made our source code freely available online on
<ext-link ext-link-type="uri" xlink:href="https://github.com/florisvanvugt/PythonMLP">https://github.com/florisvanvugt/PythonMLP</ext-link>
. The source code for the delay detection paradigm is furthermore available upon request (to the corresponding author).</p>
</sec>
<sec id="s2d2">
<title>Anisochrony detection</title>
<p>Participants were seated comfortably and on each trial heard a sequence of five tones (see materials). Participants' task was to respond whether the five-tone sequence was regular or not by pressing one of two response keys on the laptop keyboard. Stimuli (see materials) were presented through headphones set to a comfortable sound level that was kept constant across all participants. The participant's threshold was established adaptively using the MLP procedure. The basic procedure was the same as for the delay detection task, but here the set of candidate psychometric curves was as follows. We defined 200 logistic psychometric curves whose midpoints were linearly spread over the 0 to 200 ms delay range (0% to 57% of the tone IOI) and crossed these with the five false alarm rates (0%, 10%, 20%, 30%, 40%). Again, each experimental block consisted of, first, 12 trials containing 2 catch trials, and then 24 trials containing 4 catch trials.</p>
<p>Instructions were presented orally and then written on the screen. Next, the interface presented four example stimuli (two regular, two irregular). For these trials, participants received accuracy feedback. The first trial of the next block of 10 trials was set to a 200 msec tone delay, and the adaptive procedure (MLP) was then used to determine the stimulus level on the following trials. During this second training block, no accuracy feedback was provided. Finally, if the procedure was understood by the participants, three experimental blocks were administered. In between blocks, participants took a brief break of several minutes.</p>
</sec>
<sec id="s2d3">
<title>Synchronisation-Continuation Tapping</title>
<p>In each trial, participants tapped with their index finger on a flat surface along with the synchronisation stimulus after the four finger snap sounds (see materials). When the woodblock sounds stopped, participants were instructed to continue tapping at the same speed and regularity until the high-pitched sound signalled the end of the trial.</p>
</sec>
<sec id="s2d4">
<title>Data analyses</title>
<p>The threshold tasks were analysed as follows. First, we discarded blocks that contained more than 30% incorrect catch trial responses (in which the delay or deviation was 0 msec). Secondly, we discarded blocks in which the threshold estimate had not properly converged towards the end of the block. This was tested by fitting a regression line to the last 10 trials in the block, and discarding those blocks in which the slope of this line exceeded 2 msec/trial (for the delay detection task) or 1.18 msec/trial (for the anisochrony task). These slope cut-off points were chosen so as to, firstly, match visual inspection of blocks that had not properly converged, and secondly, to be roughly the same proportion of the average final threshold in the anisochrony and delay detection tasks. Thirdly, we computed the average threshold estimate of the remaining blocks for each participant.</p>
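These two screening criteria can be expressed compactly as below (a sketch; we treat the slope cut-off as a bound on the slope's magnitude, which the text does not state explicitly):

```python
def fit_slope(ys):
    """Least-squares slope of ys against trial index (0, 1, 2, ...)."""
    n = len(ys)
    mean_x = (n - 1) / 2
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(ys))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

def keep_block(catch_correct, threshold_trace, max_slope):
    """Keep a block only if at most 30% of its catch trials were answered
    incorrectly and the threshold trace has converged, i.e. the slope of a
    line fit to the last 10 trials stays within max_slope (ms/trial)."""
    error_rate = 1 - sum(catch_correct) / len(catch_correct)
    return (error_rate <= 0.30
            and abs(fit_slope(threshold_trace[-10:])) <= max_slope)
```

For the delay detection task `max_slope` would be 2.0, for the anisochrony task 1.18.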
<p>Synchronisation tapping performance was analysed using linear and circular statistics. In the linear analysis, we calculated the time between each tap and its corresponding metronome click (in msec). For each block, we averaged these to yield the mean relative asynchrony (in msec) and calculated their standard deviation (SD) to yield the SD of the relative asynchrony (in msec). The mean relative asynchrony is a measure of how close participants tapped to the beat, and the SD of the relative asynchrony is a measure of tapping precision (time-lock). In the circular analysis
<xref rid="pone.0087176-Fisher1" ref-type="bibr">[46]</xref>
, the timing of each tap was converted into a phase (between 0 and 2π) relative to the metronome onset. Based on these phases, we calculated the synchronisation vector: the average of the unit vectors whose phase angles correspond to the taps. The length of this vector (between 0 and 1) is a measure of the time-lock between the taps and the sounds. We used Fisher's r-to-z transformation and fed the obtained z-scores into our parametric analysis.</p>
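The circular measures can be sketched as follows (function names are ours; note that the r-to-z transform diverges for perfect time-lock, r = 1):

```python
import cmath
import math

def sync_vector_length(asynchronies_ms, ioi_ms=600):
    """Mean resultant length of the tap phases: each asynchrony is mapped
    to a phase in [0, 2*pi) within the metronome cycle and represented as
    a unit vector; the length of the average vector (0..1) indexes how
    tightly taps are time-locked to the sounds."""
    phases = [2 * math.pi * (a % ioi_ms) / ioi_ms for a in asynchronies_ms]
    mean_vector = sum(cmath.exp(1j * ph) for ph in phases) / len(phases)
    return abs(mean_vector)

def fisher_r_to_z(r):
    """Fisher's r-to-z transformation applied before the parametric tests."""
    return math.atanh(r)
```

Taps exactly on the beat give a vector length of 1, while taps scattered uniformly over the cycle give a length near 0.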
<p>For the continuation phase (when the metronome had stopped), we calculated the intervals between taps (inter-tap-interval, ITI, in ms) and their standard deviation (SD ITI, in ms). We then de-trended the continuation taps by fitting a regression line to the ITIs over time, reporting the slope of this line and taking the residual variability around this line. In this way, we compensated for the fact that participants tend to speed up or slow down
<xref rid="pone.0087176-Drewing1" ref-type="bibr">[47]</xref>
– <xref rid="pone.0087176-Keele1" ref-type="bibr">[49]</xref>
. The slope of this line fit indicated the tempo drift.</p>
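The de-trending step amounts to the following (a sketch; the function name is ours):

```python
import math

def detrend_itis(itis_ms):
    """Fit a regression line to the inter-tap intervals over tap index.
    Returns the slope (tempo drift, ms per tap) and the SD of the
    residuals (drift-corrected tapping variability, ms)."""
    n = len(itis_ms)
    mean_x = (n - 1) / 2
    mean_y = sum(itis_ms) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in enumerate(itis_ms))
             / sum((x - mean_x) ** 2 for x in range(n)))
    intercept = mean_y - slope * mean_x
    residuals = [y - (intercept + slope * x) for x, y in enumerate(itis_ms)]
    return slope, math.sqrt(sum(r * r for r in residuals) / (n - 1))
```

A participant who slows down by 2 ms per tap but is otherwise perfectly regular would show a slope of 2 and a residual SD near 0.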
<p>In order to compare performance of the three groups, we performed between-participants ANOVAs. We tested for homogeneity of variance using Levene's Test, and report where it was significant. We report generalised effect sizes η
<sub>G</sub>
<sup>2</sup>
<xref rid="pone.0087176-Bakeman1" ref-type="bibr">[50]</xref>
. Follow-up comparisons were calculated using Tukey's HSD method.</p>
<p>The data collected within the framework of this study are made available freely online (
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.6084/m9.figshare.878062">http://dx.doi.org/10.6084/m9.figshare.878062</ext-link>
).</p>
</sec>
</sec>
</sec>
<sec id="s3">
<title>Results</title>
<sec id="s3a">
<title>Delay Detection</title>
<p>We discarded 17.0% of all blocks because of catch trial errors, and a further 2.3% because of lack of threshold convergence. Four participants (2 pianists, 2 brass players) had no remaining blocks and were eliminated from further analyses. For the other participants, we calculated the average of the thresholds on the basis of the remaining 2.6 (SD 0.7) blocks.</p>
<p>The distribution of thresholds of all participants in all groups combined was significantly non-normal [Shapiro-Wilk normality test W = .86, p = .00003], and therefore we continued statistical analyses with log-transformed thresholds. These did not violate normality assumptions [Shapiro-Wilk W = .98, p = .71]. The main effect of group (pianist, brass, nonmusician) on delay detection threshold was significant [F(2,49) = 6.40, p = .003, η
<sub>G</sub>
<sup>2</sup>
 = .21]. Post-hoc Tukey HSD contrasts indicated that the non-musicians' thresholds were higher than those of the pianists [p = .01] and those of the brass players [p = .006]. The brass players' and pianists' thresholds were not significantly different [p = .93] (
<xref ref-type="fig" rid="pone-0087176-g001">Figure 1A</xref>
). Among the brass players, we found that those who played piano as their second instrument (N = 11) had a lower delay detection threshold (M = 83.0, SD = 42.5) than those who did not (N = 5) (M = 116.2, SD = 70.0). However, this difference was not significant [t(7.3) = 1.09, p = .31]. Furthermore, the brass players who did not have piano as their second instrument (N = 5) did not show a higher delay detection threshold than the pianists [t(7.8) = −.50, p = .63].</p>
<fig id="pone-0087176-g001" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0087176.g001</object-id>
<label>Figure 1</label>
<caption>
<title>Thresholds for the keystroke-sound delay detection (A) and anisochrony (B) tasks.</title>
<p>The figures indicate the average thresholds for each of the groups (error bars indicate the standard error of the mean). *p<.05, **p<.01, ***p<.001.</p>
</caption>
<graphic xlink:href="pone.0087176.g001"></graphic>
</fig>
</sec>
<sec id="s3b">
<title>Anisochrony</title>
<p>We discarded 11.1% of all blocks because of catch trial errors, but no further blocks were discarded because all had properly converged. Two participants (1 brass, 1 pianist) had no blocks remaining (based on the first criterion) and were eliminated from further analyses. For the other participants, we averaged the remaining 2.7 (SD = 0.6) blocks into a single threshold value per participant.</p>
<p>The distribution of thresholds was significantly non-normal [Shapiro-Wilk normality test W = .92, p = .0009] and therefore we continued statistical analyses with log-transformed thresholds. These did not violate normality assumptions [Shapiro-Wilk W = .97, p = .20]. The main effect of group on anisochrony threshold was significant [F(2,51) = 21.60, p<.0001, η
<sub>G</sub>
<sup>2</sup>
 = .46]. Tukey HSD contrasts indicated that nonmusicians' thresholds were higher than those of the pianists [p<.001] and those of the brass players [p<.0001]. The brass players' and pianists' thresholds did not differ significantly [p = .52] (
<xref ref-type="fig" rid="pone-0087176-g001">Figure 1B</xref>
). In the pianist group, there was one outlier who was further than 3 SD below the mean for that group, but removing this participant did not affect any of the results.</p>
</sec>
<sec id="s3c">
<title>Synchronisation-Continuation Tapping</title>
<p>We report basic measures of synchronisation and tapping variability in
<xref ref-type="table" rid="pone-0087176-t002">Table 2</xref>
. Tukey contrasts revealed that brass players and pianists did not differ in any of the measures (all p>.73), whereas contrasts between the non-musicians on the one hand and the pianist or brass groups on the other yielded significant or marginally significant differences (all p<.08) (
<xref ref-type="table" rid="pone-0087176-t002">Table 2</xref>
).</p>
<table-wrap id="pone-0087176-t002" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0087176.t002</object-id>
<label>Table 2</label>
<caption>
<title>Synchronisation and continuation tapping results for the three groups.</title>
</caption>
<alternatives>
<graphic id="pone-0087176-t002-2" xlink:href="pone.0087176.t002"></graphic>
<table frame="hsides" rules="groups">
<colgroup span="1">
<col align="left" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
</colgroup>
<thead>
<tr>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Pianists</td>
<td align="left" rowspan="1" colspan="1">Brass players</td>
<td align="left" rowspan="1" colspan="1">Nonmusicians</td>
<td align="left" rowspan="1" colspan="1">Between-groups comparison</td>
</tr>
</thead>
<tbody>
<tr>
<td colspan="5" align="left" rowspan="1">
<italic>Synchronisation phase</italic>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Mean relative asynchrony (msec)</td>
<td align="left" rowspan="1" colspan="1">7.5 (21.2)</td>
<td align="left" rowspan="1" colspan="1">5.7 (28.3)</td>
<td align="left" rowspan="1" colspan="1">−22.5 (52.3)</td>
<td align="left" rowspan="1" colspan="1">F(2,44) = 3.35, p = .04*</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">SD relative asynchrony (msec)</td>
<td align="left" rowspan="1" colspan="1">19.9 (4.9)</td>
<td align="left" rowspan="1" colspan="1">19.3 (3.2)</td>
<td align="left" rowspan="1" colspan="1">37.4 (20.3)</td>
<td align="left" rowspan="1" colspan="1">F(2,44) = 11.7, p<.0001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Synchronisation vector length (r-bar, z-transformed)</td>
<td align="left" rowspan="1" colspan="1">2.3 (0.2)</td>
<td align="left" rowspan="1" colspan="1">2.3 (0.2)</td>
<td align="left" rowspan="1" colspan="1">1.8 (0.4)</td>
<td align="left" rowspan="1" colspan="1">F(2,44) = 20.85, p<.00001</td>
</tr>
<tr>
<td colspan="5" align="left" rowspan="1">
<italic>Continuation phase</italic>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Continuation ITI (msec) (without detrending)</td>
<td align="left" rowspan="1" colspan="1">604 (10)</td>
<td align="left" rowspan="1" colspan="1">605 (11)</td>
<td align="left" rowspan="1" colspan="1">596 (20)</td>
<td align="left" rowspan="1" colspan="1">F(2,44) = 1.79, p = .18
<xref ref-type="table-fn" rid="nt103">+</xref>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Continuation SD ITI (msec) (without detrending)</td>
<td align="left" rowspan="1" colspan="1">17.9 (2.9)</td>
<td align="left" rowspan="1" colspan="1">19.6 (2.7)</td>
<td align="left" rowspan="1" colspan="1">31.4 (9.0)</td>
<td align="left" rowspan="1" colspan="1">F(2,44) = 27.04, p<.000001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Continuation drift (msec/sec)</td>
<td align="left" rowspan="1" colspan="1">−0.3 (0.6)</td>
<td align="left" rowspan="1" colspan="1">−0.4 (0.8)</td>
<td align="left" rowspan="1" colspan="1">−0.9 (1.0)</td>
<td align="left" rowspan="1" colspan="1">F(2,44) = 2.65, p = .08</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Continuation residual variability after detrending (CV %)</td>
<td align="left" rowspan="1" colspan="1">5.5 (0.9)</td>
<td align="left" rowspan="1" colspan="1">6.0 (0.8)</td>
<td align="left" rowspan="1" colspan="1">9.7 (2.9)</td>
<td align="left" rowspan="1" colspan="1">F(2,44) = 25.7, p<.00001, η<sup>2</sup> = .54</td>
</tr>
</tbody>
</table>
</alternatives>
<table-wrap-foot>
<fn id="nt102">
<label></label>
<p>Values are reported as mean (SD) unless otherwise specified.</p>
</fn>
<fn id="nt103">
<label>+</label>
<p>For the continuation ITI, Levene's test for homogeneity is violated.</p>
</fn>
</table-wrap-foot>
</table-wrap>
</sec>
<sec id="s3d">
<title>Comparisons between the tests</title>
<p>Participants' performances on the various tests reported here were not independent. Combining the thresholds from the three groups, the delay detection threshold correlated positively with the anisochrony threshold [Pearson r(49) = .60, p<.0001, R
<sub>adj</sub>
<sup>2</sup>
 = .35]. The delay detection threshold correlated negatively with the synchronisation vector length [Pearson r(49) = −.53, p<.0001, R
<sub>adj</sub>
<sup>2</sup>
 = .27].</p>
<p>To test whether these correlations differed statistically between the groups, and whether performance on the anisochrony and synchronisation tasks combined might explain more of the variance in delay detection than either task alone, we performed the following analysis. Participants who had at least one valid anisochrony block and at least one valid delay detection block remaining (after discarding) entered into this analysis: 17 pianists, 16 brass players and 18 non-musicians. We ran an ANCOVA model with log-transformed delay detection threshold as the dependent variable, group (nonmusician, brass player or pianist) as a between-participants categorical factor, and log-transformed anisochrony threshold and sensorimotor synchronisation accuracy (vector length, r-bar) as covariates.</p>
<p>The interaction between anisochrony threshold and group was not significant [F(2,48) = 1.64, p = .21], indicating that the linear relationship between the anisochrony and delay detection thresholds did not differ between groups. The interaction between synchronisation accuracy and group was not significant either [F(2,48) = .91, p = .41], meaning that the linear relationship between synchronisation accuracy and delay detection also did not differ between groups. The main effect of anisochrony threshold was significant [F(1,48) = 5.56, p = .02], as was the main effect of synchronisation accuracy [F(1,48) = 8.73, p = .004]. There was no main effect of group [F(2,48) = 1.06, p = .35]. These results were essentially the same when repeated without the participant with an outlier anisochrony threshold (
<xref ref-type="fig" rid="pone-0087176-g002">Figure 2</xref>
).</p>
<fig id="pone-0087176-g002" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0087176.g002</object-id>
<label>Figure 2</label>
<caption>
<title>Correlations between keystroke-sound delay detection and anisochrony (A) and sensorimotor synchronisation accuracy (B).</title>
<p>The dot colour indicates the group: blue for non-musicians, red for pianists and green for brass players.</p>
</caption>
<graphic xlink:href="pone.0087176.g002"></graphic>
</fig>
<p>In sum, anisochrony threshold and synchronisation accuracy both significantly explained variance in delay detection thresholds (
<xref ref-type="fig" rid="pone-0087176-g002">Figure 2</xref>
). Taken together, they explained more variance than either factor alone. With these two predictors in the model, the group factor (pianist, brass player, nonmusician) did not explain additional variance, indicating that the musicianship effect on delay detection thresholds was accounted for by anisochrony and synchronisation task performance.</p>
</sec>
</sec>
<sec id="s4">
<title>Discussion</title>
<p>The human brain predicts sensory effects of its motor actions
<xref rid="pone.0087176-Blakemore1" ref-type="bibr">[1]</xref>
,
<xref rid="pone.0087176-Friston1" ref-type="bibr">[51]</xref>
. Not only does the brain predict
<italic>what</italic>
effect will follow, but also
<italic>when</italic>
it is expected to occur
<xref rid="pone.0087176-Aliu1" ref-type="bibr">[5]</xref>
. The present paper presents a simple test to measure the precision of this temporal prediction window. We applied this test to a non-musician population and to two groups of musicians, brass players and pianists, in order to investigate the effect of training. We furthermore asked how sensitivity to auditory-motor delays relates to other auditory and auditory-motor capacities.</p>
<p>Our findings suggest that the brain has a relatively large window of integration (102±65 ms for musicians, and 180±104 ms for nonmusicians) within which an action and its resulting effect are judged as simultaneous. These delay detection thresholds are almost an order of magnitude larger than thresholds for judging two auditory events as asynchronous, which lie between 2 and 60 msec
<xref rid="pone.0087176-Exner1" ref-type="bibr">[15]</xref>
<xref rid="pone.0087176-Szymaszek1" ref-type="bibr">[18]</xref>
. However, the present findings are in line with cross-modal sensory asynchrony judgements: simultaneity thresholds for visual and auditory events are usually around 150 ms
<xref rid="pone.0087176-Stevenson1" ref-type="bibr">[52]</xref>
.</p>
<p>Participants' capacity to judge the simultaneity of movement and sound can be explained as a combination of auditory temporal prediction precision (anisochrony) and sensorimotor synchronisation accuracy. That is, the delay detection task appears to tap into basic cognitive capacities of auditory processing and auditory-motor coupling. Both of these capacities varied with musicianship, and musicianship itself did not explain additional variance in the thresholds of audio-motor synchrony judgements.</p>
<p>These results suggest, first of all, that sensitivity to auditory-motor delays can be trained. Musicians were more precise in temporally predicting the auditory effect of their movement, as evidenced by their lower threshold in the delay detection task. This is in line with evidence that musical training improves performance in a variety of tasks
<xref rid="pone.0087176-Ehrl1" ref-type="bibr">[37]</xref>
<xref rid="pone.0087176-Repp3" ref-type="bibr">[40]</xref>
,
<xref rid="pone.0087176-Kraus1" ref-type="bibr">[53]</xref>
and also induces functional and structural brain changes
<xref rid="pone.0087176-Gaser1" ref-type="bibr">[54]</xref>
,
<xref rid="pone.0087176-Herholz1" ref-type="bibr">[55]</xref>
. In addition, the finding is in line with previous studies showing that temporal order judgements (TOJ) improve with training
<xref rid="pone.0087176-Alais1" ref-type="bibr">[56]</xref>
,
<xref rid="pone.0087176-Powers1" ref-type="bibr">[57]</xref>
. However, a limitation of the present study is that we cannot conclude whether musicianship caused the lower delay detection thresholds, or vice versa. It is conceivable that people with lower delay detection thresholds were more likely to enrol in musical training than those with higher thresholds. To answer this question conclusively, a future longitudinal study could follow a sample of participants randomly assigned to musical training or to a control training. If such a study found a reduction in delay detection threshold in the musical-training group but not in the control group, this would demonstrate that delay detection thresholds are lowered as a result of musical training.</p>
<p>Secondly, musicianship appears to improve delay detection thresholds indirectly. That is, musicianship did not significantly influence delay detection sensitivity when performance on purely auditory (anisochrony) or auditory-motor tasks (sensorimotor synchronisation) was taken into account. This means that auditory-motor delay detection is not a capacity that is specifically improved by music training. If this were so, we would have expected to find differences in correlations between the tests (delay detection, anisochrony and sensorimotor synchronisation) between our groups. This was not the case. Instead, the results suggest that musical training improves sensorimotor synchronisation capacities as well as auditory temporal precision, both of which then lead to an improvement in delay detection threshold. A potential alternative explanation for our finding is that musicianship affects a latent variable (or latent variables), not measured here, and that this variable improves delay detection sensitivity, auditory temporal precision and sensorimotor synchronisation.</p>
<p>Furthermore, the instrument that musicians played had no influence on delay detection sensitivity or on any of our other tasks. This suggests that neither the specifics of how an instrument responds to the musician's finger movements nor the acoustic features of the instrumental sound influence the capacity to detect delays between movement and sound.</p>
<p>Humans' conscious sensitivity to delays between their articulator movements and the produced speech sound is typically around 60–70 msec
<xref rid="pone.0087176-Yamamoto2" ref-type="bibr">[14]</xref>
, but implicit adjustments of speech rate to delayed feedback are reported from 50 msec delay onwards
<xref rid="pone.0087176-Swink1" ref-type="bibr">[32]</xref>
. These delays are below the thresholds observed here, but close to those we found for musicians. Humans accumulate many hours of speech practice (many more than even professional musicians could accumulate on their instrument), and one would therefore expect lower delay detection thresholds for vocal actions. This squares with the idea that training an action, be it speaking or playing an instrument, improves the temporal prediction of its sensory consequences. However, the particular instrument that the musicians trained to play (piano or brass) did not influence sensitivity, suggesting that delay sensitivity may be specific to the effector: the articulators in the case of speech, and the hand in the case of piano and brass playing (and perhaps also the mouth in the case of brass playing). Note, however, that comparisons between music and speech are limited by the fact that there is no control group with negligible speech experience.</p>
<p>The present study has some limitations. It might be argued that the experimental setup involves an inherent latency between the keystroke and the sound. Possibly, musicians who were exquisitely sensitive to delays considered even the shortest possible latency in our setup as asynchronous. If this were the case, however, we would have expected such participants to exhibit thresholds close to zero, which was not observed. Furthermore, as argued above, the thresholds we found for musicians were comparable to those found in speech.</p>
<p>A limitation of our comparison between pianists and brass players is that the difference between these groups might have been reduced because many brass players had some piano experience. This is not a bias in our sample but reflects the reality of musical education, in which musicians are encouraged to practise a secondary instrument, and piano is a popular choice. Crucially, a post-hoc comparison among brass players revealed no differences between those with and without piano experience. Furthermore, the brass players without piano experience did not differ from the pianists.</p>
<p>Future studies could use the delay detection task to tap into temporal prediction capacities to investigate auditory-motor processing. The paradigm could also provide a precise quantification of temporal binding, which is the phenomenon that a person's self-generated sensory stimuli appear closer in time to the action that caused them than externally-generated sensory stimuli
<xref rid="pone.0087176-Haggard1" ref-type="bibr">[58]</xref>
.</p>
</sec>
<sec id="s5">
<title>Conclusions</title>
<p>The present findings suggest that the brain has a relatively large window of integration within which an action and its resulting effect are judged as simultaneous. Furthermore, musical expertise may narrow this window, potentially through more refined general temporal prediction capacities and improved auditory-motor synchronisation (as suggested by the data of the anisochrony and sensorimotor synchronisation tasks, respectively). The proposed paradigm provides a simple test to estimate the precision of this prediction. Musicians' temporal predictions were more precise than those of nonmusicians, but there were no reliable differences between pianists and brass players. The thresholds correlated with a purely auditory threshold measure requiring the detection of a temporal irregularity in an otherwise isochronous sound sequence, and also with sensorimotor synchronisation performance. This suggests that musical training improves a set of auditory and auditory-motor capacities, which are then used together to generate temporal predictions about the sensory consequences of our actions. The particular instrument played, as well as practice time, had only a minor influence. This novel paradigm provides a simple estimate of the strength of auditory-motor action-effect coupling that can readily be incorporated in a variety of studies investigating both healthy and patient populations.</p>
</sec>
</body>
<back>
<ack>
<p>We are greatly indebted to research assistant Phuong Mai Tran for implementing the experiment.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="pone.0087176-Blakemore1">
<label>1</label>
<mixed-citation publication-type="journal">
<name>
<surname>Blakemore</surname>
<given-names>SJ</given-names>
</name>
,
<name>
<surname>Rees</surname>
<given-names>G</given-names>
</name>
,
<name>
<surname>Frith</surname>
<given-names>CD</given-names>
</name>
(
<year>1998</year>
)
<article-title>How do we predict the consequences of our actions? A functional imaging study</article-title>
.
<source>Neuropsychologia</source>
<volume>36</volume>
:
<fpage>521</fpage>
<lpage>529</lpage>
<pub-id pub-id-type="pmid">9705062</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0087176-Blakemore2">
<label>2</label>
<mixed-citation publication-type="journal">
<name>
<surname>Blakemore</surname>
<given-names>SJ</given-names>
</name>
,
<name>
<surname>Wolpert</surname>
<given-names>DM</given-names>
</name>
,
<name>
<surname>Frith</surname>
<given-names>CD</given-names>
</name>
(
<year>1998</year>
)
<article-title>Central cancellation of self-produced tickle sensation</article-title>
.
<source>Nat Neurosci</source>
<volume>1</volume>
:
<fpage>635</fpage>
<lpage>640</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1038/2870">10.1038/2870</ext-link>
</comment>
<pub-id pub-id-type="pmid">10196573</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0087176-Eliades1">
<label>3</label>
<mixed-citation publication-type="journal">
<name>
<surname>Eliades</surname>
<given-names>SJ</given-names>
</name>
,
<name>
<surname>Wang</surname>
<given-names>X</given-names>
</name>
(
<year>2003</year>
)
<article-title>Sensory-Motor Interaction in the Primate Auditory Cortex During Self-Initiated Vocalizations</article-title>
.
<source>J Neurophysiol</source>
<volume>89</volume>
:
<fpage>2194</fpage>
<lpage>2207</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1152/jn.00627.2002">10.1152/jn.00627.2002</ext-link>
</comment>
<pub-id pub-id-type="pmid">12612021</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0087176-Martikainen1">
<label>4</label>
<mixed-citation publication-type="journal">
<name>
<surname>Martikainen</surname>
<given-names>MH</given-names>
</name>
,
<name>
<surname>Kaneko</surname>
<given-names>K</given-names>
</name>
,
<name>
<surname>Hari</surname>
<given-names>R</given-names>
</name>
(
<year>2005</year>
)
<article-title>Suppressed responses to self-triggered sounds in the human auditory cortex</article-title>
.
<source>Cereb Cortex</source>
<volume>15</volume>
:
<fpage>299</fpage>
<lpage>302</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1093/cercor/bhh131">10.1093/cercor/bhh131</ext-link>
</comment>
<pub-id pub-id-type="pmid">15238430</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0087176-Aliu1">
<label>5</label>
<mixed-citation publication-type="journal">
<name>
<surname>Aliu</surname>
<given-names>SO</given-names>
</name>
,
<name>
<surname>Houde</surname>
<given-names>JF</given-names>
</name>
,
<name>
<surname>Nagarajan</surname>
<given-names>SS</given-names>
</name>
(
<year>2009</year>
)
<article-title>Motor-induced suppression of the auditory cortex</article-title>
.
<source>J Cogn Neurosci</source>
<volume>21</volume>
:
<fpage>791</fpage>
<lpage>802</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1162/jocn.2009.21055">10.1162/jocn.2009.21055</ext-link>
</comment>
<pub-id pub-id-type="pmid">18593265</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0087176-Fujisaki1">
<label>6</label>
<mixed-citation publication-type="journal">
<name>
<surname>Fujisaki</surname>
<given-names>W</given-names>
</name>
,
<name>
<surname>Shimojo</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Kashino</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Nishida</surname>
<given-names>S</given-names>
</name>
(
<year>2004</year>
)
<article-title>Recalibration of audiovisual simultaneity</article-title>
.
<source>Nat Neurosci</source>
<volume>7</volume>
:
<fpage>773</fpage>
<lpage>778</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1038/nn1268">10.1038/nn1268</ext-link>
</comment>
<pub-id pub-id-type="pmid">15195098</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0087176-Kuling1">
<label>7</label>
<mixed-citation publication-type="journal">
<name>
<surname>Kuling</surname>
<given-names>IA</given-names>
</name>
,
<name>
<surname>van Eijk</surname>
<given-names>RLJ</given-names>
</name>
,
<name>
<surname>Juola</surname>
<given-names>JF</given-names>
</name>
,
<name>
<surname>Kohlrausch</surname>
<given-names>A</given-names>
</name>
(
<year>2012</year>
)
<article-title>Effects of stimulus duration on audio-visual synchrony perception</article-title>
.
<source>Exp Brain Res</source>
<volume>221</volume>
:
<fpage>403</fpage>
<lpage>412</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1007/s00221-012-3182-9">10.1007/s00221-012-3182-9</ext-link>
</comment>
<pub-id pub-id-type="pmid">22821079</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0087176-Tanaka1">
<label>8</label>
<mixed-citation publication-type="journal">
<name>
<surname>Tanaka</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Asakawa</surname>
<given-names>K</given-names>
</name>
,
<name>
<surname>Imai</surname>
<given-names>H</given-names>
</name>
(
<year>2011</year>
)
<article-title>The change in perceptual synchrony between auditory and visual speech after exposure to asynchronous speech</article-title>
.
<source>Neuroreport</source>
<volume>22</volume>
:
<fpage>684</fpage>
<lpage>688</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1097/WNR.0b013e32834a2724">10.1097/WNR.0b013e32834a2724</ext-link>
</comment>
<pub-id pub-id-type="pmid">21817926</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0087176-Yamamoto1">
<label>9</label>
<mixed-citation publication-type="journal">
<name>
<surname>Yamamoto</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Miyazaki</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Iwano</surname>
<given-names>T</given-names>
</name>
,
<name>
<surname>Kitazawa</surname>
<given-names>S</given-names>
</name>
(
<year>2012</year>
)
<article-title>Bayesian calibration of simultaneity in audiovisual temporal order judgments</article-title>
.
<source>PLoS ONE</source>
<volume>7</volume>
:
<fpage>e40379</fpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1371/journal.pone.0040379">10.1371/journal.pone.0040379</ext-link>
</comment>
<pub-id pub-id-type="pmid">22792297</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0087176-Freeman1">
<label>10</label>
<mixed-citation publication-type="journal">
<name>
<surname>Freeman</surname>
<given-names>ED</given-names>
</name>
,
<name>
<surname>Ipser</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Palmbaha</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Paunoiu</surname>
<given-names>D</given-names>
</name>
,
<name>
<surname>Brown</surname>
<given-names>P</given-names>
</name>
,
<etal>et al</etal>
(
<year>2013</year>
)
<article-title>Sight and sound out of synch: Fragmentation and renormalisation of audiovisual integration and subjective timing</article-title>
.
<source>Cortex</source>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.cortex.2013.03.006">10.1016/j.cortex.2013.03.006</ext-link>
</comment>
</mixed-citation>
</ref>
<ref id="pone.0087176-Keetels1">
<label>11</label>
<mixed-citation publication-type="journal">
<name>
<surname>Keetels</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Vroomen</surname>
<given-names>J</given-names>
</name>
(
<year>2012</year>
)
<article-title>Exposure to delayed visual feedback of the hand changes motor-sensory synchrony perception</article-title>
.
<source>Exp Brain Res</source>
<volume>219</volume>
:
<fpage>431</fpage>
<lpage>440</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1007/s00221-012-3081-0">10.1007/s00221-012-3081-0</ext-link>
</comment>
<pub-id pub-id-type="pmid">22623088</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0087176-Rohde1">
<label>12</label>
<mixed-citation publication-type="journal">
<name>
<surname>Rohde</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Ernst</surname>
<given-names>MO</given-names>
</name>
(
<year>2012</year>
)
<article-title>To lead and to lag - forward and backward recalibration of perceived visuo-motor simultaneity</article-title>
.
<source>Front Psychol</source>
<volume>3</volume>
:
<fpage>599</fpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.3389/fpsyg.2012.00599">10.3389/fpsyg.2012.00599</ext-link>
</comment>
<pub-id pub-id-type="pmid">23346063</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0087176-Sugano1">
<label>13</label>
<mixed-citation publication-type="journal">
<name>
<surname>Sugano</surname>
<given-names>Y</given-names>
</name>
,
<name>
<surname>Keetels</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Vroomen</surname>
<given-names>J</given-names>
</name>
(
<year>2010</year>
)
<article-title>Adaptation to motor-visual and motor-auditory temporal lags transfer across modalities</article-title>
.
<source>Exp Brain Res</source>
<volume>201</volume>
:
<fpage>393</fpage>
<lpage>399</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1007/s00221-009-2047-3">10.1007/s00221-009-2047-3</ext-link>
</comment>
<pub-id pub-id-type="pmid">19851760</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0087176-Yamamoto2">
<label>14</label>
<mixed-citation publication-type="journal">
<name>
<surname>Yamamoto</surname>
<given-names>K</given-names>
</name>
,
<name>
<surname>Kawabata</surname>
<given-names>H</given-names>
</name>
(
<year>2011</year>
)
<article-title>Temporal recalibration in vocalization induced by adaptation of delayed auditory feedback</article-title>
.
<source>PLoS ONE</source>
<volume>6</volume>
:
<fpage>e29414</fpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1371/journal.pone.0029414">10.1371/journal.pone.0029414</ext-link>
</comment>
<pub-id pub-id-type="pmid">22216275</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0087176-Exner1">
<label>15</label>
<mixed-citation publication-type="journal">
<name>
<surname>Exner</surname>
<given-names>S</given-names>
</name>
(
<year>1875</year>
)
<article-title>Experimentelle Untersuchung der einfachsten psychischen Prozesse. III</article-title>
.
<source>Pflugers Arch Gesammte Physiol Menschen Thiere</source>
<volume>11</volume>
:
<fpage>402</fpage>
<lpage>412</lpage>
</mixed-citation>
</ref>
<ref id="pone.0087176-BenArtzi1">
<label>16</label>
<mixed-citation publication-type="journal">
<name>
<surname>Ben-Artzi</surname>
<given-names>E</given-names>
</name>
,
<name>
<surname>Fostick</surname>
<given-names>L</given-names>
</name>
,
<name>
<surname>Babkoff</surname>
<given-names>H</given-names>
</name>
(
<year>2005</year>
)
<article-title>Deficits in temporal-order judgments in dyslexia: evidence from diotic stimuli differing spectrally and from dichotic stimuli differing only by perceived location</article-title>
.
<source>Neuropsychologia</source>
<volume>43</volume>
:
<fpage>714</fpage>
<lpage>723</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.neuropsychologia.2004.08.004">10.1016/j.neuropsychologia.2004.08.004</ext-link>
</comment>
<pub-id pub-id-type="pmid">15721184</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0087176-Fostick1">
<label>17</label>
<mixed-citation publication-type="journal">
<name>
<surname>Fostick</surname>
<given-names>L</given-names>
</name>
,
<name>
<surname>Babkoff</surname>
<given-names>H</given-names>
</name>
(
<year>2013</year>
)
<article-title>Different Response Patterns Between Auditory Spectral and Spatial Temporal Order Judgment (TOJ)</article-title>
.
<source>Exp Psychol</source>
<volume>1</volume>
:
<fpage>1</fpage>
<lpage>12</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1027/1618-3169/a000216">10.1027/1618-3169/a000216</ext-link>
</comment>
</mixed-citation>
</ref>
<ref id="pone.0087176-Szymaszek1">
<label>18</label>
<mixed-citation publication-type="journal">
<name>
<surname>Szymaszek</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Szelag</surname>
<given-names>E</given-names>
</name>
,
<name>
<surname>Sliwowska</surname>
<given-names>M</given-names>
</name>
(
<year>2006</year>
)
<article-title>Auditory perception of temporal order in humans: The effect of age, gender, listener practice and stimulus presentation mode</article-title>
.
<source>Neurosci Lett</source>
<volume>403</volume>
:
<fpage>190</fpage>
<lpage>194</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.neulet.2006.04.062">10.1016/j.neulet.2006.04.062</ext-link>
</comment>
<pub-id pub-id-type="pmid">16750883</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0087176-Hirsh1">
<label>19</label>
<mixed-citation publication-type="journal">
<name>
<surname>Hirsh</surname>
<given-names>IJ</given-names>
</name>
,
<name>
<surname>Sherrick</surname>
<given-names>CE</given-names>
<suffix>Jr</suffix>
</name>
(
<year>1961</year>
)
<article-title>Perceived order in different sense modalities</article-title>
.
<source>J Exp Psychol</source>
<volume>62</volume>
:
<fpage>423</fpage>
<pub-id pub-id-type="pmid">13907740</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0087176-Zampini1">
<label>20</label>
<mixed-citation publication-type="journal">
<name>
<surname>Zampini</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Shore</surname>
<given-names>DI</given-names>
</name>
,
<name>
<surname>Spence</surname>
<given-names>C</given-names>
</name>
(
<year>2003</year>
)
<article-title>Audiovisual temporal order judgments</article-title>
.
<source>Exp Brain Res</source>
<volume>152</volume>
:
<fpage>198</fpage>
<lpage>210</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1007/s00221-003-1536-z">10.1007/s00221-003-1536-z</ext-link>
</comment>
<pub-id pub-id-type="pmid">12879178</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0087176-Frissen1">
<label>21</label>
<mixed-citation publication-type="journal">
<name>
<surname>Frissen</surname>
<given-names>I</given-names>
</name>
,
<name>
<surname>Ziat</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Campion</surname>
<given-names>G</given-names>
</name>
,
<name>
<surname>Hayward</surname>
<given-names>V</given-names>
</name>
,
<name>
<surname>Guastavino</surname>
<given-names>C</given-names>
</name>
(
<year>2012</year>
)
<article-title>The effects of voluntary movements on auditory-haptic and haptic-haptic temporal order judgments</article-title>
.
<source>Acta Psychol (Amst)</source>
<volume>141</volume>
:
<fpage>140</fpage>
<lpage>148</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.actpsy.2012.07.010">10.1016/j.actpsy.2012.07.010</ext-link>
</comment>
<pub-id pub-id-type="pmid">22964054</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0087176-GarcaPrez1">
<label>22</label>
<mixed-citation publication-type="journal">
<name>
<surname>García-Pérez</surname>
<given-names>MA</given-names>
</name>
,
<name>
<surname>Alcalá-Quintana</surname>
<given-names>R</given-names>
</name>
(
<year>2012</year>
)
<article-title>On the discrepant results in synchrony judgment and temporal-order judgment tasks: a quantitative model</article-title>
.
<source>Psychon Bull Rev</source>
<volume>19</volume>
:
<fpage>820</fpage>
<lpage>846</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.3758/s13423-012-0278-y">10.3758/s13423-012-0278-y</ext-link>
</comment>
<pub-id pub-id-type="pmid">22829342</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0087176-Weiss1">
<label>23</label>
<mixed-citation publication-type="journal">
<name>
<surname>Weiss</surname>
<given-names>K</given-names>
</name>
,
<name>
<surname>Scharlau</surname>
<given-names>I</given-names>
</name>
(
<year>2011</year>
)
<article-title>Simultaneity and temporal order perception: Different sides of the same coin? Evidence from a visual prior-entry study</article-title>
.
<source>Q J Exp Psychol</source>
<volume>64</volume>
:
<fpage>394</fpage>
<lpage>416</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1080/17470218.2010.495783">10.1080/17470218.2010.495783</ext-link>
</comment>
</mixed-citation>
</ref>
<ref id="pone.0087176-Vatakis1">
<label>24</label>
<mixed-citation publication-type="journal">
<name>
<surname>Vatakis</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Navarra</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Soto-Faraco</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Spence</surname>
<given-names>C</given-names>
</name>
(
<year>2008</year>
)
<article-title>Audiovisual temporal adaptation of speech: temporal order versus simultaneity judgments</article-title>
.
<source>Exp Brain Res</source>
<volume>185</volume>
:
<fpage>521</fpage>
<lpage>529</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1007/s00221-007-1168-9">10.1007/s00221-007-1168-9</ext-link>
</comment>
<pub-id pub-id-type="pmid">17962929</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0087176-Donohue1">
<label>25</label>
<mixed-citation publication-type="journal">
<name>
<surname>Donohue</surname>
<given-names>SE</given-names>
</name>
,
<name>
<surname>Woldorff</surname>
<given-names>MG</given-names>
</name>
,
<name>
<surname>Mitroff</surname>
<given-names>SR</given-names>
</name>
(
<year>2010</year>
)
<article-title>Video game players show more precise multisensory temporal processing abilities</article-title>
.
<source>Atten Percept Psychophys</source>
<volume>72</volume>
:
<fpage>1120</fpage>
<lpage>1129</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.3758/APP.72.4.1120">10.3758/APP.72.4.1120</ext-link>
</comment>
<pub-id pub-id-type="pmid">20436205</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0087176-Gates1">
<label>26</label>
<mixed-citation publication-type="journal">
<name>
<surname>Gates</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Bradshaw</surname>
<given-names>JL</given-names>
</name>
,
<name>
<surname>Nettleton</surname>
<given-names>NC</given-names>
</name>
(
<year>1974</year>
)
<article-title>Effect of different delayed auditory feedback intervals on a music performance task</article-title>
.
<source>Percept Psychophys</source>
<volume>15</volume>
:
<fpage>21</fpage>
<lpage>25</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.3758/BF03205822">10.3758/BF03205822</ext-link>
</comment>
</mixed-citation>
</ref>
<ref id="pone.0087176-Pfordresher1">
<label>27</label>
<mixed-citation publication-type="journal">
<name>
<surname>Pfordresher</surname>
<given-names>P</given-names>
</name>
(
<year>2003</year>
)
<article-title>Auditory feedback in music performance: Evidence for a dissociation of sequencing and timing</article-title>
.
<source>J Exp Psychol Hum Percept Perform</source>
<volume>29</volume>
:
<fpage>949</fpage>
<lpage>964</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1037/0096-1523.29.5.949">10.1037/0096-1523.29.5.949</ext-link>
</comment>
<pub-id pub-id-type="pmid">14585016</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0087176-Pfordresher2">
<label>28</label>
<mixed-citation publication-type="journal">
<name>
<surname>Pfordresher</surname>
<given-names>P</given-names>
</name>
,
<name>
<surname>Palmer</surname>
<given-names>C</given-names>
</name>
(
<year>2002</year>
)
<article-title>Effects of delayed auditory feedback on timing of music performance</article-title>
.
<source>Psychol Res</source>
<volume>66</volume>
:
<fpage>71</fpage>
<lpage>79</lpage>
<pub-id pub-id-type="pmid">11963280</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0087176-Stuart1">
<label>29</label>
<mixed-citation publication-type="journal">
<name>
<surname>Stuart</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Kalinowski</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Rastatter</surname>
<given-names>MP</given-names>
</name>
,
<name>
<surname>Lynch</surname>
<given-names>K</given-names>
</name>
(
<year>2002</year>
)
<article-title>Effect of delayed auditory feedback on normal speakers at two speech rates</article-title>
.
<source>J Acoust Soc Am</source>
<volume>111</volume>
:
<fpage>2237</fpage>
<lpage>2241</lpage>
<pub-id pub-id-type="pmid">12051443</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0087176-Yates1">
<label>30</label>
<mixed-citation publication-type="journal">
<name>
<surname>Yates</surname>
<given-names>AJ</given-names>
</name>
(
<year>1963</year>
)
<article-title>Delayed auditory feedback</article-title>
.
<source>Psychol Bull</source>
<volume>60</volume>
:
<fpage>213</fpage>
<lpage>232</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1037/h0044155">10.1037/h0044155</ext-link>
</comment>
<pub-id pub-id-type="pmid">14002534</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0087176-Kaspar1">
<label>31</label>
<mixed-citation publication-type="journal">
<name>
<surname>Kaspar</surname>
<given-names>K</given-names>
</name>
,
<name>
<surname>Rübeling</surname>
<given-names>H</given-names>
</name>
(
<year>2011</year>
)
<article-title>Rhythmic versus phonemic interference in delayed auditory feedback</article-title>
.
<source>J Speech Lang Hear Res</source>
<volume>54</volume>
:
<fpage>932</fpage>
<lpage>943</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1044/1092-4388(2010/10-0109)">10.1044/1092-4388(2010/10-0109)</ext-link>
</comment>
</mixed-citation>
</ref>
<ref id="pone.0087176-Swink1">
<label>32</label>
<mixed-citation publication-type="journal">
<name>
<surname>Swink</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Stuart</surname>
<given-names>A</given-names>
</name>
(
<year>2012</year>
)
<article-title>The effect of gender on the N1-P2 auditory complex while listening and speaking with altered auditory feedback</article-title>
.
<source>Brain Lang</source>
<volume>122</volume>
:
<fpage>25</fpage>
<lpage>33</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.bandl.2012.04.007">10.1016/j.bandl.2012.04.007</ext-link>
</comment>
<pub-id pub-id-type="pmid">22564750</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0087176-Repp1">
<label>33</label>
<mixed-citation publication-type="journal">
<name>
<surname>Repp</surname>
<given-names>BH</given-names>
</name>
(
<year>2005</year>
)
<article-title>Sensorimotor synchronization: a review of the tapping literature</article-title>
.
<source>Psychon Bull Rev</source>
<volume>12</volume>
:
<fpage>969</fpage>
<lpage>992</lpage>
<pub-id pub-id-type="pmid">16615317</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0087176-Repp2">
<label>34</label>
<mixed-citation publication-type="journal">
<name>
<surname>Repp</surname>
<given-names>BH</given-names>
</name>
,
<name>
<surname>Su</surname>
<given-names>Y-H</given-names>
</name>
(
<year>2013</year>
)
<article-title>Sensorimotor synchronization: A review of recent research (2006–2012)</article-title>
.
<source>Psychon Bull Rev</source>
<fpage>1</fpage>
<lpage>50</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.3758/s13423-012-0371-2">10.3758/s13423-012-0371-2</ext-link>
</comment>
<pub-id pub-id-type="pmid">23090749</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0087176-Tillmann1">
<label>35</label>
<mixed-citation publication-type="journal">
<name>
<surname>Tillmann</surname>
<given-names>B</given-names>
</name>
,
<name>
<surname>Stevens</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Keller</surname>
<given-names>PE</given-names>
</name>
(
<year>2011</year>
)
<article-title>Learning of timing patterns and the development of temporal expectations</article-title>
.
<source>Psychol Res</source>
<volume>75</volume>
:
<fpage>243</fpage>
<lpage>258</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1007/s00426-010-0302-7">10.1007/s00426-010-0302-7</ext-link>
</comment>
<pub-id pub-id-type="pmid">20683612</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0087176-Wing1">
<label>36</label>
<mixed-citation publication-type="journal">
<name>
<surname>Wing</surname>
<given-names>AM</given-names>
</name>
,
<name>
<surname>Kristofferson</surname>
<given-names>AB</given-names>
</name>
(
<year>1973</year>
)
<article-title>Response delays and the timing of discrete motor responses</article-title>
.
<source>Percept Psychophys</source>
<volume>14</volume>
:
<fpage>5</fpage>
<lpage>12</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.3758/BF03198607">10.3758/BF03198607</ext-link>
</comment>
</mixed-citation>
</ref>
<ref id="pone.0087176-Ehrl1">
<label>37</label>
<mixed-citation publication-type="journal">
<name>
<surname>Ehrlé</surname>
<given-names>N</given-names>
</name>
,
<name>
<surname>Samson</surname>
<given-names>S</given-names>
</name>
(
<year>2005</year>
)
<article-title>Auditory discrimination of anisochrony: influence of the tempo and musical backgrounds of listeners</article-title>
.
<source>Brain Cogn</source>
<volume>58</volume>
:
<fpage>133</fpage>
<lpage>147</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.bandc.2004.09.014">10.1016/j.bandc.2004.09.014</ext-link>
</comment>
<pub-id pub-id-type="pmid">15878734</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0087176-Yee1">
<label>38</label>
<mixed-citation publication-type="journal">
<name>
<surname>Yee</surname>
<given-names>W</given-names>
</name>
,
<name>
<surname>Holleran</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Jones</surname>
<given-names>MR</given-names>
</name>
(
<year>1994</year>
)
<article-title>Sensitivity to event timing in regular and irregular sequences: influences of musical skill</article-title>
.
<source>Percept Psychophys</source>
<volume>56</volume>
:
<fpage>461</fpage>
<lpage>471</lpage>
<pub-id pub-id-type="pmid">7984401</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0087176-Aschersleben1">
<label>39</label>
<mixed-citation publication-type="journal">
<name>
<surname>Aschersleben</surname>
<given-names>G</given-names>
</name>
(
<year>2002</year>
)
<article-title>Temporal Control of Movements in Sensorimotor Synchronization</article-title>
.
<source>Brain Cogn</source>
<volume>48</volume>
:
<fpage>66</fpage>
<lpage>79</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1006/brcg.2001.1304">10.1006/brcg.2001.1304</ext-link>
</comment>
<pub-id pub-id-type="pmid">11812033</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0087176-Repp3">
<label>40</label>
<mixed-citation publication-type="journal">
<name>
<surname>Repp</surname>
<given-names>BH</given-names>
</name>
(
<year>2004</year>
)
<article-title>On the nature of phase attraction in sensorimotor synchronization with interleaved auditory sequences</article-title>
.
<source>Hum Mov Sci</source>
<volume>23</volume>
:
<fpage>389</fpage>
<lpage>413</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.humov.2004.08.014">10.1016/j.humov.2004.08.014</ext-link>
</comment>
<pub-id pub-id-type="pmid">15541525</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0087176-Hyde1">
<label>41</label>
<mixed-citation publication-type="journal">
<name>
<surname>Hyde</surname>
<given-names>KL</given-names>
</name>
,
<name>
<surname>Peretz</surname>
<given-names>I</given-names>
</name>
(
<year>2004</year>
)
<article-title>Brains That Are Out of Tune but in Time</article-title>
.
<source>Psychol Sci</source>
<volume>15</volume>
:
<fpage>356</fpage>
<lpage>360</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1111/j.0956-7976.2004.00683.x">10.1111/j.0956-7976.2004.00683.x</ext-link>
</comment>
<pub-id pub-id-type="pmid">15102148</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0087176-Green1">
<label>42</label>
<mixed-citation publication-type="journal">
<name>
<surname>Green</surname>
<given-names>DM</given-names>
</name>
(
<year>1993</year>
)
<article-title>A maximum-likelihood method for estimating thresholds in a yes–no task</article-title>
.
<source>J Acoust Soc Am</source>
<volume>93</volume>
:
<fpage>2096</fpage>
<lpage>2105</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1121/1.406696">10.1121/1.406696</ext-link>
</comment>
<pub-id pub-id-type="pmid">8473622</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0087176-Gu1">
<label>43</label>
<mixed-citation publication-type="journal">
<name>
<surname>Gu</surname>
<given-names>X</given-names>
</name>
,
<name>
<surname>Green</surname>
<given-names>DM</given-names>
</name>
(
<year>1994</year>
)
<article-title>Further studies of a maximum-likelihood yes–no procedure</article-title>
.
<source>J Acoust Soc Am</source>
<volume>96</volume>
:
<fpage>93</fpage>
<lpage>101</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1121/1.410378">10.1121/1.410378</ext-link>
</comment>
<pub-id pub-id-type="pmid">8064025</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0087176-Saberi1">
<label>44</label>
<mixed-citation publication-type="journal">
<name>
<surname>Saberi</surname>
<given-names>K</given-names>
</name>
,
<name>
<surname>Green</surname>
<given-names>DM</given-names>
</name>
(
<year>1997</year>
)
<article-title>Evaluation of maximum-likelihood estimators in nonintensive auditory psychophysics</article-title>
.
<source>Percept Psychophys</source>
<volume>59</volume>
:
<fpage>867</fpage>
<lpage>876</lpage>
<pub-id pub-id-type="pmid">9270361</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0087176-Leek1">
<label>45</label>
<mixed-citation publication-type="journal">
<name>
<surname>Leek</surname>
<given-names>MR</given-names>
</name>
,
<name>
<surname>Dubno</surname>
<given-names>JR</given-names>
</name>
,
<name>
<surname>He</surname>
<given-names>N</given-names>
</name>
,
<name>
<surname>Ahlstrom</surname>
<given-names>JB</given-names>
</name>
(
<year>2000</year>
)
<article-title>Experience with a yes-no single-interval maximum-likelihood procedure</article-title>
.
<source>J Acoust Soc Am</source>
<volume>107</volume>
:
<fpage>2674</fpage>
<lpage>2684</lpage>
<pub-id pub-id-type="pmid">10830389</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0087176-Fisher1">
<label>46</label>
<mixed-citation publication-type="book">Fisher NI (1995) Statistical Analysis of Circular Data. Cambridge University Press. 300 p.</mixed-citation>
</ref>
<ref id="pone.0087176-Drewing1">
<label>47</label>
<mixed-citation publication-type="journal">
<name>
<surname>Drewing</surname>
<given-names>K</given-names>
</name>
,
<name>
<surname>Stenneken</surname>
<given-names>P</given-names>
</name>
,
<name>
<surname>Cole</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Prinz</surname>
<given-names>W</given-names>
</name>
,
<name>
<surname>Aschersleben</surname>
<given-names>G</given-names>
</name>
(
<year>2004</year>
)
<article-title>Timing of bimanual movements and deafferentation: implications for the role of sensory movement effects</article-title>
.
<source>Exp Brain Res</source>
<volume>158</volume>
:
<fpage>50</fpage>
<lpage>57</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1007/s00221-004-1870-9">10.1007/s00221-004-1870-9</ext-link>
</comment>
<pub-id pub-id-type="pmid">15007586</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0087176-Helmuth1">
<label>48</label>
<mixed-citation publication-type="journal">
<name>
<surname>Helmuth</surname>
<given-names>LL</given-names>
</name>
,
<name>
<surname>Ivry</surname>
<given-names>RB</given-names>
</name>
(
<year>1996</year>
)
<article-title>When two hands are better than one: reduced timing variability during bimanual movements</article-title>
.
<source>J Exp Psychol Hum Percept Perform</source>
<volume>22</volume>
:
<fpage>278</fpage>
<lpage>293</lpage>
<pub-id pub-id-type="pmid">8934844</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0087176-Keele1">
<label>49</label>
<mixed-citation publication-type="journal">
<name>
<surname>Keele</surname>
<given-names>SW</given-names>
</name>
,
<name>
<surname>Pokorny</surname>
<given-names>RA</given-names>
</name>
,
<name>
<surname>Corcos</surname>
<given-names>DM</given-names>
</name>
,
<name>
<surname>Ivry</surname>
<given-names>R</given-names>
</name>
(
<year>1985</year>
)
<article-title>Do perception and motor production share common timing mechanisms: A correlational analysis</article-title>
.
<source>Acta Psychol (Amst)</source>
<volume>60</volume>
:
<fpage>173</fpage>
<lpage>191</lpage>
<pub-id pub-id-type="pmid">4091033</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0087176-Bakeman1">
<label>50</label>
<mixed-citation publication-type="journal">
<name>
<surname>Bakeman</surname>
<given-names>R</given-names>
</name>
(
<year>2005</year>
)
<article-title>Recommended effect size statistics for repeated measures designs</article-title>
.
<source>Behav Res Methods</source>
<volume>37</volume>
:
<fpage>379</fpage>
<lpage>384</lpage>
<pub-id pub-id-type="pmid">16405133</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0087176-Friston1">
<label>51</label>
<mixed-citation publication-type="journal">
<name>
<surname>Friston</surname>
<given-names>K</given-names>
</name>
(
<year>2012</year>
)
<article-title>Prediction, perception and agency</article-title>
.
<source>Int J Psychophysiol</source>
<volume>83</volume>
:
<fpage>248</fpage>
<lpage>252</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.ijpsycho.2011.11.014">10.1016/j.ijpsycho.2011.11.014</ext-link>
</comment>
<pub-id pub-id-type="pmid">22178504</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0087176-Stevenson1">
<label>52</label>
<mixed-citation publication-type="journal">
<name>
<surname>Stevenson</surname>
<given-names>RA</given-names>
</name>
,
<name>
<surname>Wallace</surname>
<given-names>MT</given-names>
</name>
(
<year>2013</year>
)
<article-title>Multisensory temporal integration: task and stimulus dependencies</article-title>
.
<source>Exp Brain Res</source>
<volume>227</volume>
:
<fpage>249</fpage>
<lpage>261</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1007/s00221-013-3507-3">10.1007/s00221-013-3507-3</ext-link>
</comment>
<pub-id pub-id-type="pmid">23604624</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0087176-Kraus1">
<label>53</label>
<mixed-citation publication-type="journal">
<name>
<surname>Kraus</surname>
<given-names>N</given-names>
</name>
,
<name>
<surname>Chandrasekaran</surname>
<given-names>B</given-names>
</name>
(
<year>2010</year>
)
<article-title>Music training for the development of auditory skills</article-title>
.
<source>Nat Rev Neurosci</source>
<volume>11</volume>
:
<fpage>599</fpage>
<lpage>605</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1038/nrn2882">10.1038/nrn2882</ext-link>
</comment>
<pub-id pub-id-type="pmid">20648064</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0087176-Gaser1">
<label>54</label>
<mixed-citation publication-type="journal">
<name>
<surname>Gaser</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Schlaug</surname>
<given-names>G</given-names>
</name>
(
<year>2003</year>
)
<article-title>Brain Structures Differ between Musicians and Non-Musicians</article-title>
.
<source>J Neurosci</source>
<volume>23</volume>
:
<fpage>9240</fpage>
<lpage>9245</lpage>
<pub-id pub-id-type="pmid">14534258</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0087176-Herholz1">
<label>55</label>
<mixed-citation publication-type="journal">
<name>
<surname>Herholz</surname>
<given-names>SC</given-names>
</name>
,
<name>
<surname>Zatorre</surname>
<given-names>RJ</given-names>
</name>
(
<year>2012</year>
)
<article-title>Musical Training as a Framework for Brain Plasticity: Behavior, Function, and Structure</article-title>
.
<source>Neuron</source>
<volume>76</volume>
:
<fpage>486</fpage>
<lpage>502</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.neuron.2012.10.011">10.1016/j.neuron.2012.10.011</ext-link>
</comment>
<pub-id pub-id-type="pmid">23141061</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0087176-Alais1">
<label>56</label>
<mixed-citation publication-type="journal">
<name>
<surname>Alais</surname>
<given-names>D</given-names>
</name>
,
<name>
<surname>Cass</surname>
<given-names>J</given-names>
</name>
(
<year>2010</year>
)
<article-title>Multisensory Perceptual Learning of Temporal Order: Audiovisual Learning Transfers to Vision but Not Audition</article-title>
.
<source>PLoS ONE</source>
<volume>5</volume>
:
<fpage>e11283</fpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1371/journal.pone.0011283">10.1371/journal.pone.0011283</ext-link>
</comment>
<pub-id pub-id-type="pmid">20585664</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0087176-Powers1">
<label>57</label>
<mixed-citation publication-type="journal">
<name>
<surname>Powers</surname>
<given-names>AR</given-names>
</name>
,
<name>
<surname>Hillock</surname>
<given-names>AR</given-names>
</name>
,
<name>
<surname>Wallace</surname>
<given-names>MT</given-names>
</name>
(
<year>2009</year>
)
<article-title>Perceptual training narrows the temporal window of multisensory binding</article-title>
.
<source>J Neurosci</source>
<volume>29</volume>
:
<fpage>12265</fpage>
<lpage>12274</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1523/JNEUROSCI.3501-09.2009">10.1523/JNEUROSCI.3501-09.2009</ext-link>
</comment>
<pub-id pub-id-type="pmid">19793985</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0087176-Haggard1">
<label>58</label>
<mixed-citation publication-type="journal">
<name>
<surname>Haggard</surname>
<given-names>P</given-names>
</name>
,
<name>
<surname>Clark</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Kalogeras</surname>
<given-names>J</given-names>
</name>
(
<year>2002</year>
)
<article-title>Voluntary action and conscious awareness</article-title>
.
<source>Nat Neurosci</source>
<volume>5</volume>
:
<fpage>382</fpage>
<lpage>385</lpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1038/nn827">10.1038/nn827</ext-link>
</comment>
<pub-id pub-id-type="pmid">11896397</pub-id>
</mixed-citation>
</ref>
</ref-list>
</back>
</pmc>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Pmc/Curation
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 002330 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd -nk 002330 | SxmlIndent | more

To add a link to this page within the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Pmc
   |étape=   Curation
   |type=    RBID
   |clé=     PMC:3911931
   |texte=   Thresholds of Auditory-Motor Coupling Measured with a Simple Task in Musicians and Non-Musicians: Was the Sound Simultaneous to the Key Press?
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Curation/RBID.i   -Sk "pubmed:24498299" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024