Exploration server on haptic devices

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information has therefore not been validated.

Moving in time: Bayesian causal inference explains movement coordination to auditory beats

Internal identifier: 003017 (Ncbi/Merge); previous: 003016; next: 003018

Authors: Mark T. Elliott [United Kingdom]; Alan M. Wing [United Kingdom]; Andrew E. Welchman [United Kingdom]

Source:

RBID: PMC:4046422

Abstract

Many everyday skilled actions depend on moving in time with signals that are embedded in complex auditory streams (e.g. musical performance, dancing or simply holding a conversation). Such behaviour is apparently effortless; however, it is not known how humans combine auditory signals to support movement production and coordination. Here, we test how participants synchronize their movements when there are potentially conflicting auditory targets to guide their actions. Participants tapped their fingers in time with two simultaneously presented metronomes of equal tempo, but differing in phase and temporal regularity. Synchronization therefore depended on integrating the two timing cues into a single-event estimate or treating the cues as independent and thereby selecting one signal over the other. We show that a Bayesian inference process explains the situations in which participants choose to integrate or separate signals, and predicts motor timing errors. Simulations of this causal inference process demonstrate that this model provides a better description of the data than other plausible models. Our findings suggest that humans exploit a Bayesian inference process to control movement timing in situations where the origin of auditory signals needs to be resolved.


URL:
DOI: 10.1098/rspb.2014.0751
PubMed: 24850915
PubMed Central: 4046422


The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Moving in time: Bayesian causal inference explains movement coordination to auditory beats</title>
<author>
<name sortKey="Elliott, Mark T" sort="Elliott, Mark T" uniqKey="Elliott M" first="Mark T." last="Elliott">Mark T. Elliott</name>
<affiliation wicri:level="1">
<nlm:aff id="af1">
<addr-line>School of Psychology</addr-line>
,
<institution>University of Birmingham</institution>
,
<addr-line>Edgbaston B15 2TT</addr-line>
,
<country>UK</country>
</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Wing, Alan M" sort="Wing, Alan M" uniqKey="Wing A" first="Alan M." last="Wing">Alan M. Wing</name>
<affiliation wicri:level="1">
<nlm:aff id="af1">
<addr-line>School of Psychology</addr-line>
,
<institution>University of Birmingham</institution>
,
<addr-line>Edgbaston B15 2TT</addr-line>
,
<country>UK</country>
</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Welchman, Andrew E" sort="Welchman, Andrew E" uniqKey="Welchman A" first="Andrew E." last="Welchman">Andrew E. Welchman</name>
<affiliation wicri:level="1">
<nlm:aff id="af2">
<addr-line>Department of Psychology</addr-line>
,
<institution>University of Cambridge</institution>
,
<addr-line>Cambridge CB2 3EB</addr-line>
,
<country>UK</country>
</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">24850915</idno>
<idno type="pmc">4046422</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4046422</idno>
<idno type="RBID">PMC:4046422</idno>
<idno type="doi">10.1098/rspb.2014.0751</idno>
<date when="2014">2014</date>
<idno type="wicri:Area/Pmc/Corpus">002455</idno>
<idno type="wicri:Area/Pmc/Curation">002455</idno>
<idno type="wicri:Area/Pmc/Checkpoint">000A95</idno>
<idno type="wicri:Area/Ncbi/Merge">003017</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Moving in time: Bayesian causal inference explains movement coordination to auditory beats</title>
<author>
<name sortKey="Elliott, Mark T" sort="Elliott, Mark T" uniqKey="Elliott M" first="Mark T." last="Elliott">Mark T. Elliott</name>
<affiliation wicri:level="1">
<nlm:aff id="af1">
<addr-line>School of Psychology</addr-line>
,
<institution>University of Birmingham</institution>
,
<addr-line>Edgbaston B15 2TT</addr-line>
,
<country>UK</country>
</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Wing, Alan M" sort="Wing, Alan M" uniqKey="Wing A" first="Alan M." last="Wing">Alan M. Wing</name>
<affiliation wicri:level="1">
<nlm:aff id="af1">
<addr-line>School of Psychology</addr-line>
,
<institution>University of Birmingham</institution>
,
<addr-line>Edgbaston B15 2TT</addr-line>
,
<country>UK</country>
</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Welchman, Andrew E" sort="Welchman, Andrew E" uniqKey="Welchman A" first="Andrew E." last="Welchman">Andrew E. Welchman</name>
<affiliation wicri:level="1">
<nlm:aff id="af2">
<addr-line>Department of Psychology</addr-line>
,
<institution>University of Cambridge</institution>
,
<addr-line>Cambridge CB2 3EB</addr-line>
,
<country>UK</country>
</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Proceedings of the Royal Society B: Biological Sciences</title>
<idno type="ISSN">0962-8452</idno>
<idno type="eISSN">1471-2954</idno>
<imprint>
<date when="2014">2014</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>Many everyday skilled actions depend on moving in time with signals that are embedded in complex auditory streams (e.g. musical performance, dancing or simply holding a conversation). Such behaviour is apparently effortless; however, it is not known how humans combine auditory signals to support movement production and coordination. Here, we test how participants synchronize their movements when there are potentially conflicting auditory targets to guide their actions. Participants tapped their fingers in time with two simultaneously presented metronomes of equal tempo, but differing in phase and temporal regularity. Synchronization therefore depended on integrating the two timing cues into a single-event estimate or treating the cues as independent and thereby selecting one signal over the other. We show that a Bayesian inference process explains the situations in which participants choose to integrate or separate signals, and predicts motor timing errors. Simulations of this causal inference process demonstrate that this model provides a better description of the data than other plausible models. Our findings suggest that humans exploit a Bayesian inference process to control movement timing in situations where the origin of auditory signals needs to be resolved.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Grahn, Ja" uniqKey="Grahn J">JA Grahn</name>
</author>
<author>
<name sortKey="Rowe, Jb" uniqKey="Rowe J">JB Rowe</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zatorre, Rj" uniqKey="Zatorre R">RJ Zatorre</name>
</author>
<author>
<name sortKey="Chen, Jl" uniqKey="Chen J">JL Chen</name>
</author>
<author>
<name sortKey="Penhune, Vb" uniqKey="Penhune V">VB Penhune</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Grahn, Ja" uniqKey="Grahn J">JA Grahn</name>
</author>
<author>
<name sortKey="Brett, M" uniqKey="Brett M">M Brett</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mesgarani, N" uniqKey="Mesgarani N">N Mesgarani</name>
</author>
<author>
<name sortKey="Chang, Ef" uniqKey="Chang E">EF Chang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wing, Am" uniqKey="Wing A">AM Wing</name>
</author>
<author>
<name sortKey="Endo, S" uniqKey="Endo S">S Endo</name>
</author>
<author>
<name sortKey="Bradbury, A" uniqKey="Bradbury A">A Bradbury</name>
</author>
<author>
<name sortKey="Vorberg, D" uniqKey="Vorberg D">D Vorberg</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Moore, Bcj" uniqKey="Moore B">BCJ Moore</name>
</author>
<author>
<name sortKey="Gockel, He" uniqKey="Gockel H">HE Gockel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kording, Kp" uniqKey="Kording K">KP Körding</name>
</author>
<author>
<name sortKey="Beierholm, U" uniqKey="Beierholm U">U Beierholm</name>
</author>
<author>
<name sortKey="Ma, Wj" uniqKey="Ma W">WJ Ma</name>
</author>
<author>
<name sortKey="Quartz, S" uniqKey="Quartz S">S Quartz</name>
</author>
<author>
<name sortKey="Tenenbaum, Jb" uniqKey="Tenenbaum J">JB Tenenbaum</name>
</author>
<author>
<name sortKey="Shams, L" uniqKey="Shams L">L Shams</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Shams, L" uniqKey="Shams L">L Shams</name>
</author>
<author>
<name sortKey="Beierholm, Ur" uniqKey="Beierholm U">UR Beierholm</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sato, Y" uniqKey="Sato Y">Y Sato</name>
</author>
<author>
<name sortKey="Toyoizumi, T" uniqKey="Toyoizumi T">T Toyoizumi</name>
</author>
<author>
<name sortKey="Aihara, K" uniqKey="Aihara K">K Aihara</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Roach, Nw" uniqKey="Roach N">NW Roach</name>
</author>
<author>
<name sortKey="Heron, J" uniqKey="Heron J">J Heron</name>
</author>
<author>
<name sortKey="Mcgraw, Pv" uniqKey="Mcgraw P">PV McGraw</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ernst, Mo" uniqKey="Ernst M">MO Ernst</name>
</author>
<author>
<name sortKey="Banks, Ms" uniqKey="Banks M">MS Banks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Alais, D" uniqKey="Alais D">D Alais</name>
</author>
<author>
<name sortKey="Burr, D" uniqKey="Burr D">D Burr</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Beers, Rj" uniqKey="Van Beers R">RJ van Beers</name>
</author>
<author>
<name sortKey="Sittig, Ac" uniqKey="Sittig A">AC Sittig</name>
</author>
<author>
<name sortKey="Van Der Gon Denier, Jj" uniqKey="Van Der Gon Denier J">JJ van der Gon Denier</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ban, H" uniqKey="Ban H">H Ban</name>
</author>
<author>
<name sortKey="Preston, Tj" uniqKey="Preston T">TJ Preston</name>
</author>
<author>
<name sortKey="Meeson, A" uniqKey="Meeson A">A Meeson</name>
</author>
<author>
<name sortKey="Welchman, Ae" uniqKey="Welchman A">AE Welchman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hillis, Jm" uniqKey="Hillis J">JM Hillis</name>
</author>
<author>
<name sortKey="Ernst, Mo" uniqKey="Ernst M">MO Ernst</name>
</author>
<author>
<name sortKey="Banks, Ms" uniqKey="Banks M">MS Banks</name>
</author>
<author>
<name sortKey="Landy, Ms" uniqKey="Landy M">MS Landy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Knill, Dc" uniqKey="Knill D">DC Knill</name>
</author>
<author>
<name sortKey="Saunders, Ja" uniqKey="Saunders J">JA Saunders</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Elliott, Mt" uniqKey="Elliott M">MT Elliott</name>
</author>
<author>
<name sortKey="Wing, Am" uniqKey="Wing A">AM Wing</name>
</author>
<author>
<name sortKey="Welchman, Ae" uniqKey="Welchman A">AE Welchman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Elliott, Mt" uniqKey="Elliott M">MT Elliott</name>
</author>
<author>
<name sortKey="Welchman, Ae" uniqKey="Welchman A">AE Welchman</name>
</author>
<author>
<name sortKey="Wing, Am" uniqKey="Wing A">AM Wing</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Peters, M" uniqKey="Peters M">M Peters</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Grotowski, Zib" uniqKey="Grotowski Z">ZIB Grotowski</name>
</author>
<author>
<name sortKey="Botev, Jf" uniqKey="Botev J">JF Botev</name>
</author>
<author>
<name sortKey="Kroese, Dp" uniqKey="Kroese D">DP Kroese</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vorberg, D" uniqKey="Vorberg D">D Vorberg</name>
</author>
<author>
<name sortKey="Wing, Am" uniqKey="Wing A">AM Wing</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vorberg, D" uniqKey="Vorberg D">D Vorberg</name>
</author>
<author>
<name sortKey="Schulze, Hh" uniqKey="Schulze H">HH Schulze</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Repp, Bh" uniqKey="Repp B">BH Repp</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Oldenhuis, R" uniqKey="Oldenhuis R">R Oldenhuis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schwarz, G" uniqKey="Schwarz G">G Schwarz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stocker, Aa" uniqKey="Stocker A">AA Stocker</name>
</author>
<author>
<name sortKey="Simoncelli, Ep" uniqKey="Simoncelli E">EP Simoncelli</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Repp, Bh" uniqKey="Repp B">BH Repp</name>
</author>
<author>
<name sortKey="Su, Y H" uniqKey="Su Y">Y-H Su</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wing, Am" uniqKey="Wing A">AM Wing</name>
</author>
<author>
<name sortKey="Doumas, M" uniqKey="Doumas M">M Doumas</name>
</author>
<author>
<name sortKey="Welchman, Ae" uniqKey="Welchman A">AE Welchman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Elliott, Mt" uniqKey="Elliott M">MT Elliott</name>
</author>
<author>
<name sortKey="Wing, Am" uniqKey="Wing A">AM Wing</name>
</author>
<author>
<name sortKey="Welchman, Ae" uniqKey="Welchman A">AE Welchman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hanson, J" uniqKey="Hanson J">J Hanson</name>
</author>
<author>
<name sortKey="Heron, J" uniqKey="Heron J">J Heron</name>
</author>
<author>
<name sortKey="Whitaker, D" uniqKey="Whitaker D">D Whitaker</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Roach, Nw" uniqKey="Roach N">NW Roach</name>
</author>
<author>
<name sortKey="Heron, J" uniqKey="Heron J">J Heron</name>
</author>
<author>
<name sortKey="Whitaker, D" uniqKey="Whitaker D">D Whitaker</name>
</author>
<author>
<name sortKey="Mcgraw, Pv" uniqKey="Mcgraw P">PV McGraw</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Teki, S" uniqKey="Teki S">S Teki</name>
</author>
<author>
<name sortKey="Grube, M" uniqKey="Grube M">M Grube</name>
</author>
<author>
<name sortKey="Kumar, S" uniqKey="Kumar S">S Kumar</name>
</author>
<author>
<name sortKey="Griffiths, Td" uniqKey="Griffiths T">TD Griffiths</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Coull, Jt" uniqKey="Coull J">JT Coull</name>
</author>
<author>
<name sortKey="Davranche, K" uniqKey="Davranche K">K Davranche</name>
</author>
<author>
<name sortKey="Nazarian, B" uniqKey="Nazarian B">B Nazarian</name>
</author>
<author>
<name sortKey="Vidal, F" uniqKey="Vidal F">F Vidal</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Proc Biol Sci</journal-id>
<journal-id journal-id-type="iso-abbrev">Proc. Biol. Sci</journal-id>
<journal-id journal-id-type="publisher-id">RSPB</journal-id>
<journal-id journal-id-type="hwp">royprsb</journal-id>
<journal-title-group>
<journal-title>Proceedings of the Royal Society B: Biological Sciences</journal-title>
</journal-title-group>
<issn pub-type="ppub">0962-8452</issn>
<issn pub-type="epub">1471-2954</issn>
<publisher>
<publisher-name>The Royal Society</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">24850915</article-id>
<article-id pub-id-type="pmc">4046422</article-id>
<article-id pub-id-type="doi">10.1098/rspb.2014.0751</article-id>
<article-id pub-id-type="publisher-id">rspb20140751</article-id>
<article-categories>
<subj-group subj-group-type="hwp-journal-coll">
<subject>1001</subject>
<subject>44</subject>
<subject>14</subject>
<subject>133</subject>
</subj-group>
<subj-group subj-group-type="heading">
<subject>Research Articles</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Moving in time: Bayesian causal inference explains movement coordination to auditory beats</article-title>
<alt-title alt-title-type="short">Bayesian inference in movement timing</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Elliott</surname>
<given-names>Mark T.</given-names>
</name>
<xref ref-type="aff" rid="af1">1</xref>
<xref ref-type="corresp" rid="cor1"></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Wing</surname>
<given-names>Alan M.</given-names>
</name>
<xref ref-type="aff" rid="af1">1</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Welchman</surname>
<given-names>Andrew E.</given-names>
</name>
<xref ref-type="aff" rid="af2">2</xref>
</contrib>
</contrib-group>
<aff id="af1">
<label>1</label>
<addr-line>School of Psychology</addr-line>
,
<institution>University of Birmingham</institution>
,
<addr-line>Edgbaston B15 2TT</addr-line>
,
<country>UK</country>
</aff>
<aff id="af2">
<label>2</label>
<addr-line>Department of Psychology</addr-line>
,
<institution>University of Cambridge</institution>
,
<addr-line>Cambridge CB2 3EB</addr-line>
,
<country>UK</country>
</aff>
<author-notes>
<corresp id="cor1">e-mail:
<email>m.t.elliott@bham.ac.uk</email>
</corresp>
</author-notes>
<pub-date pub-type="ppub">
<day>7</day>
<month>7</month>
<year>2014</year>
</pub-date>
<pub-date pub-type="pmc-release">
<day>7</day>
<month>7</month>
<year>2014</year>
</pub-date>
<pmc-comment> PMC Release delay is 0 months and 0 days and was based on the . </pmc-comment>
<volume>281</volume>
<issue>1786</issue>
<elocation-id>20140751</elocation-id>
<history>
<date date-type="received">
<day>27</day>
<month>3</month>
<year>2014</year>
</date>
<date date-type="accepted">
<day>16</day>
<month>4</month>
<year>2014</year>
</date>
</history>
<permissions>
<copyright-statement></copyright-statement>
<copyright-year>2014</copyright-year>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/3.0/">
<license-p>© 2014 The Authors. Published by the Royal Society under the terms of the Creative Commons Attribution License
<ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by/3.0/">http://creativecommons.org/licenses/by/3.0/</ext-link>
, which permits unrestricted use, provided the original author and source are credited.</license-p>
</license>
</permissions>
<self-uri content-type="pdf" xlink:type="simple" xlink:href="rspb20140751.pdf"></self-uri>
<abstract>
<p>Many everyday skilled actions depend on moving in time with signals that are embedded in complex auditory streams (e.g. musical performance, dancing or simply holding a conversation). Such behaviour is apparently effortless; however, it is not known how humans combine auditory signals to support movement production and coordination. Here, we test how participants synchronize their movements when there are potentially conflicting auditory targets to guide their actions. Participants tapped their fingers in time with two simultaneously presented metronomes of equal tempo, but differing in phase and temporal regularity. Synchronization therefore depended on integrating the two timing cues into a single-event estimate or treating the cues as independent and thereby selecting one signal over the other. We show that a Bayesian inference process explains the situations in which participants choose to integrate or separate signals, and predicts motor timing errors. Simulations of this causal inference process demonstrate that this model provides a better description of the data than other plausible models. Our findings suggest that humans exploit a Bayesian inference process to control movement timing in situations where the origin of auditory signals needs to be resolved.</p>
</abstract>
<kwd-group>
<kwd>movement synchronization</kwd>
<kwd>Bayesian inference</kwd>
<kwd>sensory integration</kwd>
<kwd>motor timing</kwd>
</kwd-group>
<custom-meta-group>
<custom-meta>
<meta-name>cover-date</meta-name>
<meta-value>July 7, 2014</meta-value>
</custom-meta>
</custom-meta-group>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="s1">
<label>1.</label>
<title>Introduction</title>
<p>Many human activities, from holding a conversation to playing music, have their basis in our ability to extract meaningful temporal structure from incoming sounds. For rhythmical structures in particular, humans identify key events and extract the underlying ‘beat’ of the auditory signals [
<xref rid="RSPB20140751C1" ref-type="bibr">1</xref>
]. Such auditory rhythms promote movements ‘in time’ with the beat with little apparent effort [
<xref rid="RSPB20140751C2" ref-type="bibr">2</xref>
,
<xref rid="RSPB20140751C3" ref-type="bibr">3</xref>
], as demonstrated through the capacity to dance or play along to music comprising multiple rhythmic streams. Yet, for such complex stimuli, it is unknown how temporal events are extracted and chosen as the targets to which movements are synchronized.</p>
<p>The complexity of incoming auditory signals is partially reduced by early sensory processing that filters out irrelevant auditory information [
<xref rid="RSPB20140751C4" ref-type="bibr">4</xref>
]. Nevertheless, auditory signals of interest may still consist of multiple components. For instance, keeping in time with other players in a quartet involves sensing different sequences of tones (e.g. the notes played on the viola versus cello) that share an underlying rhythm but are likely to fluctuate in relative phase, depending on how well each player can remain in time with the rest of the group [
<xref rid="RSPB20140751C5" ref-type="bibr">5</xref>
]. Based on these discrepancies, the brain must determine whether to integrate relevant components into a single stream or treat them as separate [
<xref rid="RSPB20140751C6" ref-type="bibr">6</xref>
].</p>
<p>In multisensory settings, the decision to integrate cues or treat them as separate sources is well captured using the Bayesian framework of causal inference [
<xref rid="RSPB20140751C7" ref-type="bibr">7</xref>
<xref rid="RSPB20140751C10" ref-type="bibr">10</xref>
], based on the statistical probability that sensory events relate to a single event in the environment versus multiple events. If there is evidence that sensations originate from a single environmental cause, the sensory cues are combined in a statistically optimal way across modalities to gain the best estimate of an object or event [
<xref rid="RSPB20140751C11" ref-type="bibr">11</xref>
<xref rid="RSPB20140751C13" ref-type="bibr">13</xref>
]. Within a single modality, there is also evidence for statistically optimal combination, for instance in combining distinct visual features such as disparity, motion or texture [
<xref rid="RSPB20140751C14" ref-type="bibr">14</xref>
<xref rid="RSPB20140751C16" ref-type="bibr">16</xref>
]. Critically, however, such integration is believed to be mandatory. Here, we test for the integration of within-modality auditory cues to time. We evaluate whether the brain applies a causal inference process to determine the circumstances under which auditory sequences (distinguished only by tone frequency) should be integrated into a coherent estimate of rhythm or separated into distinct events.</p>
<p>First, we develop a Bayesian causal inference model for movement synchronization that describes the scenarios under which a regular stream of sensory cues from same-modality sources are integrated. Then, we test the model by asking participants to tap their index finger in time to auditory sequences that comprised two auditory metronomes presented simultaneously with equal mean tempo. To test the causal inference process, we manipulated these cues in two different ways. First, we applied a phase offset between the metronomes, such that the beats from one metronome occurred shortly before the other. Second, we varied the temporal reliability of the metronomes such that rather than having isochronous beat onsets, they varied randomly around the (underlying) isochronous onset time (referred to as ‘jitter’). By adding small levels of jitter to one metronome and large levels of jitter to the other, we manipulated the relative reliability of the two metronome sources [
<xref rid="RSPB20140751C17" ref-type="bibr">17</xref>
]. Thus, we were able to observe changes in the timing and variability of participants' finger taps and assess the conditions under which the cues appeared to be integrated or treated as separate. Finally, we test four models and fit the simulated results to the experimental data to investigate whether causal inference best describes the observed results. We found that a causal inference model that adjusts for a consistent phase offset between cues demonstrated a fit close to the empirical data, exceeding that of alternative models based on the exclusive integration or exclusive separation of cues.</p>
</sec>
<sec sec-type="materials|methods" id="s2">
<label>2.</label>
<title>Material and methods</title>
<sec id="s2a">
<label>(a)</label>
<title>Participants</title>
<p>Staff and students from the University of Birmingham were recruited to participate in the experiment. Participants provided informed consent and were screened for sensory and motor deficits. Nine participants (seven male, four left-handed, mean age: 29.8 ± 5.9 years) took part. Five participants had some musical expertise (i.e. currently play a musical instrument; mean years of experience = 10.8).</p>
</sec>
<sec id="s2b">
<label>(b)</label>
<title>Experimental set-up</title>
<p>Participants sat at a table wearing a pair of headphones (Sennheiser EH150) through which the auditory metronome cues were presented. They tapped the index finger of their dominant hand in time to the metronome on to a wooden surface mounted on a force sensor. Responses were registered using a data acquisition device (USB-6229, National Instruments Inc., USA). Metronome presentation was controlled using the M
<sc>at</sc>
TAP toolbox [
<xref rid="RSPB20140751C18" ref-type="bibr">18</xref>
].</p>
</sec>
<sec id="s2c">
<label>(c)</label>
<title>Metronome stimuli</title>
<p>The auditory stimuli consisted of two independently controlled auditory metronomes (metronome A, pitch 700 Hz; and metronome B, pitch 1400 Hz;
<xref ref-type="fig" rid="RSPB20140751F1">figure 1</xref>
<italic>a</italic>
). The metronomes were offset in phase (0, 50, 100 or 150 ms), such that metronome B followed metronome A (pilot testing established that pitch order (low–high versus high–low) did not influence synchronization behaviour). Metronome beats lasted 30 ms and the period was varied randomly (by the same amount for A and B) across trials with a value between 470 and 530 ms, to minimize learning of tempo across trials and encourage adaptive correction within trials.
<fig id="RSPB20140751F1" position="float">
<label>Figure 1.</label>
<caption>
<p>(
<italic>a</italic>
) An illustration showing the timing relationships between the two metronomes and the calculation of asynchronies. The square pulses show the regular onset time of the metronome beats (before jitter applied). Both metronomes had the same underlying period, and metronome B had a phase-offset delay from metronome A of
<italic>ϕ</italic>
= 0, 50, 100 or 150 ms. To create temporal uncertainty, a random perturbation (‘jitter’) was added to the regular onset time of each beat. The s.d. of the jitter distribution differed between metronomes (A, B: {0, 0 ms}; {10, 50 ms}; {50, 10 ms} (depicted)). Asynchronies (
<italic>A</italic>
) were calculated between the finger-tap onsets and the onset of metronome A, before jitter was applied. (
<italic>b</italic>
) Probability distributions of metronome beat onsets. Illustration showing the expected distributions of beat onsets relative to the regular beat onset of metronome A (0 ms). Distributions are shown for each phase offset (row) and jitter condition (column). (Online version in colour.)</p>
</caption>
<graphic xlink:href="rspb20140751-g1"></graphic>
</fig>
</p>
<p>To manipulate metronome reliability, we applied temporal jitter independently to each metronome (
<xref ref-type="fig" rid="RSPB20140751F1">figure 1</xref>
<italic>b</italic>
) by perturbing the regular onset of the metronome beat by a random value sampled from a Gaussian distribution (
<italic>μ</italic>
= 0;
<italic>σ</italic>
= 10 or 50 ms). We tested the effect of differing reliabilities between the two metronomes and whether this would influence finger movement onsets and variability. Hence, we used the following jitter conditions (s.d. for metronome A and metronome B): {10, 50 ms}, {50, 10 ms} and a baseline condition where both metronomes were reliable {0, 0 ms}.</p>
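As an illustration of this stimulus design, here is a minimal sketch in Python (an editorial reconstruction assuming numpy; the experiment itself used the MatTAP toolbox, and all names below are illustrative):

import numpy as np

def make_metronomes(n_beats=30, period=0.5, phase_offset=0.100,
                    jitter_sd=(0.010, 0.050), seed=0):
    # Onset times (s) for two equal-tempo metronomes; B lags A by phase_offset.
    # Per-beat Gaussian jitter with per-metronome s.d. models the reliability
    # manipulation; per trial, period would be drawn from 0.470-0.530 s.
    rng = np.random.default_rng(seed)
    base = np.arange(n_beats) * period
    onsets_a = base + rng.normal(0.0, jitter_sd[0], n_beats)
    onsets_b = base + phase_offset + rng.normal(0.0, jitter_sd[1], n_beats)
    return onsets_a, onsets_b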
<p>Participants were not explicitly informed that the auditory cues consisted of two metronomes. Instead, they were instructed to ‘tap in time to the metronome’ with some trials appearing ‘harder to tap along to than others’. This instruction was intended to encourage participants to use both cues and not attempt to ignore one in favour of the other.</p>
<p>Participants completed 10 trials per condition (12 conditions in all), with each trial consisting of 30 metronome beats. Conditions were randomized across trials to minimize any prior expectation about the metronome statistics building up across trials. To allow participants to build up prior knowledge of the metronome within a trial, analyses were performed on the tap-metronome asynchronies of beats 15–28 (the last two were ignored to discount anticipation or termination effects at the trial end [
<xref rid="RSPB20140751C19" ref-type="bibr">19</xref>
]).</p>
<p>To determine baseline movement synchronization behaviour, we also presented 30 trials on which a single metronome was presented, where the degree of jitter applied was varied across trials (0, 10 or 50 ms).</p>
</sec>
<sec id="s2d">
<label>(d)</label>
<title>Analysis</title>
<p>To quantify synchronization behaviour, we measured the time difference between the onset of the participants' finger taps and the metronome beat (asynchrony;
<xref ref-type="fig" rid="RSPB20140751F1">figure 1</xref>
<italic>a</italic>
). We referenced all metronome beats relative to the onset of metronome A (prior to any jitter perturbations) to provide a consistent, static reference point for all trials regardless of condition. Negative asynchronies indicated that the finger tap preceded the onset of the beat. We calculated the s.d. of asynchronies within a trial across participants and conditions. A repeated measures ANOVA was used to determine any significant effects of phase offset or jitter on the asynchrony s.d. We quantified the distribution of asynchronies for each participant, grouped by condition and tested the experimental data for unimodality or bimodality using Gaussian mixture models (GMMs) with either one or two centres. Mean asynchronies were then calculated based on the best fitting GMM distribution.</p>
<p>For comparison with the simulated asynchrony distributions of the models we tested, we fit the empirical data with probability density functions (PDFs). These were estimated using a Gaussian kernel density estimator (KDE) [
<xref rid="RSPB20140751C20" ref-type="bibr">20</xref>
] method implemented in M
<sc>atlab</sc>
[
<xref rid="RSPB20140751C21" ref-type="bibr">21</xref>
].</p>
</sec>
<sec id="s2e">
<label>(e)</label>
<title>Sensorimotor synchronization based on a causal inference model</title>
<p>Here, we outline the features of the simulated task where an observer uses Bayesian causal inference to synchronize their movements. An overview is shown schematically in
<xref ref-type="fig" rid="RSPB20140751F2">figure 2</xref>
while the full model derivation is provided in the electronic supplementary material, A.
<fig id="RSPB20140751F2" position="float">
<label>Figure 2.</label>
<caption>
<p>Schematic of the causal chain. (
<italic>a</italic>
) Common stream: signals are generated from a common source (
<italic>s</italic>
<sub>A</sub>
=
<italic>s</italic>
<sub>B</sub>
=
<italic>s</italic>
). The sensory likelihood distributions for the two metronome signals are modelled by Gaussian distributions
<italic>N</italic>
(
<italic>s</italic>
,
<italic>σ</italic>
<sub>A</sub>
) and
<italic>N</italic>
(
<italic>s</italic>
,
<italic>σ</italic>
<sub>B</sub>
), respectively, where
<italic>σ</italic>
<sub>A</sub>
,
<italic>σ</italic>
<sub>B</sub>
describes the uncertainty in the sensory registration. The observer has an expectation of where the
<italic>m</italic>
th beat should occur, centred around a time
<italic>μ</italic>
<sub>p</sub>
relative to the true beat
<italic>s</italic>
. This prior expectation is equal to the difference between their beat onset estimate
<italic>ŝ</italic>
and the true onset time
<italic>s</italic>
on the preceding (
<italic>m</italic>
− 1th) beat and is modelled as a Gaussian distribution
<italic>N</italic>
(
<italic>μ</italic>
<sub>p</sub>
,
<italic>σ</italic>
<sub>p</sub>
), where
<italic>σ</italic>
<sub>p</sub>
defines the strength of the prior. The observer estimates the cue onset times
<italic>t</italic>
<sub>A</sub>
and
<italic>t</italic>
<sub>B</sub>
, which are sampled from the likelihood distributions. Using the information from
<italic>μ</italic>
<sub>p</sub>
,
<italic>t</italic>
<sub>A</sub>
and
<italic>t</italic>
<sub>B</sub>
, the causal inference process results in evidence that the signals define a common beat (
<italic>C</italic>
= 1), and the estimated signal onset time
<italic>ŝ</italic>
is calculated as a weighted average of
<italic>μ</italic>
<sub>p</sub>
,
<italic>t</italic>
<sub>A</sub>
and
<italic>t</italic>
<sub>B</sub>
. The reliability of the three distributions
<inline-formula>
<inline-graphic xlink:href="rspb20140751-i1.jpg"></inline-graphic>
</inline-formula>
defines the relative weightings. The observer plans their movement to coincide with the estimated beat
<italic>ŝ</italic>
, introducing motor noise
<italic>σ</italic>
<sub>M</sub>
, and an anticipation effect
<italic>d</italic>
, which results in the observable asynchrony between the movement
<italic>r</italic>
, and the true beat
<italic>s</italic>
. (
<italic>b</italic>
) Independent streams: two signals are generated from independent sources (
<italic>s</italic>
<sub>A</sub>
and
<italic>s</italic>
<sub>B</sub>
). The sensory estimation process is the same as for (
<italic>a</italic>
); however, the prior is defined as the difference between their beat onset estimate
<italic>ŝ</italic>
and the true onset time
<italic>s</italic>
<sub>A</sub>
on the
<italic>m</italic>
− 1th beat. Based on
<italic>μ</italic>
<sub>p</sub>
,
<italic>t</italic>
<sub>A</sub>
and
<italic>t</italic>
<sub>B</sub>
, the causal inference model has more evidence that signals are independent (
<italic>C</italic>
= 2). Signal onset estimates
<italic>ŝ</italic>
<sub>A</sub>
and
<italic>ŝ</italic>
<sub>B</sub>
are therefore calculated independently as the weighted average of
<italic>μ</italic>
<sub>p</sub>
,
<italic>t</italic>
<sub>A</sub>
and
<italic>μ</italic>
<sub>p</sub>
,
<italic>t</italic>
<sub>B</sub>
, respectively. Similarly, the relative weightings are proportional to their reliabilities,
<inline-formula>
<inline-graphic xlink:href="rspb20140751-i2.jpg"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="rspb20140751-i3.jpg"></inline-graphic>
</inline-formula>
. As the observer has two estimated signal onsets, they select one with which to synchronize their movement. That is, the observer will define the signal onset estimate to be either
<italic>ŝ</italic>
=
<italic>ŝ</italic>
<sub>A</sub>
(as depicted in figure), or
<italic>ŝ</italic>
=
<italic>ŝ</italic>
<sub>B</sub>
. This choice varies for each beat, with observers' idiosyncratic preferences outweighing the relative reliability of the two signals. The observer plans their movement to coincide with the estimated beat
<italic>ŝ</italic>
, introducing motor noise
<italic>σ</italic>
<sub>M</sub>
, and an anticipation effect
<italic>d</italic>
. This results in the observable asynchrony between the movement
<italic>r</italic>
, and the referenced true beat (always
<italic>s</italic>
<sub>A</sub>
, to match experimental analyses). (Online version in colour.)</p>
</caption>
<graphic xlink:href="rspb20140751-g2"></graphic>
</fig>
</p>
<p>The observer's task is to synchronize their movements to rhythmic auditory cues presented to them. The cues consist of two discrete tones of different pitch (
<italic>s</italic>
<sub>A</sub>
and
<italic>s</italic>
<sub>B</sub>
). The observer must estimate the onsets of the underlying beats produced by the auditory cues to make movements in synchrony with those beats. They do this using a causal inference process based on: (i) the likelihood of the onsets of the two auditory cues, whose true onset times are corrupted by sensory noise; and (ii) the prior expectation of where the beat will occur, which is based on the previous beat onset estimate [
<xref rid="RSPB20140751C22" ref-type="bibr">22</xref>
,
<xref rid="RSPB20140751C23" ref-type="bibr">23</xref>
]. The causal inference process allows the observer to determine whether the two auditory cues should form a single common beat and hence combine the likelihood of the two beats with the prior to obtain the estimated onset time of that beat (
<italic>ŝ</italic>
;
<xref ref-type="fig" rid="RSPB20140751F2">figure 2</xref>
<italic>a</italic>
). Alternatively, if the causal inference process indicates that the two auditory cues are in fact independent, then two beat onset times are estimated (
<italic>ŝ</italic>
<sub>A</sub>
,
<italic>ŝ</italic>
<sub>B</sub>
) based on the prior and likelihood of each independent cue onset (
<xref ref-type="fig" rid="RSPB20140751F2">figure 2</xref>
<italic>b</italic>
). In this latter scenario, the observer must choose one of the estimated beat onsets as the target for movement synchronization. Here, we assume the observer has a bias in selecting one stream over the other (regardless of the cue statistics). Based on this bias, the observer will select auditory cues from stream A on a certain proportion of trials, and stream B for the complementary remainder of trials. The beat from the selected stream defines the single beat onset estimate (
<italic>ŝ</italic>
). Finally, the observer produces a motor action which is aligned with the estimated beat onset time, but subject to a negative motor delay (representing the anticipation effect observed in many sensorimotor synchronization studies; see [
<xref rid="RSPB20140751C24" ref-type="bibr">24</xref>
]) and motor noise. The resulting output we observe is an asynchrony between the observer's movement and the ‘actual’ beat which, to be consistent with the experimental analyses, we take to be the true onset time of auditory cue
<italic>s</italic>
<sub>A</sub>
.</p>
</sec>
<sec id="s2f">
<label>(f)</label>
<title>Model comparisons</title>
<p>We compared three alternative models to the causal inference model described above (denoted CI), to see which best described the experimental data. First, we tested a model of mandatory integration (MI), where the observer always considers the two cues to form a single common beat, regardless of the cue statistics. Similarly, we considered a mandatory separation (MS) model, where the observer always deems the cues independent and estimates the onset for two corresponding independent beats (subsequently selecting the preferred beat for movement synchronization). Finally, we tested an alternative causal inference model that included ‘phase-offset adaptation’ (CI
<sub>PA</sub>
), where any
<italic>consistent</italic>
phase offset between cue A and B across beats is accounted for in the inference process and hence is disregarded in the judgement of whether the cues form a single beat or independent beats. We tested this extra model to see whether the fixed phase offsets we applied in the experimental conditions resulted in participants adjusting their judgement of the level of deviation required between cues before they were considered separate beats.</p>
</sec>
<sec id="s2g">
<label>(g)</label>
<title>Parameter fitting to participant data</title>
<p>We developed simulations to test whether the models detailed above could describe the experimental results. For each model, we generated 2000 simulated finger-tap asynchronies for an observer synchronizing to auditory rhythmic cues that matched the statistics of the experimental phase offset and jitter conditions. The simulated asynchronies for each condition were converted into likelihood functions for the model using an optimized Gaussian KDE [
<xref rid="RSPB20140751C20" ref-type="bibr">20</xref>
]. Three free parameters were used to fit each participant's asynchrony data to the model output likelihood function for each condition (see the electronic supplementary material, A): the strength of the prior expectation of the time of the next beat (
<italic>σ</italic>
<sub>p</sub>
; range (10, 500) ms); the prior probability that the two cues will form a common single beat (
<italic>p</italic>
<sub>single</sub>
; range (0, 1), fixed to 0 for the MS model and 1 for the MI model) and the negative asynchrony offset (
<italic>d</italic>
; range (−100, 100) ms). A fourth free parameter,
<italic>β</italic>
was fitted to experimental data from a single condition (phase offset: 150 ms, jitter: {0, 0 ms}) to describe the proportion of time the observer shows preference to cues from stream A versus stream B. This parameter was applied to the remaining conditions. We used a global search algorithm [
<xref rid="RSPB20140751C25" ref-type="bibr">25</xref>
] that sequentially interchanged data between four different meta-heuristic optimizers (genetic algorithm, particle swarm optimization, differential evolution and simulated annealing) to ensure robust parameter optimization. The fitting algorithm minimized the negative log-likelihood of each participant's data for each simulated condition.</p>
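A stripped-down version of this fitting loop (assuming scipy, using only one of the four meta-heuristics; `simulate` is a hypothetical stand-in for the model simulation described above) might read:

import numpy as np
from scipy.optimize import differential_evolution
from scipy.stats import gaussian_kde

def neg_log_likelihood(params, observed, simulate):
    # NLL of observed asynchronies under a KDE of simulated asynchronies.
    sigma_p, p_single, d = params
    sim = simulate(sigma_p=sigma_p, p_single=p_single, d=d, n=2000)
    density = gaussian_kde(sim)(observed)
    return -np.sum(np.log(np.clip(density, 1e-12, None)))

def fit_participant(observed, simulate):
    bounds = [(0.010, 0.500),    # sigma_p: strength of the prior (s)
              (0.0, 1.0),        # p_single: prior probability of a common beat
              (-0.100, 0.100)]   # d: negative asynchrony offset (s)
    return differential_evolution(neg_log_likelihood, bounds,
                                  args=(observed, simulate), seed=1)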
<p>To test the relative fit of the models to the data, we used the Bayesian information criterion (BIC; [
<xref rid="RSPB20140751C26" ref-type="bibr">26</xref>
]). This measure shows the log-likelihood of the data given the model and penalizes for the number of free parameters. BIC scores were summed across conditions for each participant. The differences in BIC scores between models were calculated and averaged across participants.</p>
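For reference, the BIC penalizes the maximized log-likelihood by the number of free parameters; its standard definition (not specific to this paper) in code:

import numpy as np

def bic(log_likelihood, n_params, n_obs):
    # Bayesian information criterion: lower values indicate a better model.
    return n_params * np.log(n_obs) - 2.0 * log_likelihood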
<p>Similarly, we calculated the goodness of fit using the BIC. To get an overall equivalent
<italic>r
<sup>2</sup>
</italic>
value, BIC values (
<italic>L</italic>
(
<italic>θ</italic>
)) were compared to two points of reference [
<xref rid="RSPB20140751C27" ref-type="bibr">27</xref>
]. The first point of reference was the BIC of the data to a probability distribution of the data itself (
<italic>L</italic>
(Max); fitted using a Gaussian KDE)—i.e. the best fit achievable. The second was the BIC of the data to a random distribution (
<italic>L</italic>
(Rand); fitted using a cubic spline)—i.e. giving a worst case fit. The goodness of fit
<italic>r
<sup>2</sup>
</italic>
was scaled to a value between 0 and 1 as follows:
<disp-formula id="RSPB20140751M21">
<label>2.1</label>
<graphic xlink:href="rspb20140751-e1.jpg" position="float"></graphic>
</disp-formula>
</p>
</sec>
</sec>
<sec sec-type="results" id="s3">
<label>3.</label>
<title>Results</title>
<sec id="s3a">
<label>(a)</label>
<title>Experimental results</title>
<p>To test the role of a causal inference process when synchronizing movements to multiple streams of auditory events, we asked participants to tap their index finger in time with beats defined by two metronomes (A and B) that could differ in their temporal reliability (jitter) and their phase offset (B relative to A). We measured the time difference (asynchrony) between the onsets of metronome A and the corresponding finger taps. A standard approach to quantifying synchrony performance is to calculate the variability (s.d.) of asynchronies across conditions [
<xref rid="RSPB20140751C28" ref-type="bibr">28</xref>
]. Here, we initially use that approach to identify the effect of the jitter and phase-offset conditions on participants' performance. Subsequently, we apply more detailed analyses on the asynchrony distributions.</p>
<p>We expected that when tapping to single metronome beats, the asynchrony s.d. would increase with increasing jitter applied to the metronome. By contrast, when two metronome streams were presented in parallel, one with high jitter, the other with low jitter, we predicted that participants would integrate the cues and the resulting asynchrony s.d. would remain low [
<xref rid="RSPB20140751C17" ref-type="bibr">17</xref>
,
<xref rid="RSPB20140751C29" ref-type="bibr">29</xref>
]. We further predicted that this integration effect would reduce with increasing phase offset, as participants become more ready to treat the cues as originating from independent beats. Under this scenario, we expected the asynchrony s.d. to increase with increasing phase offset between metronomes.</p>
<p>First, we manipulated metronome jitter to verify that asynchrony variability was affected, presenting a single metronome jittered by 0, 10 or 50 ms. We measured the asynchrony variability (s.d.) of the finger taps relative to the underlying unjittered metronome beats. As expected, increasing the jitter resulted in higher asynchrony variability (
<italic>F</italic>
<sub>2,16</sub>
= 37.6,
<italic>p</italic>
< 0.001;
<xref ref-type="fig" rid="RSPB20140751F3">figure 3</xref>
<italic>a</italic>
).
<fig id="RSPB20140751F3" position="float">
<label>Figure 3.</label>
<caption>
<p>(
<italic>a</italic>
) Asynchrony s.d. to single metronome presentations. The metronome was jittered by 0, 10 or 50 ms. Error bars show s.e.m. (
<italic>b</italic>
) Asynchrony s.d. in the dual metronome conditions as a function of phase offset and averaged across jitter conditions. Error bars show s.e.m. The horizontal grey bar indicates the asynchrony s.d. in the isochronous single metronome condition, ±1 s.e.m. (Online version in colour.)</p>
</caption>
<graphic xlink:href="rspb20140751-g3"></graphic>
</fig>
</p>
<p>Next, we considered synchronization performance when two metronomes (A and B) were simultaneously presented, one with high levels of jitter (50 ms) and the other only slightly jittered (10 ms). In particular, we focused on the zero phase-offset conditions and compared the asynchrony s.d. (averaged over the two jitter conditions: {10, 50 ms} and {50, 10 ms}) to that observed in the unjittered single metronome condition. Using a paired
<italic>t</italic>
-test, we found no significant difference between these two conditions (
<italic>t</italic>
<sub>8</sub>
= −0.71,
<italic>p</italic>
= 0.497). Hence, in contrast to the single metronome conditions where jitter substantially impacted on participants' performance, asynchrony variability remained low in the dual metronome condition, even though one of the metronomes was highly jittered. We further found no main effect of jitter on asynchrony s.d. (
<italic>F</italic>
<sub>1.2,9.5</sub>
= 1.5,
<italic>p</italic>
= 0.255) when we analysed all conditions for the dual metronome presentations. In other words, asynchrony variability remained equally low whether one of the metronomes was highly jittered or both were isochronous. These results highlight that participants were able to take advantage of the more reliable metronome to maintain their synchronization performance in the dual metronome conditions.</p>
<p>We found that, as predicted, an increasing phase offset between the two metronomes increased the asynchrony s.d. (
<italic>F</italic>
<sub>1.6,12.8</sub>
= 27.1,
<italic>p</italic>
< 0.001;
<xref ref-type="fig" rid="RSPB20140751F3">figure 3</xref>
<italic>b</italic>
). While there was no difference between 0 and 50 ms phase offsets (
<italic>p</italic>
= 0.311), asynchrony s.d. increased significantly at phase offsets of 100 ms (
<italic>p</italic>
= 0.042) and 150 ms (
<italic>p</italic>
= 0.001). We suggest that these results indicate two different strategies in participants' synchronization. At low phase offsets, participants are integrating the two cues into a single beat estimate, with the outcome that the asynchrony s.d. remains low. However, as the phase offset increases, participants are treating the cues independently and switching between them. This switching incurs a substantial increase in asynchrony variability, regardless of the jitter applied to each metronome.</p>
<p>To examine these apparent strategies in more detail, we considered the distributions of asynchronies in each condition. Visual inspection indicated unimodal distributions at low phase offsets (suggesting integration of cues) and bimodal distributions at larger offsets (suggesting independent targeting of the cues) (
<xref ref-type="fig" rid="RSPB20140751F4">figure 4</xref>
<italic>a</italic>
). We quantified this observation by fitting two GMMs to each participant's data: either with one centre (indicating a unimodal distribution) or two centres (indicating a bimodal distribution). The difference between the BIC values for the two GMM models was calculated to establish which provided a better fit. We found that at low phase offsets (0, 50 ms), a unimodal distribution was more likely, while bimodal distributions were more likely at 100 and 150 ms phase offsets (
<xref ref-type="fig" rid="RSPB20140751F4">figure 4</xref>
<italic>b</italic>
). Hence, it appeared that when the metronome cues were separated by an offset of around 100 ms or greater, participants did not treat them as a common beat, but rather as independent beats. The bimodal distributions were a result of participants switching their finger taps to be in synchrony with either of the two sources.
<fig id="RSPB20140751F4" position="float">
<label>Figure 4.</label>
<caption>
<p>(
<italic>a</italic>
) Histograms of asynchronies from the experimental data. Negative asynchronies indicate the tap preceded the onset of metronome A. Histograms are shown for each phase-offset condition (rows). Each column plots histograms for the different jitter conditions: {0, 0 ms}, {10, 50 ms} and {50, 10 ms}. (
<italic>b</italic>
) Difference in BIC values for GMM fits as a function of phase offset. GMMs were fitted to each participant's data with either one or two centres. The goodness of fit of the data to each of the GMMs was measured using the BIC. The difference in BIC was calculated across conditions for each participant and averaged to determine whether the histogram of data was more likely to originate from one or two distributions. Negative values indicate a better fit to the single-centred GMM; positive values indicate a better fit to the two-centred GMM. Error bars show s.e.m. (
<italic>c</italic>
) Mean asynchronies based on GMM centres. Mean asynchronies were calculated based on the GMM centres that fitted best to each participant's data. The figure shows the mean asynchronies for unimodal distributions (diamond symbols) except where more than half the participants demonstrated better fits to bimodal distributions. Where this occurs, we plot the mean of both centres (lower value, squares; higher value, circles). Plots are shown for each jitter condition. Error bars show s.e.m. (Online version in colour.)</p>
</caption>
<graphic xlink:href="rspb20140751-g4"></graphic>
</fig>
</p>
<p>We further calculated the mean asynchronies to understand how the timing of participants' movements relative to the onsets of the metronome was affected by the experimental manipulations. We observed changes in mean asynchrony that depended on whether cues best fit a single or dual centred GMM, highlighting the different tapping strategies implemented by participants. For the low phase offsets, mean asynchrony was more positive for the 50 ms phase offset than the 0 ms offset (
<italic>F</italic>
<sub>1,8</sub>
= 9.31,
<italic>p</italic>
= 0.016;
<xref ref-type="fig" rid="RSPB20140751F4">figure 4</xref>
<italic>c</italic>
), indicating that participants were influenced by both metronome streams and hence integrating the cues. In situations where participants were more likely to show bimodal asynchrony distributions, we observed that one distribution was centred around a negative asynchrony while the other was positive, close to the onset of the second metronome, highlighting the tendency to follow one cue or the other.</p>
</sec>
<sec id="s3b">
<label>(b)</label>
<title>Model fits to the experimental data</title>
<p>The experimental data suggest that human participants apply different strategies under the different experimental conditions: at low phase offsets, data is unimodal with low variance regardless of jitter condition, suggesting integration of the signals takes place. At high phase offsets, data are bimodal and suggest switching behaviour in the use of the two metronomes. This indicates that neither a scenario based on exclusively integrating the timing signals nor one based exclusively on selecting one signal over the other is sufficient to explain the participants' behaviour. Formally, we tested two causal inference models and compared them against models of MI and MS. The first causal inference model (CI) inferred whether the auditory cues originated from a single beat or independent beats using the deviations between the signals caused by both a constant phase-offset and jitter manipulations; the second (CI
<sub>PA</sub>
) assumed participants adapted to the consistent phase offset between the cues and hence only based their inference on deviations due to the jitter. Using a global search optimization algorithm [
<xref rid="RSPB20140751C25" ref-type="bibr">25</xref>
], we fit the four free parameters (see the electronic supplementary material, A, table S2) by minimizing the BIC of each participant's data for each condition and model. Summing the BIC across conditions and averaging for each participant, we were able to compare how well each model explained the data.</p>
<p>We found that, overall, both causal inference models (CI and CI
<sub>PA</sub>
) outperformed the MI and MS models (
<xref ref-type="fig" rid="RSPB20140751F5">figure 5</xref>
<italic>a</italic>
) in terms of the BIC. Specifically, we found that the causal inference models outperformed MI in all conditions and MS in all but one condition (phase offset: 0 ms, jitter: {0, 0 ms}; see
<xref ref-type="table" rid="RSPB20140751TB1">table 1</xref>
). The general goodness of fit measure indicated the simulated data fit well with the experimental data (
<xref ref-type="fig" rid="RSPB20140751F5">figure 5</xref>
<italic>b</italic>
) and confirmed differences between the models (
<italic>F</italic>
<sub>3,24</sub>
= 2.736,
<italic>p</italic>
< 0.001), with a significantly higher
<italic>r
<sup>2</sup>
</italic>
for the CI
<sub>PA</sub>
model than for the MS and MI models (
<xref ref-type="fig" rid="RSPB20140751F5">figure 5</xref>
<italic>c</italic>
).
<table-wrap id="RSPB20140751TB1" position="float">
<label>Table 1.</label>
<caption>
<p>Mean difference in BIC between the CI
<sub>PA</sub>
model and the mandatory separation (MS) model for each condition. (Positive values indicate the CI
<sub>PA</sub>
model is a better fit to the data.)</p>
</caption>
<table frame="hsides" rules="groups">
<colgroup span="1">
<col align="left" span="1"></col>
<col align="char" char="." span="1"></col>
<col align="char" char="." span="1"></col>
<col align="char" char="." span="1"></col>
</colgroup>
<thead valign="bottom">
<tr>
<th align="left" rowspan="2" colspan="1">offset (ms)</th>
<th align="left" colspan="3" rowspan="1">jitter (A, B; ms)
<hr></hr>
</th>
</tr>
<tr>
<th align="left" rowspan="1" colspan="1">0,0</th>
<th align="left" rowspan="1" colspan="1">10,50</th>
<th align="left" rowspan="1" colspan="1">50,10</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="1" colspan="1">0</td>
<td rowspan="1" colspan="1">−18.2</td>
<td rowspan="1" colspan="1">3.5</td>
<td rowspan="1" colspan="1">8.3</td>
</tr>
<tr>
<td rowspan="1" colspan="1">50</td>
<td rowspan="1" colspan="1">21.4</td>
<td rowspan="1" colspan="1">13.4</td>
<td rowspan="1" colspan="1">18.5</td>
</tr>
<tr>
<td rowspan="1" colspan="1">100</td>
<td rowspan="1" colspan="1">48.1</td>
<td rowspan="1" colspan="1">40.7</td>
<td rowspan="1" colspan="1">45.2</td>
</tr>
<tr>
<td rowspan="1" colspan="1">150</td>
<td rowspan="1" colspan="1">130.9</td>
<td rowspan="1" colspan="1">104.7</td>
<td rowspan="1" colspan="1">23.0</td>
</tr>
</tbody>
</table>
</table-wrap>
<fig id="RSPB20140751F5" position="float">
<label>Figure 5.</label>
<caption>
<p>(
<italic>a</italic>
) Difference in BIC scores for model fits, relative to the causal inference model with phase-offset adaptation (CI
<sub>PA</sub>
). BIC scores were summed across conditions for each participant. The three alternative models compared were: causal inference without phase adaptation (CI), mandatory separation (MS) of the cues and mandatory integration (MI) of the cues. The difference between BIC scores for each model was calculated and averaged across participants. A positive value indicates that the model is a worse fit to the data compared with CI
<sub>PA</sub>
. (
<italic>b</italic>
) Simulated (CI
<sub>PA</sub>
model) versus empirical PDFs. The empirical asynchronies (black dashed line) and simulated asynchronies (shaded solid line) were pooled across all participants and converted to PDFs using a Gaussian kernel density estimation (KDE) algorithm. PDFs are shown for each phase-offset condition (rows) and jitter condition (columns). (
<italic>c</italic>
) Goodness of fit of each model to the experimental data. Goodness of fit was calculated as an index between 0 and 1 by comparing the BIC of the model fit to the data (
<italic>L</italic>
(
<italic>θ</italic>
)) relative to: (i) a probability distribution of the data itself (
<italic>L</italic>
(Max)), and (ii) a random distribution
<italic>L</italic>
(Rand). Error bars show s.e.m. (
<italic>d</italic>
) Mean value of the prior probability of a single common beat (
<italic>p</italic>
<sub>single</sub>
) for each offset condition, averaged across participants and jitter conditions. The plot highlights the difference in
<italic>p</italic>
<sub>single</sub>
values for the CI (squares) versus CI
<sub>PA</sub>
(circles) models. Error bars show s.e.m. (Online version in colour.)</p>
</caption>
<graphic xlink:href="rspb20140751-g5"></graphic>
</fig>
</p>
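<p>The goodness-of-fit index of figure 5<italic>c</italic> normalizes the model score between two anchors: the data's own distribution (<italic>L</italic>(Max), index 1) and a random distribution (<italic>L</italic>(Rand), index 0). A linear normalization of this kind is sketched below; the exact definition used in the paper is given in its electronic supplementary material, so the formula here is an assumption.</p>
<preformat>
def fit_index(L_theta, L_max, L_rand):
    """Normalized goodness of fit in [0, 1]: 0 means no better than a
    random distribution, 1 means as good as the data's own distribution.
    This linear form is an assumption, not the paper's exact definition."""
    return (L_theta - L_rand) / (L_max - L_rand)
</preformat>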
<p>Importantly, we found that the CI
<sub>PA</sub>
model outperformed the CI model in terms of the BIC. This was surprising, as the model describes the observer adapting to a fixed phase offset over the course of a trial and discounting this offset when determining whether or not the cues should be integrated. This appears contrary to our empirical results, in which participants' strategies overall depended on the level of phase offset between the cues. This apparent contradiction can be accounted for by individual differences between participants. In particular, the phase-offset threshold between integrating cues and treating them independently varied across participants, with a minority demonstrating better single-centre (i.e. integration) GMM fits to their distributions even in the 150 ms phase-offset conditions. While the CI
<sub>PA</sub>
model introduced an additional fixed parameter in the form of subtracting the phase offset from any estimated deviation between cues (see the electronic supplementary material, A), we noted that this was subsequently modulated by the free parameter
<italic>p</italic>
<sub>single</sub>
(representing the prior probability of a common single beat). While
<italic>p</italic>
<sub>single</sub>
remained relatively constant across offsets for the CI model, it dropped as a function of offset for the CI
<sub>PA</sub>
model (
<xref ref-type="fig" rid="RSPB20140751F5">figure 5</xref>
<italic>d</italic>
). The effect of a reduced
<italic>p</italic>
<sub>single</sub>
value was to reduce the likelihood of judging a given pair of signals to be a common single beat. Hence, the CI
<sub>PA</sub>
model was better able than the CI model to adapt to the individual phase-offset thresholds for integration that we observed across participants. This resulted in a better fit of the model to each participant's data.</p>
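<p>The inference step itself follows the causal inference scheme of Körding <italic>et al</italic>. [7]: the posterior probability that the two onsets share a common beat combines the prior <italic>p</italic><sub>single</sub> with how well the observed deviation between the cues is explained by a single beat. The sketch below assumes a Gaussian common-cause likelihood and a broad uniform independent-cause likelihood; these distributional choices and all names are illustrative, not the paper's exact formulation.</p>
<preformat>
from scipy.stats import norm, uniform

def p_common(d, sigma, p_single, offset=0.0, window=300.0):
    """Posterior probability that a pair of metronome onsets arise from
    a single common beat. d is the observed deviation between the cues
    (ms); under CI_PA the adapted phase offset is subtracted first
    (offset argument), under CI it is not."""
    d_eff = d - offset
    like_one = norm.pdf(d_eff, loc=0.0, scale=sigma)   # common beat
    like_two = uniform.pdf(d_eff, loc=-window / 2.0,
                           scale=window)               # independent beats
    return like_one * p_single / (like_one * p_single
                                  + like_two * (1.0 - p_single))
</preformat>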
</sec>
</sec>
<sec sec-type="discussion" id="s4">
<label>4.</label>
<title>Discussion</title>
<p>Many simple and skilled actions depend on moving in time with signals that are embedded in complex auditory streams. Often these streams share an underlying rhythm but differ in temporal regularity and phase. Here, we tested how human movement synchronization to two simultaneously presented auditory metronomes was affected by differences in the phase and regularity between the two timing signals. We found that when the phase offset was low, participants showed evidence of integrating the signals, minimizing the variability in the timing of their responses. By contrast, when phase offset was high, responses were more variable and there was alternation in the response cue used for synchronization (
<italic>viz</italic>
., bimodal distributions of movement asynchronies;
<xref ref-type="fig" rid="RSPB20140751F4">figure 4</xref>
). This behaviour was well captured by a Bayesian causal inference model. The model used four free parameters and was able to explain situations in which participants chose to integrate signals or keep them separate. We applied two causal inference models to the data, one considering phase-offset adaptation (CI
<sub>PA</sub>
) and one without adaptation (CI). Simulations indicated the causal inference models provided a better account for the experimental data than other models based on integration (MI) or selection (MS) only. The causal inference model incorporating phase-offset adaptation showed a better fit than the causal inference model without phase-offset adaptation. However, the free parameter describing the prior probability of considering the cues to form a single beat was found to be a function of phase offset in the CI
<sub>PA</sub>
model. This suggests that the improved fit resulted from this model being more flexible in accommodating differences in the phase-offset thresholds at which individual participants switched from integrating cues to treating them independently. Overall, the results suggest that humans exploit a Bayesian inference process to control movement timing in situations where the underlying beat structure of auditory signals needs to be resolved.</p>
<p>Evidence for optimal cue integration for multisensory signals has been demonstrated across a range of tasks in both spatial and temporal contexts [
<xref rid="RSPB20140751C11" ref-type="bibr">11</xref>
<xref rid="RSPB20140751C13" ref-type="bibr">13</xref>
]. Moreover, multisensory cue integration has been shown to result in improved motor performance in a movement timing task [
<xref rid="RSPB20140751C17" ref-type="bibr">17</xref>
,
<xref rid="RSPB20140751C29" ref-type="bibr">29</xref>
,
<xref rid="RSPB20140751C30" ref-type="bibr">30</xref>
]. This improvement was consistent with a maximum-likelihood model of integration based on the reliability of each sensory modality. However, an important step in this process involves deciding whether or not different sensory cues relate to the same environmental event: if not, the signals should be kept separate and not integrated [
<xref rid="RSPB20140751C7" ref-type="bibr">7</xref>
<xref rid="RSPB20140751C10" ref-type="bibr">10</xref>
]. Here, we focused on this process of deciding whether different sensory events relate to a common underlying beat. Our empirical data provided evidence that participants do integrate two auditory signals into a single estimate of a metronome beat, but the probability of integration was a function of both the time offset between the signals and their relative temporal regularity. For instance, when participants tapped to simultaneous beats defined by two metronomes, with one jittered by 10 ms and the other by 50 ms, the variability in finger-tap asynchronies remained equal to that when tapping to a single isochronous metronome. This demonstrated that synchronization variability was reduced (relative to the individual signals) by integrating information from the individual (noisy) timing cues. This would not be expected if participants had simply switched between timing cues. Moreover, if participants had simply chosen the more reliable signal, the phase offset between the metronomes would not have been important. By contrast, we found an effect of phase offset, with movement asynchronies for the high phase-offset conditions (100 and 150 ms) producing bimodal distributions of movement timing, indicating that the two streams were treated independently at these high phase offsets. This highlights that a strategy based on integration alone could not account for the participants' behaviour. We suppose two modes of behaviour, integration versus separation, governed by a causal inference process that decided whether to integrate signals or treat them independently based on their relative reliability and temporal separation. The evidence for causal inference taking place was further corroborated by the single anomalous condition in which causal inference did not provide the best fit. Namely, when the phase offset was zero and both metronomes were isochronous, we found that a causal inference model did not show a better fit than MS (
<xref ref-type="table" rid="RSPB20140751TB1">table 1</xref>
). This can be explained by the fact that participants effectively heard a single metronome cue in this condition (the tones overlap on every beat, forming a dyad), and therefore integration could not have taken place. The model could not account for this scenario: by integrating the cues, it predicted a lower variability than was observed, resulting in a poor fit. This predicted poor fit provides further evidence that causal inference takes place in the other conditions, where the fit is consistently better than that of the alternative models.</p>
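<p>The variance reduction referred to above is the standard maximum-likelihood cue-combination prediction: the integrated estimate has variance σ<sub>A</sub><sup>2</sup>σ<sub>B</sub><sup>2</sup>/(σ<sub>A</sub><sup>2</sup> + σ<sub>B</sub><sup>2</sup>), which is never larger than the variance of the better cue. A minimal sketch follows; the exact mapping from metronome jitter to sensory reliability in the paper's model is more detailed (see its electronic supplementary material).</p>
<preformat>
def integrated_sd(sd_a, sd_b):
    """Standard deviation of the reliability-weighted (MLE) combination
    of two independent timing cues; never larger than min(sd_a, sd_b)."""
    var = (sd_a ** 2 * sd_b ** 2) / (sd_a ** 2 + sd_b ** 2)
    return var ** 0.5

# Cues jittered with s.d. 10 ms and 50 ms:
# integrated_sd(10, 50) is approximately 9.8 ms, close to the 10 ms cue
# alone, consistent with asynchrony variability matching a single
# regular metronome.
</preformat>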
<p>Exposure to a repeated, consistent asynchrony between multisensory cues has been demonstrated to result in temporal recalibration, such that the point of subjective simultaneity is shifted to compensate for the offset [
<xref rid="RSPB20140751C31" ref-type="bibr">31</xref>
]. We considered whether participants would, in a similar way, learn the consistent phase offset between the beats and recalibrate in terms of judging whether those cues defined a common beat or not. We therefore tested two causal inference models: CI
<sub>PA</sub>
where the phase offset is adapted to (i.e. discounted) when inferring the causality of the signals, and CI, which included the phase offset when determining signal causality. We found that the CI
<sub>PA</sub>
model showed a better fit than the CI model. This was surprising given the empirical data showing an effect of phase offset on the distributions. Further examination of the free parameters indicated that
<italic>p</italic>
<sub>single</sub>
became a function of phase offset for the CI
<sub>PA</sub>
model (
<xref ref-type="fig" rid="RSPB20140751F5">figure 5</xref>
<italic>d</italic>
). We suggest that these results indicate that participants do not disregard the full phase offset in their inference of a single common beat, but instead account for a proportion of the offset. The CI
<sub>PA</sub>
model was better able to account for differences across participants in the proportion of the phase offset that was discounted, and hence resulted in a better fit. The model results can be taken to show that participants generally underestimate the phase offset actually presented. Similarly, where repeated exposure to a temporal offset between multisensory cues results in temporal recalibration, the recalibration has been found to underestimate the offset, explained by a bias in a neural population coding model [
<xref rid="RSPB20140751C32" ref-type="bibr">32</xref>
].</p>
<p>Finally, it is interesting to speculate about the cortical circuits underlying the behaviour we observed. There is evidence that different areas of the brain are recruited during beat processing versus duration or interval processing [
<xref rid="RSPB20140751C2" ref-type="bibr">2</xref>
,
<xref rid="RSPB20140751C33" ref-type="bibr">33</xref>
]: measuring absolute durations recruits the inferior olive and cerebellum, whereas regular intervals (forming a beat) recruit a striato-thalamo-cortical network [
<xref rid="RSPB20140751C33" ref-type="bibr">33</xref>
]. Here, we added jitter to the metronomes such that the presented cues ranged from an isochronous beat (0 ms jitter) to a highly unpredictable beat (50 ms jitter). We might therefore expect a switch from beat-based to duration-based processing depending on the level of jitter applied to the metronome. However, by integrating the two cues into a single stream, temporal irregularity is minimized, which is likely to emphasize a beat-based structure. Minimizing the variability and extracting a beat maintains a predictive (rather than reactive) timing process [
<xref rid="RSPB20140751C34" ref-type="bibr">34</xref>
], consistent with the typically negative asynchronies we observed relative to the cue onsets.</p>
<p>In conclusion, when synchronizing actions to auditory streams, people determine whether the cues define a common underlying beat or independent beats through Bayesian inference. As an extension of this work, it would be interesting to investigate the presence of causal inference in real group settings (e.g. a string quartet [
<xref rid="RSPB20140751C5" ref-type="bibr">5</xref>
]), using the foundations of the modelling work we have described here.</p>
</sec>
</body>
<back>
<ack>
<title>Acknowledgements</title>
<p>We thank Konrad Körding for helpful suggestions on the development of our model, Ulrik Beierholm for comments on the draft manuscript and Dagmar Fraser for assistance in data collection.</p>
</ack>
<sec id="s5">
<title>Ethics statement</title>
<p>Experimental protocols were approved by the Science, Technology, Engineering and Mathematics Ethical Review Committee at the University of Birmingham.</p>
</sec>
<sec id="s6">
<title>Funding statement</title>
<p>This work was supported by
<funding-source>Engineering and Physical Sciences Research Council</funding-source>
(
<award-id>EP/I031030/1</award-id>
) and
<funding-source>Wellcome Trust</funding-source>
(
<award-id>095183/Z/10/Z</award-id>
) grants.</p>
</sec>
<ref-list>
<title>References</title>
<ref id="RSPB20140751C1">
<label>1</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Grahn</surname>
<given-names>JA</given-names>
</name>
<name>
<surname>Rowe</surname>
<given-names>JB</given-names>
</name>
</person-group>
<year>2012</year>
<article-title>Finding and feeling the musical beat: striatal dissociations between detection and prediction of regularity</article-title>
.
<source>Cereb. Cortex</source>
<volume>23</volume>
,
<fpage>913</fpage>
<lpage>921</lpage>
(
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1093/cercor/bhs083">doi:10.1093/cercor/bhs083</ext-link>
)
<pub-id pub-id-type="pmid">22499797</pub-id>
</mixed-citation>
</ref>
<ref id="RSPB20140751C2">
<label>2</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zatorre</surname>
<given-names>RJ</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>JL</given-names>
</name>
<name>
<surname>Penhune</surname>
<given-names>VB</given-names>
</name>
</person-group>
<year>2007</year>
<article-title>When the brain plays music: auditory-motor interactions in music perception and production</article-title>
.
<source>Nat. Rev. Neurosci.</source>
<volume>8</volume>
,
<fpage>547</fpage>
<lpage>558</lpage>
(
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1038/nrn2152">doi:10.1038/nrn2152</ext-link>
)
<pub-id pub-id-type="pmid">17585307</pub-id>
</mixed-citation>
</ref>
<ref id="RSPB20140751C3">
<label>3</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Grahn</surname>
<given-names>JA</given-names>
</name>
<name>
<surname>Brett</surname>
<given-names>M</given-names>
</name>
</person-group>
<year>2007</year>
<article-title>Rhythm and beat perception in motor areas of the brain</article-title>
.
<source>J. Cogn. Neurosci.</source>
<volume>19</volume>
,
<fpage>893</fpage>
<lpage>906</lpage>
(
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1162/jocn.2007.19.5.893">doi:10.1162/jocn.2007.19.5.893</ext-link>
)
<pub-id pub-id-type="pmid">17488212</pub-id>
</mixed-citation>
</ref>
<ref id="RSPB20140751C4">
<label>4</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mesgarani</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Chang</surname>
<given-names>EF</given-names>
</name>
</person-group>
<year>2012</year>
<article-title>Selective cortical representation of attended speaker in multi-talker speech perception</article-title>
.
<source>Nature</source>
<volume>485</volume>
,
<fpage>233</fpage>
<lpage>236</lpage>
(
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1038/nature11020">doi:10.1038/nature11020</ext-link>
)
<pub-id pub-id-type="pmid">22522927</pub-id>
</mixed-citation>
</ref>
<ref id="RSPB20140751C5">
<label>5</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wing</surname>
<given-names>AM</given-names>
</name>
<name>
<surname>Endo</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Bradbury</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Vorberg</surname>
<given-names>D</given-names>
</name>
</person-group>
<year>2014</year>
<article-title>Optimal feedback correction in string quartet synchronization</article-title>
.
<source>J. R. Soc. Interface</source>
<volume>11</volume>
,
<fpage>20131125</fpage>
(
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1098/rsif.2013.1125">doi:10.1098/rsif.2013.1125</ext-link>
)
<pub-id pub-id-type="pmid">24478285</pub-id>
</mixed-citation>
</ref>
<ref id="RSPB20140751C6">
<label>6</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Moore</surname>
<given-names>BCJ</given-names>
</name>
<name>
<surname>Gockel</surname>
<given-names>HE</given-names>
</name>
</person-group>
<year>2012</year>
<article-title>Properties of auditory stream formation</article-title>
.
<source>Phil. Trans. R. Soc. B</source>
<volume>367</volume>
,
<fpage>919</fpage>
<lpage>931</lpage>
(
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1098/rstb.2011.0355">doi:10.1098/rstb.2011.0355</ext-link>
)
<pub-id pub-id-type="pmid">22371614</pub-id>
</mixed-citation>
</ref>
<ref id="RSPB20140751C7">
<label>7</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Körding</surname>
<given-names>KP</given-names>
</name>
<name>
<surname>Beierholm</surname>
<given-names>U</given-names>
</name>
<name>
<surname>Ma</surname>
<given-names>WJ</given-names>
</name>
<name>
<surname>Quartz</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Tenenbaum</surname>
<given-names>JB</given-names>
</name>
<name>
<surname>Shams</surname>
<given-names>L</given-names>
</name>
</person-group>
<year>2007</year>
<article-title>Causal inference in multisensory perception</article-title>
.
<source>PLoS ONE</source>
<volume>2</volume>
,
<fpage>e943</fpage>
(
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1371/journal.pone.0000943">doi:10.1371/journal.pone.0000943</ext-link>
)
<pub-id pub-id-type="pmid">17895984</pub-id>
</mixed-citation>
</ref>
<ref id="RSPB20140751C8">
<label>8</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Shams</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Beierholm</surname>
<given-names>UR</given-names>
</name>
</person-group>
<year>2010</year>
<article-title>Causal inference in perception</article-title>
.
<source>Trends Cogn. Sci.</source>
<volume>14</volume>
,
<fpage>425</fpage>
<lpage>432</lpage>
(
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.tics.2010.07.001">doi:10.1016/j.tics.2010.07.001</ext-link>
)
<pub-id pub-id-type="pmid">20705502</pub-id>
</mixed-citation>
</ref>
<ref id="RSPB20140751C9">
<label>9</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sato</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Toyoizumi</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Aihara</surname>
<given-names>K</given-names>
</name>
</person-group>
<year>2007</year>
<article-title>Bayesian inference explains perception of unity and ventriloquism aftereffect: identification of common sources of audiovisual stimuli</article-title>
.
<source>Neural Comp.</source>
<volume>19</volume>
,
<fpage>3335</fpage>
<lpage>3355</lpage>
(
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1162/neco.2007.19.12.3335">doi:10.1162/neco.2007.19.12.3335</ext-link>
)</mixed-citation>
</ref>
<ref id="RSPB20140751C10">
<label>10</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Roach</surname>
<given-names>NW</given-names>
</name>
<name>
<surname>Heron</surname>
<given-names>J</given-names>
</name>
<name>
<surname>McGraw</surname>
<given-names>PV</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>Resolving multisensory conflict: a strategy for balancing the costs and benefits of audio-visual integration</article-title>
.
<source>Proc. R. Soc. B</source>
<volume>273</volume>
,
<fpage>2159</fpage>
<lpage>2168</lpage>
(
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1098/rspb.2006.3578">doi:10.1098/rspb.2006.3578</ext-link>
)</mixed-citation>
</ref>
<ref id="RSPB20140751C11">
<label>11</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ernst</surname>
<given-names>MO</given-names>
</name>
<name>
<surname>Banks</surname>
<given-names>MS</given-names>
</name>
</person-group>
<year>2002</year>
<article-title>Humans integrate visual and haptic information in a statistically optimal fashion</article-title>
.
<source>Nature</source>
<volume>415</volume>
,
<fpage>429</fpage>
<lpage>433</lpage>
(
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1038/415429a">doi:10.1038/415429a</ext-link>
)
<pub-id pub-id-type="pmid">11807554</pub-id>
</mixed-citation>
</ref>
<ref id="RSPB20140751C12">
<label>12</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Alais</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Burr</surname>
<given-names>D</given-names>
</name>
</person-group>
<year>2004</year>
<article-title>The ventriloquist effect results from near-optimal bimodal integration</article-title>
.
<source>Curr. Biol.</source>
<volume>14</volume>
,
<fpage>257</fpage>
<lpage>262</lpage>
(
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.cub.2004.01.029">doi:10.1016/j.cub.2004.01.029</ext-link>
)
<pub-id pub-id-type="pmid">14761661</pub-id>
</mixed-citation>
</ref>
<ref id="RSPB20140751C13">
<label>13</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>van Beers</surname>
<given-names>RJ</given-names>
</name>
<name>
<surname>Sittig</surname>
<given-names>AC</given-names>
</name>
<name>
<surname>van der Gon Denier</surname>
<given-names>JJ</given-names>
</name>
</person-group>
<year>1999</year>
<article-title>Integration of proprioceptive and visual position-information: an experimentally supported model</article-title>
.
<source>J. Neurophysiol.</source>
<volume>81</volume>
,
<fpage>1355</fpage>
<lpage>1364</lpage>
<pub-id pub-id-type="pmid">10085361</pub-id>
</mixed-citation>
</ref>
<ref id="RSPB20140751C14">
<label>14</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ban</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Preston</surname>
<given-names>TJ</given-names>
</name>
<name>
<surname>Meeson</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Welchman</surname>
<given-names>AE</given-names>
</name>
</person-group>
<year>2012</year>
<article-title>The integration of motion and disparity cues to depth in dorsal visual cortex</article-title>
.
<source>Nat. Neurosci.</source>
<volume>15</volume>
,
<fpage>636</fpage>
<lpage>643</lpage>
(
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1038/nn.3046">doi:10.1038/nn.3046</ext-link>
)
<pub-id pub-id-type="pmid">22327475</pub-id>
</mixed-citation>
</ref>
<ref id="RSPB20140751C15">
<label>15</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hillis</surname>
<given-names>JM</given-names>
</name>
<name>
<surname>Ernst</surname>
<given-names>MO</given-names>
</name>
<name>
<surname>Banks</surname>
<given-names>MS</given-names>
</name>
<name>
<surname>Landy</surname>
<given-names>MS</given-names>
</name>
</person-group>
<year>2002</year>
<article-title>Combining sensory information: mandatory fusion within, but not between, senses</article-title>
.
<source>Science</source>
<volume>298</volume>
,
<fpage>1627</fpage>
<lpage>1630</lpage>
(
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1126/science.1075396">doi:10.1126/science.1075396</ext-link>
)
<pub-id pub-id-type="pmid">12446912</pub-id>
</mixed-citation>
</ref>
<ref id="RSPB20140751C16">
<label>16</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Knill</surname>
<given-names>DC</given-names>
</name>
<name>
<surname>Saunders</surname>
<given-names>JA</given-names>
</name>
</person-group>
<year>2003</year>
<article-title>Do humans optimally integrate stereo and texture information for judgments of surface slant?</article-title>
<source>Vis. Res.</source>
<volume>43</volume>
,
<fpage>2539</fpage>
<lpage>2558</lpage>
(
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/S0042-6989(03)00458-9">doi:10.1016/S0042-6989(03)00458-9</ext-link>
)
<pub-id pub-id-type="pmid">13129541</pub-id>
</mixed-citation>
</ref>
<ref id="RSPB20140751C17">
<label>17</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Elliott</surname>
<given-names>MT</given-names>
</name>
<name>
<surname>Wing</surname>
<given-names>AM</given-names>
</name>
<name>
<surname>Welchman</surname>
<given-names>AE</given-names>
</name>
</person-group>
<year>2010</year>
<article-title>Multisensory cues improve sensorimotor synchronisation</article-title>
.
<source>Eur. J. Neurosci.</source>
<volume>31</volume>
,
<fpage>1828</fpage>
<lpage>1835</lpage>
(
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1111/j.1460-9568.2010.07205.x">doi:10.1111/j.1460-9568.2010.07205.x</ext-link>
)
<pub-id pub-id-type="pmid">20584187</pub-id>
</mixed-citation>
</ref>
<ref id="RSPB20140751C18">
<label>18</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Elliott</surname>
<given-names>MT</given-names>
</name>
<name>
<surname>Welchman</surname>
<given-names>AE</given-names>
</name>
<name>
<surname>Wing</surname>
<given-names>AM</given-names>
</name>
</person-group>
<year>2009</year>
<article-title>M
<sc>at</sc>
TAP: a M
<sc>atlab</sc>
toolbox for the control and analysis of movement synchronisation experiments</article-title>
.
<source>J. Neurosci. Methods</source>
<volume>177</volume>
,
<fpage>250</fpage>
<lpage>257</lpage>
(
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.jneumeth.2008.10.002">doi:10.1016/j.jneumeth.2008.10.002</ext-link>
)
<pub-id pub-id-type="pmid">18977388</pub-id>
</mixed-citation>
</ref>
<ref id="RSPB20140751C19">
<label>19</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Peters</surname>
<given-names>M</given-names>
</name>
</person-group>
<year>1980</year>
<article-title>Why the preferred hand taps more quickly than the non-preferred hand: three experiments on handedness</article-title>
.
<source>Can. J. Psychol.</source>
<volume>34</volume>
,
<fpage>62</fpage>
<lpage>71</lpage>
(
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1037/h0081014">doi:10.1037/h0081014</ext-link>
)</mixed-citation>
</ref>
<ref id="RSPB20140751C20">
<label>20</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Botev</surname>
<given-names>ZI</given-names>
</name>
<name>
<surname>Grotowski</surname>
<given-names>JF</given-names>
</name>
<name>
<surname>Kroese</surname>
<given-names>DP</given-names>
</name>
</person-group>
<year>2010</year>
<article-title>Kernel density estimation via diffusion</article-title>
.
<source>Ann. Stat.</source>
<volume>38</volume>
,
<fpage>2916</fpage>
<lpage>2957</lpage>
(
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1214/10-AOS799">doi:10.1214/10-AOS799</ext-link>
)</mixed-citation>
</ref>
<ref id="RSPB20140751C21">
<label>21</label>
<mixed-citation publication-type="other">
<collab>Matlab</collab>
<year>2012</year>
<comment>The Mathworks Inc., MA, USA. See
<uri xlink:type="simple" xlink:href="http://www.mathworks.com">http://www.mathworks.com</uri>
</comment>
</mixed-citation>
</ref>
<ref id="RSPB20140751C22">
<label>22</label>
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Vorberg</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Wing</surname>
<given-names>AM</given-names>
</name>
</person-group>
<year>1996</year>
<article-title>Modeling variability and dependence in timing</article-title>
. In
<source>Handbook of perception and action</source>
(eds
<person-group person-group-type="editor">
<name>
<surname>Heuer</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Keele</surname>
<given-names>S</given-names>
</name>
</person-group>
), pp.
<fpage>181</fpage>
<lpage>262</lpage>
<publisher-loc>London, UK</publisher-loc>
:
<publisher-name>Academic Press</publisher-name>
</mixed-citation>
</ref>
<ref id="RSPB20140751C23">
<label>23</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Vorberg</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Schulze</surname>
<given-names>HH</given-names>
</name>
</person-group>
<year>2002</year>
<article-title>Linear phase-correction in synchronization: predictions, parameter estimation, and simulations</article-title>
.
<source>J. Math. Psychol.</source>
<volume>46</volume>
,
<fpage>56</fpage>
<lpage>87</lpage>
(
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1006/jmps.2001.1375">doi:10.1006/jmps.2001.1375</ext-link>
)</mixed-citation>
</ref>
<ref id="RSPB20140751C24">
<label>24</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Repp</surname>
<given-names>BH</given-names>
</name>
</person-group>
<year>2005</year>
<article-title>Sensorimotor synchronization: a review of the tapping literature</article-title>
.
<source>Psychon. Bull. Rev.</source>
<volume>12</volume>
,
<fpage>969</fpage>
<lpage>992</lpage>
(
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.3758/BF03206433">doi:10.3758/BF03206433</ext-link>
)
<pub-id pub-id-type="pmid">16615317</pub-id>
</mixed-citation>
</ref>
<ref id="RSPB20140751C25">
<label>25</label>
<mixed-citation publication-type="other">
<person-group person-group-type="author">
<name>
<surname>Oldenhuis</surname>
<given-names>R</given-names>
</name>
</person-group>
<year>2009</year>
<comment>GODLIKE: a robust single- & multi-objective optimizer</comment>
See
<comment>
<uri xlink:type="simple" xlink:href="http://www.mathworks.co.uk/matlabcentral/fileexchange/24838-godlike-a-robust-single-multi-objective-optimizer">http://www.mathworks.co.uk/matlabcentral/fileexchange/24838-godlike-a-robust-single-multi-objective-optimizer</uri>
</comment>
(
<comment>accessed on 19 April 2013</comment>
).</mixed-citation>
</ref>
<ref id="RSPB20140751C26">
<label>26</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Schwarz</surname>
<given-names>G</given-names>
</name>
</person-group>
<year>1978</year>
<article-title>Estimating the dimension of a model</article-title>
.
<source>Ann. Stat.</source>
<volume>6</volume>
,
<fpage>461</fpage>
<lpage>464</lpage>
(
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1214/aos/1176344136">doi:10.1214/aos/1176344136</ext-link>
)</mixed-citation>
</ref>
<ref id="RSPB20140751C27">
<label>27</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Stocker</surname>
<given-names>AA</given-names>
</name>
<name>
<surname>Simoncelli</surname>
<given-names>EP</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>Noise characteristics and prior expectations in human visual speed perception</article-title>
.
<source>Nat. Neurosci.</source>
<volume>9</volume>
,
<fpage>578</fpage>
<lpage>585</lpage>
(
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1038/nn1669">doi:10.1038/nn1669</ext-link>
)
<pub-id pub-id-type="pmid">16547513</pub-id>
</mixed-citation>
</ref>
<ref id="RSPB20140751C28">
<label>28</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Repp</surname>
<given-names>BH</given-names>
</name>
<name>
<surname>Su</surname>
<given-names>Y-H</given-names>
</name>
</person-group>
<year>2013</year>
<article-title>Sensorimotor synchronization: a review of recent research (2006–2012)</article-title>
.
<source>Psychon. Bull. Rev.</source>
<volume>20</volume>
,
<fpage>403</fpage>
<lpage>452</lpage>
(
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.3758/s13423-012-0371-2">doi:10.3758/s13423-012-0371-2</ext-link>
)
<pub-id pub-id-type="pmid">23397235</pub-id>
</mixed-citation>
</ref>
<ref id="RSPB20140751C29">
<label>29</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wing</surname>
<given-names>AM</given-names>
</name>
<name>
<surname>Doumas</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Welchman</surname>
<given-names>AE</given-names>
</name>
</person-group>
<year>2010</year>
<article-title>Combining multisensory temporal information for movement synchronisation</article-title>
.
<source>Exp. Brain Res.</source>
<volume>200</volume>
,
<fpage>277</fpage>
<lpage>282</lpage>
(
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1007/s00221-009-2134-5">doi:10.1007/s00221-009-2134-5</ext-link>
)
<pub-id pub-id-type="pmid">20039025</pub-id>
</mixed-citation>
</ref>
<ref id="RSPB20140751C30">
<label>30</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Elliott</surname>
<given-names>MT</given-names>
</name>
<name>
<surname>Wing</surname>
<given-names>AM</given-names>
</name>
<name>
<surname>Welchman</surname>
<given-names>AE</given-names>
</name>
</person-group>
<year>2011</year>
<article-title>The effect of ageing on multisensory integration for the control of movement timing</article-title>
.
<source>Exp. Brain Res.</source>
<volume>213</volume>
,
<fpage>291</fpage>
<lpage>298</lpage>
(
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1007/s00221-011-2740-x">doi:10.1007/s00221-011-2740-x</ext-link>
)
<pub-id pub-id-type="pmid">21688143</pub-id>
</mixed-citation>
</ref>
<ref id="RSPB20140751C31">
<label>31</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hanson</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Heron</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Whitaker</surname>
<given-names>D</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>Recalibration of perceived time across sensory modalities</article-title>
.
<source>Exp. Brain Res.</source>
<volume>185</volume>
,
<fpage>347</fpage>
<lpage>352</lpage>
(
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1007/s00221-008-1282-3">doi:10.1007/s00221-008-1282-3</ext-link>
)
<pub-id pub-id-type="pmid">18236035</pub-id>
</mixed-citation>
</ref>
<ref id="RSPB20140751C32">
<label>32</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Roach</surname>
<given-names>NW</given-names>
</name>
<name>
<surname>Heron</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Whitaker</surname>
<given-names>D</given-names>
</name>
<name>
<surname>McGraw</surname>
<given-names>PV</given-names>
</name>
</person-group>
<year>2011</year>
<article-title>Asynchrony adaptation reveals neural population code for audio-visual timing</article-title>
.
<source>Proc. R. Soc. B</source>
<volume>278</volume>
,
<fpage>1314</fpage>
<lpage>1322</lpage>
(
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1098/rspb.2010.1737">doi:10.1098/rspb.2010.1737</ext-link>
)</mixed-citation>
</ref>
<ref id="RSPB20140751C33">
<label>33</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Teki</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Grube</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Kumar</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Griffiths</surname>
<given-names>TD</given-names>
</name>
</person-group>
<year>2011</year>
<article-title>Distinct neural substrates of duration-based and beat-based auditory timing</article-title>
.
<source>J. Neurosci.</source>
<volume>31</volume>
,
<fpage>3805</fpage>
<lpage>3812</lpage>
(
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1523/JNEUROSCI.5561-10.2011">doi:10.1523/JNEUROSCI.5561-10.2011</ext-link>
)
<pub-id pub-id-type="pmid">21389235</pub-id>
</mixed-citation>
</ref>
<ref id="RSPB20140751C34">
<label>34</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Coull</surname>
<given-names>JT</given-names>
</name>
<name>
<surname>Davranche</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Nazarian</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Vidal</surname>
<given-names>F</given-names>
</name>
</person-group>
<year>2013</year>
<article-title>Functional anatomy of timing differs for production versus prediction of time intervals</article-title>
.
<source>Neuropsychologia</source>
<volume>51</volume>
,
<fpage>309</fpage>
<lpage>319</lpage>
(
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.neuropsychologia.2012.08.017">doi:10.1016/j.neuropsychologia.2012.08.017</ext-link>
)
<pub-id pub-id-type="pmid">22964490</pub-id>
</mixed-citation>
</ref>
</ref-list>
</back>
</pmc>
<affiliations>
<list>
<country>
<li>Royaume-Uni</li>
</country>
</list>
<tree>
<country name="Royaume-Uni">
<noRegion>
<name sortKey="Elliott, Mark T" sort="Elliott, Mark T" uniqKey="Elliott M" first="Mark T." last="Elliott">Mark T. Elliott</name>
</noRegion>
<name sortKey="Welchman, Andrew E" sort="Welchman, Andrew E" uniqKey="Welchman A" first="Andrew E." last="Welchman">Andrew E. Welchman</name>
<name sortKey="Wing, Alan M" sort="Wing, Alan M" uniqKey="Wing A" first="Alan M." last="Wing">Alan M. Wing</name>
</country>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Ncbi/Merge
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 003017 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd -nk 003017 | SxmlIndent | more

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Ncbi
   |étape=   Merge
   |type=    RBID
   |clé=     PMC:4046422
   |texte=   Moving in time: Bayesian causal inference explains movement coordination to auditory beats
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/RBID.i   -Sk "pubmed:24850915" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024