Exploration server on haptic devices

Warning: this site is under development!
Warning: this site was generated automatically from raw corpora.
The information has therefore not been validated.

Unimodal and cross-modal prediction is enhanced in musicians

Internal identifier: 000010 (Pmc/Checkpoint); previous: 000009; next: 000011

Authors: Eliana Vassena [Belgium]; Katty Kochman [Belgium]; Julie Latomme [Belgium]; Tom Verguts [Belgium]

Source:

RBID: PMC:4855230

Abstract

Musical training involves exposure to complex auditory and visual stimuli, memorization of elaborate sequences, and extensive motor rehearsal. It has been hypothesized that such multifaceted training may be associated with differences in basic cognitive functions, such as prediction, potentially translating to a facilitation in expert musicians. Moreover, such differences might generalize to non-auditory stimuli. This study was designed to test both hypotheses. We implemented a cross-modal attentional cueing task with auditory and visual stimuli, where a target was preceded by compatible or incompatible cues in mainly compatible (80% compatible, predictable) or random blocks (50% compatible, unpredictable). This allowed for the testing of prediction skills in musicians and controls. Musicians showed increased sensitivity to the statistical structure of the block, expressed as advantage for compatible trials (disadvantage for incompatible trials), but only in the mainly compatible (predictable) blocks. Controls did not show this pattern. The effect held within modalities (auditory, visual), across modalities, and when controlling for short-term memory capacity. These results reveal a striking enhancement in cross-modal prediction in musicians in a very basic cognitive task.


Url:
DOI: 10.1038/srep25225
PubMed: 27142627
PubMed Central: 4855230



The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Unimodal and cross-modal prediction is enhanced in musicians</title>
<author>
<name sortKey="Vassena, Eliana" sort="Vassena, Eliana" uniqKey="Vassena E" first="Eliana" last="Vassena">Eliana Vassena</name>
<affiliation wicri:level="1">
<nlm:aff id="a1">
<institution>Department of Experimental Psychology, Ghent University</institution>
,
<country>Belgium</country>
</nlm:aff>
<country xml:lang="fr">Belgique</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Kochman, Katty" sort="Kochman, Katty" uniqKey="Kochman K" first="Katty" last="Kochman">Katty Kochman</name>
<affiliation wicri:level="1">
<nlm:aff id="a2">
<institution>Institute for Psychoacoustics and Electronic Music</institution>
, Ghent University,
<country>Belgium</country>
</nlm:aff>
<country xml:lang="fr">Belgique</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Latomme, Julie" sort="Latomme, Julie" uniqKey="Latomme J" first="Julie" last="Latomme">Julie Latomme</name>
<affiliation wicri:level="1">
<nlm:aff id="a1">
<institution>Department of Experimental Psychology, Ghent University</institution>
,
<country>Belgium</country>
</nlm:aff>
<country xml:lang="fr">Belgique</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Verguts, Tom" sort="Verguts, Tom" uniqKey="Verguts T" first="Tom" last="Verguts">Tom Verguts</name>
<affiliation wicri:level="1">
<nlm:aff id="a1">
<institution>Department of Experimental Psychology, Ghent University</institution>
,
<country>Belgium</country>
</nlm:aff>
<country xml:lang="fr">Belgique</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">27142627</idno>
<idno type="pmc">4855230</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4855230</idno>
<idno type="RBID">PMC:4855230</idno>
<idno type="doi">10.1038/srep25225</idno>
<date when="2016">2016</date>
<idno type="wicri:Area/Pmc/Corpus">000656</idno>
<idno type="wicri:Area/Pmc/Curation">000656</idno>
<idno type="wicri:Area/Pmc/Checkpoint">000010</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Unimodal and cross-modal prediction is enhanced in musicians</title>
<author>
<name sortKey="Vassena, Eliana" sort="Vassena, Eliana" uniqKey="Vassena E" first="Eliana" last="Vassena">Eliana Vassena</name>
<affiliation wicri:level="1">
<nlm:aff id="a1">
<institution>Department of Experimental Psychology, Ghent University</institution>
,
<country>Belgium</country>
</nlm:aff>
<country xml:lang="fr">Belgique</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Kochman, Katty" sort="Kochman, Katty" uniqKey="Kochman K" first="Katty" last="Kochman">Katty Kochman</name>
<affiliation wicri:level="1">
<nlm:aff id="a2">
<institution>Institute for Psychoacoustics and Electronic Music</institution>
, Ghent University,
<country>Belgium</country>
</nlm:aff>
<country xml:lang="fr">Belgique</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Latomme, Julie" sort="Latomme, Julie" uniqKey="Latomme J" first="Julie" last="Latomme">Julie Latomme</name>
<affiliation wicri:level="1">
<nlm:aff id="a1">
<institution>Department of Experimental Psychology, Ghent University</institution>
,
<country>Belgium</country>
</nlm:aff>
<country xml:lang="fr">Belgique</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Verguts, Tom" sort="Verguts, Tom" uniqKey="Verguts T" first="Tom" last="Verguts">Tom Verguts</name>
<affiliation wicri:level="1">
<nlm:aff id="a1">
<institution>Department of Experimental Psychology, Ghent University</institution>
,
<country>Belgium</country>
</nlm:aff>
<country xml:lang="fr">Belgique</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Scientific Reports</title>
<idno type="eISSN">2045-2322</idno>
<imprint>
<date when="2016">2016</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>Musical training involves exposure to complex auditory and visual stimuli, memorization of elaborate sequences, and extensive motor rehearsal. It has been hypothesized that such multifaceted training may be associated with differences in basic cognitive functions, such as prediction, potentially translating to a facilitation in expert musicians. Moreover, such differences might generalize to non-auditory stimuli. This study was designed to test both hypotheses. We implemented a cross-modal attentional cueing task with auditory and visual stimuli, where a target was preceded by compatible or incompatible cues in mainly compatible (80% compatible, predictable) or random blocks (50% compatible, unpredictable). This allowed for the testing of prediction skills in musicians and controls. Musicians showed increased sensitivity to the statistical structure of the block, expressed as advantage for compatible trials (disadvantage for incompatible trials), but only in the mainly compatible (predictable) blocks. Controls did not show this pattern. The effect held within modalities (auditory, visual), across modalities, and when controlling for short-term memory capacity. These results reveal a striking enhancement in cross-modal prediction in musicians in a very basic cognitive task.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Cross, I" uniqKey="Cross I">I. Cross</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mithen, S" uniqKey="Mithen S">S. Mithen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Savage, P E" uniqKey="Savage P">P. E. Savage</name>
</author>
<author>
<name sortKey="Brown, S" uniqKey="Brown S">S. Brown</name>
</author>
<author>
<name sortKey="Sakai, E" uniqKey="Sakai E">E. Sakai</name>
</author>
<author>
<name sortKey="Currie, T E" uniqKey="Currie T">T. E. Currie</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vuust, P" uniqKey="Vuust P">P. Vuust</name>
</author>
<author>
<name sortKey="Gebauer, L K" uniqKey="Gebauer L">L. K. Gebauer</name>
</author>
<author>
<name sortKey="Witek, M A G" uniqKey="Witek M">M. A. G. Witek</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Maes, P J" uniqKey="Maes P">P.-J. Maes</name>
</author>
<author>
<name sortKey="Leman, M" uniqKey="Leman M">M. Leman</name>
</author>
<author>
<name sortKey="Palmer, C" uniqKey="Palmer C">C. Palmer</name>
</author>
<author>
<name sortKey="Wanderley, M M" uniqKey="Wanderley M">M. M. Wanderley</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zatorre, R J" uniqKey="Zatorre R">R. J. Zatorre</name>
</author>
<author>
<name sortKey="Chen, J L" uniqKey="Chen J">J. L. Chen</name>
</author>
<author>
<name sortKey="Penhune, V B" uniqKey="Penhune V">V. B. Penhune</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zatorre, R" uniqKey="Zatorre R">R. Zatorre</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zatorre, R J" uniqKey="Zatorre R">R. J. Zatorre</name>
</author>
<author>
<name sortKey="Salimpoor, V N" uniqKey="Salimpoor V">V. N. Salimpoor</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zuk, J" uniqKey="Zuk J">J. Zuk</name>
</author>
<author>
<name sortKey="Benjamin, C" uniqKey="Benjamin C">C. Benjamin</name>
</author>
<author>
<name sortKey="Kenyon, A" uniqKey="Kenyon A">A. Kenyon</name>
</author>
<author>
<name sortKey="Gaab, N" uniqKey="Gaab N">N. Gaab</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Herholz, S C" uniqKey="Herholz S">S. C. Herholz</name>
</author>
<author>
<name sortKey="Zatorre, R J" uniqKey="Zatorre R">R. J. Zatorre</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hodges, D A" uniqKey="Hodges D">D. A. Hodges</name>
</author>
<author>
<name sortKey="Hairston, W D" uniqKey="Hairston W">W. D. Hairston</name>
</author>
<author>
<name sortKey="Burdette, J H" uniqKey="Burdette J">J. H. Burdette</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Carey, D" uniqKey="Carey D">D. Carey</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kraus, N" uniqKey="Kraus N">N. Kraus</name>
</author>
<author>
<name sortKey="Chandrasekaran, B" uniqKey="Chandrasekaran B">B. Chandrasekaran</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lim, A" uniqKey="Lim A">A. Lim</name>
</author>
<author>
<name sortKey="Sinnett, S" uniqKey="Sinnett S">S. Sinnett</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Moreno, S" uniqKey="Moreno S">S. Moreno</name>
</author>
<author>
<name sortKey="Bidelman, G M" uniqKey="Bidelman G">G. M. Bidelman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rauscher, F H" uniqKey="Rauscher F">F. H. Rauscher</name>
</author>
<author>
<name sortKey="Shaw, G L" uniqKey="Shaw G">G. L. Shaw</name>
</author>
<author>
<name sortKey="Ky, K N" uniqKey="Ky K">K. N. Ky</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schellenberg, E G" uniqKey="Schellenberg E">E. G. Schellenberg</name>
</author>
<author>
<name sortKey="Macdonald, R" uniqKey="Macdonald R">R. MacDonald</name>
</author>
<author>
<name sortKey="Kreuz, G" uniqKey="Kreuz G">G. Kreuz</name>
</author>
<author>
<name sortKey="Mitchell, L" uniqKey="Mitchell L">L. Mitchell</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Aagten Murphy, D" uniqKey="Aagten Murphy D">D. Aagten-Murphy</name>
</author>
<author>
<name sortKey="Cappagli, G" uniqKey="Cappagli G">G. Cappagli</name>
</author>
<author>
<name sortKey="Burr, D" uniqKey="Burr D">D. Burr</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bishop, L" uniqKey="Bishop L">L. Bishop</name>
</author>
<author>
<name sortKey="Goebl, W" uniqKey="Goebl W">W. Goebl</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Helmbold, N" uniqKey="Helmbold N">N. Helmbold</name>
</author>
<author>
<name sortKey="Rammsayer, T" uniqKey="Rammsayer T">T. Rammsayer</name>
</author>
<author>
<name sortKey="Altenmuller, E" uniqKey="Altenmuller E">E. Altenmüller</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Patston, L L" uniqKey="Patston L">L. L. Patston</name>
</author>
<author>
<name sortKey="Hogg, S L" uniqKey="Hogg S">S. L. Hogg</name>
</author>
<author>
<name sortKey="Tippett, L J" uniqKey="Tippett L">L. J. Tippett</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Moreno, S" uniqKey="Moreno S">S. Moreno</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Strait, D L" uniqKey="Strait D">D. L. Strait</name>
</author>
<author>
<name sortKey="Kraus, N" uniqKey="Kraus N">N. Kraus</name>
</author>
<author>
<name sortKey="Parbery Clark, A" uniqKey="Parbery Clark A">A. Parbery-Clark</name>
</author>
<author>
<name sortKey="Ashley, R" uniqKey="Ashley R">R. Ashley</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tierney, A T" uniqKey="Tierney A">A. T. Tierney</name>
</author>
<author>
<name sortKey="Bergeson Dana, T R" uniqKey="Bergeson Dana T">T. R. Bergeson-Dana</name>
</author>
<author>
<name sortKey="Pisoni, D B" uniqKey="Pisoni D">D. B. Pisoni</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lee, H" uniqKey="Lee H">H. Lee</name>
</author>
<author>
<name sortKey="Noppeney, U" uniqKey="Noppeney U">U. Noppeney</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Friston, K" uniqKey="Friston K">K. Friston</name>
</author>
<author>
<name sortKey="Kiebel, S" uniqKey="Kiebel S">S. Kiebel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lee, H" uniqKey="Lee H">H. Lee</name>
</author>
<author>
<name sortKey="Noppeney, U" uniqKey="Noppeney U">U. Noppeney</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Denouden, H E M D" uniqKey="Denouden H">H. E. M. D. DenOuden</name>
</author>
<author>
<name sortKey="Friston, K J" uniqKey="Friston K">K. J. Friston</name>
</author>
<author>
<name sortKey="Daw, N D" uniqKey="Daw N">N. D. Daw</name>
</author>
<author>
<name sortKey="Mcintosh, A R" uniqKey="Mcintosh A">A. R. McIntosh</name>
</author>
<author>
<name sortKey="Stephan, K E" uniqKey="Stephan K">K. E. Stephan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vassena, E" uniqKey="Vassena E">E. Vassena</name>
</author>
<author>
<name sortKey="Krebs, R M" uniqKey="Krebs R">R. M. Krebs</name>
</author>
<author>
<name sortKey="Silvetti, M" uniqKey="Silvetti M">M. Silvetti</name>
</author>
<author>
<name sortKey="Fias, W" uniqKey="Fias W">W. Fias</name>
</author>
<author>
<name sortKey="Verguts, T" uniqKey="Verguts T">T. Verguts</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Clark, A" uniqKey="Clark A">A. Clark</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Summerfield, C" uniqKey="Summerfield C">C. Summerfield</name>
</author>
<author>
<name sortKey="Egner, T" uniqKey="Egner T">T. Egner</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vuust, P" uniqKey="Vuust P">P. Vuust</name>
</author>
<author>
<name sortKey="Witek, M A G" uniqKey="Witek M">M. A. G. Witek</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Maes, P J" uniqKey="Maes P">P.-J. Maes</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schaefer, R S" uniqKey="Schaefer R">R. S. Schaefer</name>
</author>
<author>
<name sortKey="Overy, K" uniqKey="Overy K">K. Overy</name>
</author>
<author>
<name sortKey="Nelson, P" uniqKey="Nelson P">P. Nelson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kuchenbuch, A" uniqKey="Kuchenbuch A">A. Kuchenbuch</name>
</author>
<author>
<name sortKey="Paraskevopoulos, E" uniqKey="Paraskevopoulos E">E. Paraskevopoulos</name>
</author>
<author>
<name sortKey="Herholz, S C" uniqKey="Herholz S">S. C. Herholz</name>
</author>
<author>
<name sortKey="Pantev, C" uniqKey="Pantev C">C. Pantev</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Oechslin, M S" uniqKey="Oechslin M">M. S. Oechslin</name>
</author>
<author>
<name sortKey="Van De Ville, D" uniqKey="Van De Ville D">D. Van De Ville</name>
</author>
<author>
<name sortKey="Lazeyras, F" uniqKey="Lazeyras F">F. Lazeyras</name>
</author>
<author>
<name sortKey="Hauert, C A" uniqKey="Hauert C">C.-A. Hauert</name>
</author>
<author>
<name sortKey="James, C E" uniqKey="James C">C. E. James</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vuust, P" uniqKey="Vuust P">P. Vuust</name>
</author>
<author>
<name sortKey="Ostergaard, L" uniqKey="Ostergaard L">L. Ostergaard</name>
</author>
<author>
<name sortKey="Pallesen, K J" uniqKey="Pallesen K">K. J. Pallesen</name>
</author>
<author>
<name sortKey="Bailey, C" uniqKey="Bailey C">C. Bailey</name>
</author>
<author>
<name sortKey="Roepstorff, A" uniqKey="Roepstorff A">A. Roepstorff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Maidhof, C" uniqKey="Maidhof C">C. Maidhof</name>
</author>
<author>
<name sortKey="Rieger, M" uniqKey="Rieger M">M. Rieger</name>
</author>
<author>
<name sortKey="Prinz, W" uniqKey="Prinz W">W. Prinz</name>
</author>
<author>
<name sortKey="Koelsch, S" uniqKey="Koelsch S">S. Koelsch</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wilcox, R R" uniqKey="Wilcox R">R. R. Wilcox</name>
</author>
<author>
<name sortKey="Keselman, H J" uniqKey="Keselman H">H. J. Keselman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fischer, R" uniqKey="Fischer R">R. Fischer</name>
</author>
<author>
<name sortKey="Dreisbach, G" uniqKey="Dreisbach G">G. Dreisbach</name>
</author>
<author>
<name sortKey="Goschke, T" uniqKey="Goschke T">T. Goschke</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schmidhuber, J" uniqKey="Schmidhuber J">J. Schmidhuber</name>
</author>
<author>
<name sortKey="Pezzulo, G" uniqKey="Pezzulo G">G. Pezzulo</name>
</author>
<author>
<name sortKey="Butz, M V" uniqKey="Butz M">M. V. Butz</name>
</author>
<author>
<name sortKey="Sigaud, O" uniqKey="Sigaud O">O. Sigaud</name>
</author>
<author>
<name sortKey="Baldassarre, G" uniqKey="Baldassarre G">G. Baldassarre</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Patel, A D" uniqKey="Patel A">A. D. Patel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Patel, A D" uniqKey="Patel A">A. D. Patel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cameron, D J" uniqKey="Cameron D">D. J. Cameron</name>
</author>
<author>
<name sortKey="Grahn, J A" uniqKey="Grahn J">J. A. Grahn</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="O Oherty, J P" uniqKey="O Oherty J">J. P. O’Doherty</name>
</author>
<author>
<name sortKey="Dayan, P" uniqKey="Dayan P">P. Dayan</name>
</author>
<author>
<name sortKey="Friston, K" uniqKey="Friston K">K. Friston</name>
</author>
<author>
<name sortKey="Critchley, H" uniqKey="Critchley H">H. Critchley</name>
</author>
<author>
<name sortKey="Dolan, R J" uniqKey="Dolan R">R. J. Dolan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wacongne, C" uniqKey="Wacongne C">C. Wacongne</name>
</author>
<author>
<name sortKey="Changeux, J P" uniqKey="Changeux J">J.-P. Changeux</name>
</author>
<author>
<name sortKey="Dehaene, S" uniqKey="Dehaene S">S. Dehaene</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Daneman, M" uniqKey="Daneman M">M. Daneman</name>
</author>
<author>
<name sortKey="Carpenter, P A" uniqKey="Carpenter P">P. A. Carpenter</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Corsi, P" uniqKey="Corsi P">P. Corsi</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Sci Rep</journal-id>
<journal-id journal-id-type="iso-abbrev">Sci Rep</journal-id>
<journal-title-group>
<journal-title>Scientific Reports</journal-title>
</journal-title-group>
<issn pub-type="epub">2045-2322</issn>
<publisher>
<publisher-name>Nature Publishing Group</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">27142627</article-id>
<article-id pub-id-type="pmc">4855230</article-id>
<article-id pub-id-type="pii">srep25225</article-id>
<article-id pub-id-type="doi">10.1038/srep25225</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Unimodal and cross-modal prediction is enhanced in musicians</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Vassena</surname>
<given-names>Eliana</given-names>
</name>
<xref ref-type="corresp" rid="c1">a</xref>
<xref ref-type="aff" rid="a1">1</xref>
<xref ref-type="author-notes" rid="n1">*</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Kochman</surname>
<given-names>Katty</given-names>
</name>
<xref ref-type="aff" rid="a2">2</xref>
<xref ref-type="author-notes" rid="n1">*</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Latomme</surname>
<given-names>Julie</given-names>
</name>
<xref ref-type="aff" rid="a1">1</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Verguts</surname>
<given-names>Tom</given-names>
</name>
<xref ref-type="aff" rid="a1">1</xref>
</contrib>
<aff id="a1">
<label>1</label>
<institution>Department of Experimental Psychology, Ghent University</institution>
,
<country>Belgium</country>
</aff>
<aff id="a2">
<label>2</label>
<institution>Institute for Psychoacoustics and Electronic Music</institution>
, Ghent University,
<country>Belgium</country>
</aff>
</contrib-group>
<author-notes>
<corresp id="c1">
<label>a</label>
<email>eliana.vassena@ugent.be</email>
</corresp>
<fn id="n1">
<label>*</label>
<p>These authors contributed equally to this work.</p>
</fn>
</author-notes>
<pub-date pub-type="epub">
<day>04</day>
<month>05</month>
<year>2016</year>
</pub-date>
<pub-date pub-type="collection">
<year>2016</year>
</pub-date>
<volume>6</volume>
<elocation-id>25225</elocation-id>
<history>
<date date-type="received">
<day>14</day>
<month>12</month>
<year>2015</year>
</date>
<date date-type="accepted">
<day>06</day>
<month>04</month>
<year>2016</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright © 2016, Macmillan Publishers Limited</copyright-statement>
<copyright-year>2016</copyright-year>
<copyright-holder>Macmillan Publishers Limited</copyright-holder>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/4.0/">
<pmc-comment>author-paid</pmc-comment>
<license-p>This work is licensed under a Creative Commons Attribution 4.0 International License. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in the credit line; if the material is not included under the Creative Commons license, users will need to obtain permission from the license holder to reproduce the material. To view a copy of this license, visit
<ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by/4.0/">http://creativecommons.org/licenses/by/4.0/</ext-link>
</license-p>
</license>
</permissions>
<abstract>
<p>Musical training involves exposure to complex auditory and visual stimuli, memorization of elaborate sequences, and extensive motor rehearsal. It has been hypothesized that such multifaceted training may be associated with differences in basic cognitive functions, such as prediction, potentially translating to a facilitation in expert musicians. Moreover, such differences might generalize to non-auditory stimuli. This study was designed to test both hypotheses. We implemented a cross-modal attentional cueing task with auditory and visual stimuli, where a target was preceded by compatible or incompatible cues in mainly compatible (80% compatible, predictable) or random blocks (50% compatible, unpredictable). This allowed for the testing of prediction skills in musicians and controls. Musicians showed increased sensitivity to the statistical structure of the block, expressed as advantage for compatible trials (disadvantage for incompatible trials), but only in the mainly compatible (predictable) blocks. Controls did not show this pattern. The effect held within modalities (auditory, visual), across modalities, and when controlling for short-term memory capacity. These results reveal a striking enhancement in cross-modal prediction in musicians in a very basic cognitive task.</p>
</abstract>
</article-meta>
</front>
<body>
<p>Music is a universal attribute of all human cultures and is pervasive in daily life
<xref ref-type="bibr" rid="b1">1</xref>
<xref ref-type="bibr" rid="b2">2</xref>
<xref ref-type="bibr" rid="b3">3</xref>
. Advanced musical practice involves skills in processing various kinds of stimuli. First, musicians are highly trained in the memorization of auditory stimuli, from simple tones to complex rhythms and harmonic structures
<xref ref-type="bibr" rid="b4">4</xref>
. Second, musicians read symbolic visual stimuli associated with those tones. Third, musicians produce tones by performing automatized, refined actions involving haptic feedback in coordination with their environment
<xref ref-type="bibr" rid="b5">5</xref>
<xref ref-type="bibr" rid="b6">6</xref>
. These components are combined in a rapidly evolving processing stream, and yet organized in a meaningful sequence, which produces the pleasant stimulus the listener perceives as music
<xref ref-type="bibr" rid="b7">7</xref>
<xref ref-type="bibr" rid="b8">8</xref>
.</p>
<p>The complexity of musical training suggests that consistent exposure and expertise may be associated with measurable effects on several cognitive functions
<xref ref-type="bibr" rid="b9">9</xref>
, as well as on brain plasticity
<xref ref-type="bibr" rid="b10">10</xref>
. Musicians show enhanced auditory-perception skills, such as pitch discrimination, temporal order judgment
<xref ref-type="bibr" rid="b11">11</xref>
and discrimination of psychoacoustic features
<xref ref-type="bibr" rid="b12">12</xref>
. Moreover, structural and functional changes in brain regions dedicated to auditory processing have been consistently reported
<xref ref-type="bibr" rid="b13">13</xref>
. These auditory-perceptual advantages suggest that musical expertise is associated with improved temporal discrimination and attentional capacity
<xref ref-type="bibr" rid="b14">14</xref>
.</p>
<p>The generalization of these benefits to other cognitive functions remains debated
<xref ref-type="bibr" rid="b15">15</xref>
. A popular study reported that exposure to a 10-minute fragment by Mozart improved spatial reasoning
<xref ref-type="bibr" rid="b16">16</xref>
. This result had great media resonance and was dubbed the “Mozart effect” by the press, conveying the idea that classical music could improve cognitive skills. Although not consistently replicated
<xref ref-type="bibr" rid="b17">17</xref>
, this result stimulated further research testing whether musical training provides benefits beyond auditory perception, yielding controversial results.</p>
<p>On the one hand, several studies have reported advantages for musicians in diverse cognitive domains. Musicians have shown better reproduction of both auditory and visual time intervals
<xref ref-type="bibr" rid="b18">18</xref>
, as well as better reproduction of multimodal sequences
<xref ref-type="bibr" rid="b12">12</xref>
. Musicians have also performed better in judging whether auditory and visual information was presented synchronously or asynchronously in musical videoclips
<xref ref-type="bibr" rid="b19">19</xref>
. Musicians also outperform controls in attention and visuo-spatial tasks such as detecting single elements in complex objects, detecting letters among digits
<xref ref-type="bibr" rid="b20">20</xref>
, and line bisection
<xref ref-type="bibr" rid="b21">21</xref>
. Finally, children exposed to nine months of musical training have shown improved reading abilities as well as pitch discrimination in speech
<xref ref-type="bibr" rid="b22">22</xref>
. These findings suggest a cross-modal transfer of benefits for musicians beyond the musical domain and beyond the auditory modality.</p>
<p>On the other hand, several studies have found selective benefits to musical and auditory processing, with no generalization to other modalities. For example, advantages in attentional performance were reported only for auditory, but not for visual attention tasks
<xref ref-type="bibr" rid="b12">12</xref>
<xref ref-type="bibr" rid="b23">23</xref>
. Also, musicians proved selectively better at reproducing auditory sequences but not audio-visual sequences
<xref ref-type="bibr" rid="b24">24</xref>
. Additionally, no advantage was reported in learning sequence structure after passive listening. Lastly, musicians showed improved detection of audio-visual asynchrony but only with music and not with speech
<xref ref-type="bibr" rid="b25">25</xref>
. However, an important caveat is that most of these studies present correlational evidence, showing better performance in musicians compared to controls. Although informative, such evidence does not allow one to infer a causal influence of musical training on cognitive skills (be it selective to the auditory and musical domain or more general). Causal evidence remains sparse in the literature and should receive further attention in future research.</p>
<p>This overview suggests auditory processing benefits, possibly deriving from the extensive expertise that musicians acquire in fast-scale temporal processing. However, the evidence for advantages beyond the auditory domain remains mixed.</p>
<p>A potential benefit on the core cognitive process of prediction has been hypothesized, but not directly tested. Prominent cognitive theories such as predictive coding and reinforcement learning suggest that cognitive processing proceeds by prediction
<xref ref-type="bibr" rid="b26">26</xref>
<xref ref-type="bibr" rid="b27">27</xref>
<xref ref-type="bibr" rid="b28">28</xref>
<xref ref-type="bibr" rid="b29">29</xref>
<xref ref-type="bibr" rid="b30">30</xref>
. In this framework, each stimulus or sequence leads to a prediction of the upcoming stimulus. Predicted and actual stimuli are compared, and their discrepancy is termed the prediction error. Prediction errors drive both cognitive processing and learning
<xref ref-type="bibr" rid="b31">31</xref>
. Perception of musical rhythm and meter has been framed in the context of predictive theories
<xref ref-type="bibr" rid="b32">32</xref>
, as well as the relationship between perception and action in musical performance
<xref ref-type="bibr" rid="b33">33</xref>
. Error (and prediction error) minimization is the core concept shared by these accounts. Furthermore, it has been proposed that the surprise associated with prediction error carries an affective component: an ideal amount of surprise (not too much, not too little) drives the affective reaction to music and guides expert musicians in pleasing their audience
<xref ref-type="bibr" rid="b34">34</xref>
. Empirically, studies have reported an advantage for musicians in detecting auditory prediction errors, with both simple sounds and complex harmonic structures
<xref ref-type="bibr" rid="b35">35</xref>
<xref ref-type="bibr" rid="b36">36</xref>
. The neural signature of deviance detection also reflects this facilitation, and rhythmic deviance likewise elicits error-related neural activity
<xref ref-type="bibr" rid="b37">37</xref>
. Moreover, expert musicians have shown neural correlates of error detection even before performing an incorrect action, presumably arising from continuous, fast monitoring of predictions and outcomes
<xref ref-type="bibr" rid="b38">38</xref>
.</p>
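To make the prediction-error idea concrete, here is a minimal delta-rule sketch in Python (purely illustrative, not the authors' model; the learning rate alpha is an assumed parameter):

# Delta-rule sketch of prediction-error learning (illustrative only).
def delta_rule_update(v, outcome, alpha=0.1):
    prediction_error = outcome - v         # discrepancy between predicted and actual stimulus
    return v + alpha * prediction_error    # prediction shifts toward the observed outcome

v = 0.5                                    # initial expectation
for outcome in [1, 1, 0, 1, 1, 1]:         # a mostly predictable binary sequence
    v = delta_rule_update(v, outcome)      # v converges toward the dominant outcome

For a stable sequence, each update reduces future prediction errors, which is the sense in which prediction errors drive learning.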
<p>Taken together, these findings support a pivotal role of prediction in music perception and performance. A further intriguing possibility is that musical expertise might be associated with improved prediction skills. The goal of the current study was to test this hypothesis with a standard cognitive task involving basic stimulus-outcome prediction skills, outside the musical domain. Given that prediction applies to any stimulus sequence irrespective of modality, we hypothesized that this facilitation may extend to non-auditory (e.g., visual) and even cross-modal sequences (i.e., when an auditory stimulus is predictive of a visual one and vice versa). The advantage should manifest as increased sensitivity to prediction errors, as a consequence of increased encoding of the statistical structure of the environment in both unimodal and cross-modal conditions. To test this, we implemented a cross-modal cueing paradigm with auditory and visual stimuli, manipulating the level of predictability (and thus of prediction error) through different frequencies of compatible and incompatible cue-target pairs. Moreover, we administered two control tasks to measure verbal and visuo-spatial short-term memory, in order to determine whether the hypothesized difference would be specific to prediction or simply attributable to differences in short-term memory capacity.</p>
<sec disp-level="1">
<title>Results</title>
<p>In the cross-modal cueing task, overall accuracy was 78% ± 0.4. Accuracy rates were averaged for each subject and for each condition, and subjected to a rANOVA. No significant effect of group was found (
<italic>F</italic>
<sub>(1,27)</sub>
 = 0.524,
<italic>p</italic>
 = 0.48), showing that accuracy did not differ between musicians and controls. A significant main effect of compatibility was observed (
<italic>F</italic>
<sub>(1,28)</sub>
 = 4.57,
<italic>p</italic>
 < 0.05,
<italic>η</italic>
<sup>2</sup>
 = 0.14), with higher accuracy for compatible trials (M = 0.96 ± 0.02) relative to incompatible trials (M = 0.95 ± 0.02).</p>
<p>RTs were averaged for each subject and for each condition. Error trials (4.7%) were excluded from further analysis. To minimize the impact of outliers, trials with RTs more than 2.5 standard deviations above or below the individual mean were also excluded (2.9%). Trimming means by removing outlying observations is a common way of making the mean a more robust measure of central tendency
<xref ref-type="bibr" rid="b39">39</xref>
, and a 2.5 standard deviations cut-off is a commonly used convention in the field
<xref ref-type="bibr" rid="b40">40</xref>
. Subsequently, we tested the assumption of normality of the residuals with the Shapiro-Wilk test. All p-values were larger than 0.05, confirming that the residuals were normally distributed, although for 2 of the 16 conditions the p-values were rather small (0.08 and 0.06). To test the robustness of our results, we log-transformed the data and ran the main analysis again. All significant main effects and interactions reported in the main analysis were preserved when tested on the log-transformed data.</p>
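For illustration, the per-subject 2.5-standard-deviation trimming described above can be sketched as follows (a minimal sketch assuming the RTs are held in a NumPy array; not the authors' actual analysis script):

import numpy as np

def trim_rts(rts, cutoff=2.5):
    # Keep only RTs within `cutoff` standard deviations of the subject's mean.
    rts = np.asarray(rts, dtype=float)
    mean, sd = rts.mean(), rts.std()
    return rts[np.abs(rts - mean) <= cutoff * sd]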
<p>Crucially, the rANOVA revealed a significant interaction group × compatibility frequency × compatibility (
<italic>F</italic>
<sub>(1,28)</sub>
 = 6.24,
<italic>p</italic>
 < 0.05,
<italic>η</italic>
<sup>2</sup>
 = 0.18, see
<xref ref-type="fig" rid="f1">Fig. 1</xref>
), with musicians showing a stronger influence of compatibility frequency (enhanced compatibility effect in the 80/20 condition) as compared to controls.</p>
<p>Pairwise comparisons revealed a significant difference for musicians between compatible and incompatible trials in the 80/20 condition (
<italic>t</italic>
<sub>(14)</sub>
 = −20.17,
<italic>p</italic>
 = 0.001), but not in the 50/50 condition (
<italic>t</italic>
<sub>(14)</sub>
 = −4.9,
<italic>p</italic>
 = 0.21). Thus, musicians were relatively disadvantaged for incompatible targets, but only in the 80/20 condition. Conversely, controls showed no difference between compatible and incompatible trials in the 80/20 condition (
<italic>t</italic>
<sub>(14)</sub>
 = −1.68,
<italic>p</italic>
 = 0.12) but did show a small difference between compatible and incompatible trials in the 50/50 condition (
<italic>t</italic>
<sub>(14)</sub>
 = −2.61,
<italic>p</italic>
 = 0.02). However, only the difference for musicians between compatible and incompatible trials in the 80/20 condition remained significant after Bonferroni correction for multiple comparisons. This interaction shows increased sensitivity in musicians to compatibility frequency, suggesting a better representation of the statistical structure of the block, which translated into an increased compatibility effect when incompatible trials were less frequent. Crucially, neither cue modality (
<italic>F</italic>
<sub>(1,28)</sub>
 = 0.01,
<italic>p</italic>
 = 0.93) nor target modality (
<italic>F</italic>
<sub>(1,28)</sub>
 = 0.73,
<italic>p</italic>
 = 0.4) interacted with the three-way group × compatibility frequency × compatibility interaction, indicating that the effect holds across cue and target modalities.</p>
<p>Furthermore, a main effect of group was observed (
<italic>F</italic>
<sub>(1,28)</sub>
 = 10.49,
<italic>p</italic>
 < 0.01,
<italic>η</italic>
<sup>2</sup>
 = 0.27), with musicians showing overall faster RTs than controls. Pairwise comparisons across compatibility frequency and compatibility conditions revealed that musicians reacted faster than controls in all conditions: to compatible (C) and incompatible (IC) targets in random blocks (C
<italic>t</italic>
<sub>(28)</sub>
 = −3.39, p = 0.002, IC
<italic>t</italic>
<sub>(28)</sub>
 = −3.46,
<italic>p</italic>
 = 0.002), and to compatible and incompatible targets in mainly compatible blocks (C
<italic>t</italic>
<sub>(28)</sub>
 = −3.6, p = 0.001, IC
<italic>t</italic>
<sub>(28)</sub>
 = −2.37,
<italic>p</italic>
 = 0.025). This last comparison, however, was not significant after Bonferroni correction for multiple comparisons.</p>
<p>Additionally, there was a main effect of target modality (
<italic>F</italic>
<sub>(1,28)</sub>
 = 189.51,
<italic>p</italic>
 < 0.001,
<italic>η</italic>
<sup>2</sup>
 = 0.87), with faster RTs to visual targets, and a main effect of compatibility (
<italic>F</italic>
<sub>(1,28)</sub>
 = 16.61,
<italic>p</italic>
 < 0.001,
<italic>η</italic>
<sup>2</sup>
 = 0.37), with faster RTs in compatible trials. A significant group × target modality interaction was also observed (
<italic>F</italic>
<sub>(1,28)</sub>
 = 5.6,
<italic>p</italic>
 < 0.05,
<italic>η</italic>
<sup>2</sup>
 = 0.17): Musicians responded faster to visual targets compared to auditory targets (
<italic>t</italic>
<sub>(14)</sub>
 = −7.81,
<italic>p</italic>
 < 0.001, mean difference −87.6 ms); controls also responded faster to visual targets (
<italic>t</italic>
<sub>(14)</sub>
 = −11.8,
<italic>p</italic>
 < 0.001, mean difference −123.9 ms); however, the difference for controls was larger, thus driving the interaction. This interaction might reflect a facilitation for musicians in responding to auditory stimuli. Furthermore, there was a significant cue modality × target modality interaction (
<italic>F</italic>
<sub>(1,28)</sub>
 = 32.93,
<italic>p</italic>
 < 0.001,
<italic>η</italic>
<sup>2</sup>
 = 0.54), with faster RTs to visual as compared to auditory targets (
<italic>t</italic>
<sub>(29)</sub>
 = −12.79,
<italic>p</italic>
 < 0.001), but no significant difference between visual and auditory cues (
<italic>t</italic>
<sub>(29)</sub>
 = 0.21,
<italic>p</italic>
 = 0.84). A target modality × compatibility frequency interaction was also observed (
<italic>F</italic>
<sub>(1,28)</sub>
 = 5.29,
<italic>p</italic>
 < 0.05,
<italic>η</italic>
<sup>2</sup>
 = 0.16): RTs were faster for visual compared to auditory targets in both the 80/20 condition (
<italic>t</italic>
<sub>(29)</sub>
 = −12.38,
<italic>p</italic>
 < 0.001, mean difference = −101.37 ms) and 50/50 condition (
<italic>t</italic>
<sub>(29)</sub>
 = −12.56,
<italic>p</italic>
 < 0.001, mean difference = −110.18 ms), with a larger difference for the latter.</p>
<p>Subsequently, performance on the short-term memory tasks was analyzed. The overall verbal short-term memory capacity score was 74.03 ± 6.01. No significant differences were observed between musicians (M = 76.2 ± 7.54) and controls (M = 71.87 ± 9.6,
<italic>t</italic>
<sub>(28)</sub>
 = 0.34,
<italic>p</italic>
 = 0.73). The overall visuo-spatial short-term memory capacity score was 74.4 ± 4.36. No significant differences were reported between musicians (M = 79.8 ± 7.01) and controls (M = 74.4 ± 4.36,
<italic>t</italic>
<sub>(28)</sub>
 = 1.28,
<italic>p</italic>
 = 0.22).</p>
<p>Although there was no group effect in short-term memory, in order to further ensure that the RT effects in the cross-modal cueing task could not be explained by differences in short-term memory capacity, the main rANOVA on RTs was repeated, including both short-term memory scores as covariates. No significant interaction of any factor with short-term memory scores was observed. Moreover, the key group × compatibility frequency × compatibility interaction was preserved (
<italic>F</italic>
<sub>(1,28)</sub>
 = 7.07,
<italic>p</italic>
 < 0.05,
<italic>η</italic>
<sup>2</sup>
 = 0.21), showing that the core finding (
<xref ref-type="fig" rid="f1">Fig. 1</xref>
) is not driven by differences in short-term memory capacity.</p>
</sec>
<sec disp-level="1">
<title>Discussion</title>
<p>This study investigated the basic cognitive skill of prediction in musicians and non-musicians. We hypothesized an advantage for musicians in encoding predictable event sequences. The results can be summarized as follows. First, musicians showed enhanced prediction relative to controls, expressed as increased sensitivity to statistical block structure (compatibility frequency) in a very basic cueing task. Second, modulation by musical expertise held across modalities, revealing a striking cross-modal generalization. This shows increased prediction skills in musicians as compared to controls irrespective of event modality. Third, enhanced prediction could not be explained by short-term memory differences.</p>
<p>Earlier work addressed the role of prediction in music and auditory processing. One conclusion was that regular sound sequences generate predictions and prediction errors at several hierarchical levels, which can determine the pleasurableness of the sequence
<xref ref-type="bibr" rid="b41">41</xref>
. The finding of better prediction skills in musicians in a very basic cueing task indicates that the prediction machinery used in musical processing is rooted in basic cognitive prediction mechanisms.</p>
<p>Additionally, we reported an overall advantage for musicians, who responded faster in all conditions and irrespective of modality (although no differences in accuracy were found in any of the tasks). Moreover, the group × target modality interaction suggested, to some extent, faster processing of auditory targets. On the one hand, this result is compatible with prominent accounts stating that musical training results in fine-tuning and increased efficiency and precision of the auditory system
<xref ref-type="bibr" rid="b13">13</xref>
, also generalizing to speech
<xref ref-type="bibr" rid="b42">42</xref>
<xref ref-type="bibr" rid="b43">43</xref>
. On the other hand, we report an advantage in prediction for musicians irrespective of cue and target modality, thus pointing to a non-selective effect.</p>
<p>In conclusion, this study suggests several avenues for future work. First and foremost, we did not investigate causality. Our data are correlational in nature and do not allow inferences about the effect of musical training on cognition. In fact, a plausible alternative interpretation is that people with better prediction skills become interested in music as a consequence of their natural abilities, and are presumably better suited to pursue music professionally. To disentangle the origin of such differences between musicians and non-musicians, future studies should administer musical training to naive subjects and measure prediction before and after training. In addition to testing for a causal relationship, this would allow investigating the amount of training required to induce measurable benefits, which might be particularly relevant in the context of longitudinal studies addressing the benefits of musical education.</p>
<p>Second, we did not distinguish between types of musical expertise or years of training, variables that were relevant in earlier research
<xref ref-type="bibr" rid="b44">44</xref>
. The extent of the advantage might depend on musical instrument, as well as vary as a function of years of training.</p>
<p>Third, our sample size was rather limited. Future studies should aim for a larger number of participants in each group to increase the power and generalizability of the results.</p>
<p>Finally, the role of prediction in cognitive processing has been widely studied in computational neuroscience (e.g., reinforcement learning
<xref ref-type="bibr" rid="b45">45</xref>
, predictive coding
<xref ref-type="bibr" rid="b26">26</xref>
<xref ref-type="bibr" rid="b31">31</xref>
<xref ref-type="bibr" rid="b46">46</xref>
). In these frameworks, prediction is central to perception, learning, memory, decision-making, and action selection. It is an exciting open question to what extent training prediction skills, as implemented in musical practice, may improve such domain-general abilities.</p>
</sec>
<sec disp-level="1">
<title>Methods</title>
<sec disp-level="2">
<title>Participants</title>
<p>Thirty subjects participated in the study (age range 17–33, M = 19.5, SD = 3.2), recruited among Ghent University students who earned credits for participation. The sample size was determined a priori, following earlier conventions in music research
<xref ref-type="bibr" rid="b11">11</xref>
<xref ref-type="bibr" rid="b18">18</xref>
<xref ref-type="bibr" rid="b21">21</xref>
. All participants provided written informed consent. Fifteen were selected for their musical expertise (nine males) according to the following requirements: a minimum of five years of playing a musical instrument; having followed formal musical training in a music school; practicing the instrument on a daily basis; and currently playing the instrument at the time of the study. Fifteen control participants were selected (five males), for whom the listed requirements served as exclusion criteria. Control participants and musicians did not differ in age (
<italic>t</italic>
<sub>(28)</sub>
 = 0.11,
<italic>p</italic>
 = 0.91). The experiment was conducted under the General Ethical Protocol for scientific research at the Department of Psychology and Educational Sciences of Ghent University, approved by the department’s ethical committee. The procedure was in accordance with the guidelines provided in this protocol.</p>
</sec>
<sec disp-level="2">
<title>Procedure</title>
<p>After providing informed consent, participants performed the main task (cross-modal cueing task), followed by two control tasks measuring verbal short-term memory (verbal span task) and visuo-spatial short-term memory (Corsi block tapping task). All tasks were programmed in E-prime 2.0 (Psychology Software Tools, Pittsburgh, PA) and presented on a 15″ computer screen. Headphones were used to present auditory stimuli.</p>
</sec>
<sec disp-level="2">
<title>Cross-modal cueing task</title>
<p>Each trial started with a fixation cross (see
<xref ref-type="fig" rid="f2">Fig. 2</xref>
). A first stimulus was presented as a cue (650 ms). The cue was followed by a target (650 ms), to which the participants had to respond as quickly and as accurately as possible, with a maximum response time limit of 1650 ms. In all trials, pressing the response key terminated the trial. Cue and target could be compatible or incompatible. In auditory-auditory trials, cue and target were auditory stimuli. Each of them could be a low pitch tone (800 Hz) or high pitch tone (1600 Hz, 650 ms). At the target, participants responded by indicating if the tone was low (left key press) or high (right key press). In compatible trials, the target matched the pitch of the cue, while in incompatible trials it did not. In visual-visual trials, two squares were presented, one to the left and one to the right of the fixation cross. The cue consisted of an arrow pointing left or right appearing in the center of the screen. The target stimulus consisted of an X, appearing either in the left square or in the right square. Participants had to indicate if the target stimulus X was on the left (left key press) or on the right (right key press). In compatible trials, the X appeared in the same direction as the arrow pointed, while in incompatible trials it appeared on the opposite side. In auditory-visual trials, the auditory cue was followed by the visual target. Trials were considered compatible when a low tone was followed by an X on the left, and a high tone by an X on the right. Trials were considered incompatible when a low tone was followed by an X on the right and when a high tone was followed by an X on the left. In visual-auditory trials the visual cue was followed by the auditory target. Trials were considered compatible when an arrow pointing left was followed by a low tone and an arrow pointing right by a high tone. Trials were considered incompatible when a left-pointing arrow was followed by a high tone and a right-pointing arrow by a low tone.</p>
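The compatibility rules across the four trial types can be summarized in a short sketch (hypothetical Python with illustrative stimulus labels; the task itself was programmed in E-prime):

# low/high = 800/1600 Hz tones; left/right = arrow direction or X position.
CROSS_MODAL_MATCH = {'low': 'left', 'high': 'right',
                     'left': 'low', 'right': 'high'}

def is_compatible(cue, target):
    if cue == target:                            # same-modality trials: matching feature
        return True
    return CROSS_MODAL_MATCH.get(cue) == target  # cross-modal: low<->left, high<->right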
<p>Cue modality and target modality were manipulated across blocks, yielding AA, AV, VA and VV blocks. Each of these block types occurred with 80% compatible and 20% incompatible trials (80/20 condition, 40 trials per block, 8 of which incompatible), or with 50% compatible and 50% incompatible trials (50/50 condition, 28 trials per block). This represents the crucial manipulation, as the 80/20 condition is characterized by a statistical structure that prompts prediction. This resulted in 8 blocks, repeated 4 times each, for a total of 32 blocks and 1088 trials. The order of presentation of these blocks was randomized, as well as the order of presentation of trials within a block.</p>
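As a quick check of the design arithmetic described above (illustrative):

# 4 block types (AA, AV, VA, VV) x 2 frequency conditions, each repeated 4 times.
blocks_80 = 4 * 4                                # 16 mainly compatible blocks, 40 trials each
blocks_50 = 4 * 4                                # 16 random blocks, 28 trials each
total_trials = blocks_80 * 40 + blocks_50 * 28
assert blocks_80 + blocks_50 == 32 and total_trials == 1088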
<p>To summarize, the design implemented the following factors: cue modality (auditory or visual), target modality (auditory or visual), compatibility frequency in the block (80% or 50%), and compatibility in the trial (compatible or incompatible). The task lasted about 45 minutes.</p>
</sec>
<sec disp-level="2">
<title>Verbal short-term memory task</title>
<p>To control for verbal short-term memory capacity, a letter span task was administered
<xref ref-type="bibr" rid="b47">47</xref>
. At the start of every trial, a blank screen was presented (1500 ms). A sequence of 4 consonants followed, each remaining on the screen for 1200 ms (inter-letter blank of 250 ms). After a 1500 ms retention period, participants were instructed to reproduce the sequence by pressing the corresponding keys on the keyboard in the correct order. The length of the string to be retained increased by one every time the participant correctly reported three sequences of the current length. The maximum string length was 9. Failing to reproduce a sequence on three consecutive trials terminated the task, and the length reached at that point constituted the span length for that participant. A first practice trial with feedback was presented at the beginning. The total task duration was 5 minutes on average.</p>
</sec>
<sec disp-level="2">
<title>Visuo-spatial short-term memory task</title>
<p>To control for visuo-spatial short-term memory capacity, a Corsi block tapping task was administered to all participants
<xref ref-type="bibr" rid="b48">48</xref>
. At the start of every trial, a grid of 9 grey-colored squares (35 × 35 mm) was presented (1200 ms). Three of the squares sequentially turned black (each for 1000 ms, with 500 ms in between). After a 1000 ms blank screen, participants were asked to report the order of appearance of the squares by clicking on them in the order in which they were presented. Correctly reproducing the order on three trials increased the number of squares by one. The maximum number of squares was 4. Failing to reproduce the order of presentation on three consecutive trials terminated the task, providing the span length for that participant. A first practice trial with feedback was presented at the beginning of the task. The total task duration was 5 minutes on average.</p>
</sec>
<sec disp-level="2">
<title>Analysis</title>
<p>First, accuracy on the cross-modal cueing task was analyzed. A repeated measures analysis of variance (rANOVA) was performed on the accuracy data, with between-subjects factor group (musicians, controls), and within-subjects factors cue modality (auditory, visual), target modality (auditory, visual), compatibility frequency in the block (80/20, 50/50), and compatibility in the trial (compatible, incompatible). Second, the reaction times (RTs) of the cross-modal cueing task were analyzed with a rANOVA using the same between- and within-subjects factors. Third, the scores on the short-term memory tasks were computed. Following conventions from the literature, in both tasks accuracy was calculated as the longest sequence correctly reproduced, multiplied by the total number of correctly reproduced sequences. For example, in the letter span task, a participant who correctly reproduced a 6-letter span and reproduced 11 spans correctly overall received an accuracy score of 66. Two-sample t-tests were performed on the verbal and visuo-spatial short-term memory scores to test for differences in short-term memory capacity between musicians and controls.</p>
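The span score described above can be computed as in this minimal sketch (a hypothetical helper, not the authors' code):

def span_score(correct_lengths):
    # Score = longest correctly reproduced sequence x total number of correct sequences.
    if not correct_lengths:
        return 0
    return max(correct_lengths) * len(correct_lengths)

# Example from the text: longest span 6, 11 sequences correct overall -> score 66.
assert span_score([4, 4, 4, 5, 5, 5, 6, 6, 6, 6, 6]) == 66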
</sec>
</sec>
<sec disp-level="1">
<title>Additional Information</title>
<p>
<bold>How to cite this article</bold>
: Vassena, E.
<italic>et al</italic>
. Unimodal and cross-modal prediction is enhanced in musicians.
<italic>Sci. Rep</italic>
.
<bold>6</bold>
, 25225; doi: 10.1038/srep25225 (2016).</p>
</sec>
</body>
<back>
<ack>
<p>We thank Esther De Loof for useful discussion, and Jean-Philippe Van Dijk for advice on the short-term memory tasks. The project was funded by Ghent University GOA grant BOF08/GOA011 and by the Ghent University Multidisciplinary Research Partnership “The integrative neuroscience of behavioral control”.</p>
</ack>
<ref-list>
<ref id="b1">
<mixed-citation publication-type="journal">
<name>
<surname>Cross</surname>
<given-names>I.</given-names>
</name>
<article-title>Music in the evolution of the mind</article-title>
.
<source>Trends Neurosci.</source>
<volume>24</volume>
,
<fpage>190</fpage>
(
<year>2001</year>
).</mixed-citation>
</ref>
<ref id="b2">
<mixed-citation publication-type="journal">
<name>
<surname>Mithen</surname>
<given-names>S.</given-names>
</name>
<article-title>Singing in the brain</article-title>
.
<source>New Sci.</source>
<volume>197</volume>
,
<fpage>38</fpage>
<lpage>39</lpage>
(
<year>2008</year>
).</mixed-citation>
</ref>
<ref id="b3">
<mixed-citation publication-type="journal">
<name>
<surname>Savage</surname>
<given-names>P. E.</given-names>
</name>
,
<name>
<surname>Brown</surname>
<given-names>S.</given-names>
</name>
,
<name>
<surname>Sakai</surname>
<given-names>E.</given-names>
</name>
&
<name>
<surname>Currie</surname>
<given-names>T. E.</given-names>
</name>
<article-title>Statistical universals reveal the structures and functions of human music</article-title>
.
<source>Proc. Natl. Acad. Sci. USA</source>
<volume>112</volume>
,
<fpage>8987</fpage>
<lpage>92</lpage>
(
<year>2015</year>
).
<pub-id pub-id-type="pmid">26124105</pub-id>
</mixed-citation>
</ref>
<ref id="b4">
<mixed-citation publication-type="journal">
<name>
<surname>Vuust</surname>
<given-names>P.</given-names>
</name>
,
<name>
<surname>Gebauer</surname>
<given-names>L. K.</given-names>
</name>
&
<name>
<surname>Witek</surname>
<given-names>M. A. G.</given-names>
</name>
<article-title>Neural underpinnings of music: the polyrhythmic brain</article-title>
.
<source>Adv. Exp. Med. Biol.</source>
<volume>829</volume>
,
<fpage>339</fpage>
<lpage>356</lpage>
(
<year>2014</year>
).
<pub-id pub-id-type="pmid">25358719</pub-id>
</mixed-citation>
</ref>
<ref id="b5">
<mixed-citation publication-type="journal">
<name>
<surname>Maes</surname>
<given-names>P.-J.</given-names>
</name>
,
<name>
<surname>Leman</surname>
<given-names>M.</given-names>
</name>
,
<name>
<surname>Palmer</surname>
<given-names>C.</given-names>
</name>
&
<name>
<surname>Wanderley</surname>
<given-names>M. M.</given-names>
</name>
<article-title>Action-based effects on music perception</article-title>
.
<source>Front. Psychol.</source>
<volume>4</volume>
,
<fpage>1008</fpage>
(
<year>2014</year>
).
<pub-id pub-id-type="pmid">24454299</pub-id>
</mixed-citation>
</ref>
<ref id="b6">
<mixed-citation publication-type="journal">
<name>
<surname>Zatorre</surname>
<given-names>R. J.</given-names>
</name>
,
<name>
<surname>Chen</surname>
<given-names>J. L.</given-names>
</name>
&
<name>
<surname>Penhune</surname>
<given-names>V. B.</given-names>
</name>
<article-title>When the brain plays music: auditory-motor interactions in music perception and production</article-title>
.
<source>Nat. Rev. Neurosci.</source>
<volume>8</volume>
,
<fpage>547</fpage>
<lpage>558</lpage>
(
<year>2007</year>
).
<pub-id pub-id-type="pmid">17585307</pub-id>
</mixed-citation>
</ref>
<ref id="b7">
<mixed-citation publication-type="journal">
<name>
<surname>Zatorre</surname>
<given-names>R.</given-names>
</name>
<article-title>Music, the food of neuroscience?</article-title>
<source>Nature</source>
<volume>434</volume>
,
<fpage>312</fpage>
<lpage>315</lpage>
(
<year>2005</year>
).
<pub-id pub-id-type="pmid">15772648</pub-id>
</mixed-citation>
</ref>
<ref id="b8">
<mixed-citation publication-type="journal">
<name>
<surname>Zatorre</surname>
<given-names>R. J.</given-names>
</name>
&
<name>
<surname>Salimpoor</surname>
<given-names>V. N.</given-names>
</name>
<article-title>From perception to pleasure: music and its neural substrates</article-title>
.
<source>Proc. Natl. Acad. Sci.</source>
<volume>110</volume>
,
<fpage>10430</fpage>
<lpage>10437</lpage>
(
<year>2013</year>
).
<pub-id pub-id-type="pmid">23754373</pub-id>
</mixed-citation>
</ref>
<ref id="b9">
<mixed-citation publication-type="journal">
<name>
<surname>Zuk</surname>
<given-names>J.</given-names>
</name>
,
<name>
<surname>Benjamin</surname>
<given-names>C.</given-names>
</name>
,
<name>
<surname>Kenyon</surname>
<given-names>A.</given-names>
</name>
&
<name>
<surname>Gaab</surname>
<given-names>N.</given-names>
</name>
<article-title>Behavioral and neural correlates of executive functioning in musicians and non-musicians</article-title>
.
<source>PloS One</source>
<volume>9</volume>
,
<fpage>e99868</fpage>
(
<year>2014</year>
).
<pub-id pub-id-type="pmid">24937544</pub-id>
</mixed-citation>
</ref>
<ref id="b10">
<mixed-citation publication-type="journal">
<name>
<surname>Herholz</surname>
<given-names>S. C.</given-names>
</name>
&
<name>
<surname>Zatorre</surname>
<given-names>R. J.</given-names>
</name>
<article-title>Musical Training as a Framework for Brain Plasticity: Behavior, Function, and Structure</article-title>
.
<source>Neuron</source>
<volume>76</volume>
,
<fpage>486</fpage>
<lpage>502</lpage>
(
<year>2012</year>
).
<pub-id pub-id-type="pmid">23141061</pub-id>
</mixed-citation>
</ref>
<ref id="b11">
<mixed-citation publication-type="journal">
<name>
<surname>Hodges</surname>
<given-names>D. A.</given-names>
</name>
,
<name>
<surname>Hairston</surname>
<given-names>W. D.</given-names>
</name>
&
<name>
<surname>Burdette</surname>
<given-names>J. H.</given-names>
</name>
<article-title>Aspects of multisensory perception: the integration of visual and auditory information in musical experiences</article-title>
.
<source>Ann. N. Y. Acad. Sci.</source>
<volume>1060</volume>
,
<fpage>175</fpage>
<lpage>185</lpage>
(
<year>2005</year>
).
<pub-id pub-id-type="pmid">16597762</pub-id>
</mixed-citation>
</ref>
<ref id="b12">
<mixed-citation publication-type="journal">
<name>
<surname>Carey</surname>
<given-names>D.</given-names>
</name>
<etal></etal>
.
<article-title>Generality and specificity in the effects of musical expertise on perception and cognition</article-title>
.
<source>Cognition</source>
<volume>137</volume>
,
<fpage>81</fpage>
<lpage>105</lpage>
(
<year>2015</year>
).
<pub-id pub-id-type="pmid">25618010</pub-id>
</mixed-citation>
</ref>
<ref id="b13">
<mixed-citation publication-type="journal">
<name>
<surname>Kraus</surname>
<given-names>N.</given-names>
</name>
&
<name>
<surname>Chandrasekaran</surname>
<given-names>B.</given-names>
</name>
<article-title>Music training for the development of auditory skills</article-title>
.
<source>Nat. Rev. Neurosci.</source>
<volume>11</volume>
,
<fpage>599</fpage>
<lpage>605</lpage>
(
<year>2010</year>
).
<pub-id pub-id-type="pmid">20648064</pub-id>
</mixed-citation>
</ref>
<ref id="b14">
<mixed-citation publication-type="other">
<name>
<surname>Lim</surname>
<given-names>A.</given-names>
</name>
&
<name>
<surname>Sinnett</surname>
<given-names>S.</given-names>
</name>
Exploring Visual Attention in Musicians: Temporal, Spatial and Capacity Considerations. In Carlson, L., Hölscher, C. & Shipley, T. (Eds.),
<italic>Proceedings of the 33rd Annual Conference of the Cognitive Science Society</italic>
(pp. 580–585). Austin, TX: Cognitive Science Society (2011).</mixed-citation>
</ref>
<ref id="b15">
<mixed-citation publication-type="journal">
<name>
<surname>Moreno</surname>
<given-names>S.</given-names>
</name>
&
<name>
<surname>Bidelman</surname>
<given-names>G. M.</given-names>
</name>
<article-title>Examining neural plasticity and cognitive benefit through the unique lens of musical training</article-title>
.
<source>Hear. Res.</source>
<volume>308</volume>
,
<fpage>84</fpage>
<lpage>97</lpage>
(
<year>2014</year>
).
<pub-id pub-id-type="pmid">24079993</pub-id>
</mixed-citation>
</ref>
<ref id="b16">
<mixed-citation publication-type="journal">
<name>
<surname>Rauscher</surname>
<given-names>F. H.</given-names>
</name>
,
<name>
<surname>Shaw</surname>
<given-names>G. L.</given-names>
</name>
&
<name>
<surname>Ky</surname>
<given-names>K. N.</given-names>
</name>
<article-title>Music and spatial task performance</article-title>
.
<source>Nature</source>
<volume>365</volume>
(6447),
<fpage>611</fpage>
(
<year>1993</year>
).
<pub-id pub-id-type="pmid">8413624</pub-id>
</mixed-citation>
</ref>
<ref id="b17">
<mixed-citation publication-type="journal">
<name>
<surname>Schellenberg</surname>
<given-names>E. G.</given-names>
</name>
<article-title>Cognitive Performance after listening to music: A review of the Mozart Effect</article-title>
. In
<name>
<surname>MacDonald</surname>
<given-names>R.</given-names>
</name>
,
<name>
<surname>Kreuz</surname>
<given-names>G.</given-names>
</name>
&
<name>
<surname>Mitchell</surname>
<given-names>L.</given-names>
</name>
<source>Music, health, and wellbeing</source>
<fpage>324</fpage>
<lpage>338</lpage>
. Oxford: Oxford University Press (
<year>2012</year>
).</mixed-citation>
</ref>
<ref id="b18">
<mixed-citation publication-type="journal">
<name>
<surname>Aagten-Murphy</surname>
<given-names>D.</given-names>
</name>
,
<name>
<surname>Cappagli</surname>
<given-names>G.</given-names>
</name>
&
<name>
<surname>Burr</surname>
<given-names>D.</given-names>
</name>
<article-title>Musical training generalises across modalities and reveals efficient and adaptive mechanisms for reproducing temporal intervals</article-title>
.
<source>Acta Psychol. (Amst.)</source>
<volume>147</volume>
,
<fpage>25</fpage>
<lpage>33</lpage>
(
<year>2014</year>
).
<pub-id pub-id-type="pmid">24184174</pub-id>
</mixed-citation>
</ref>
<ref id="b19">
<mixed-citation publication-type="journal">
<name>
<surname>Bishop</surname>
<given-names>L.</given-names>
</name>
&
<name>
<surname>Goebl</surname>
<given-names>W.</given-names>
</name>
<article-title>Context-specific effects of musical expertise on audiovisual integration</article-title>
.
<source>Front. Psychol.</source>
<volume>5</volume>
,
<fpage>1123</fpage>
(
<year>2014</year>
).
<pub-id pub-id-type="pmid">25324819</pub-id>
</mixed-citation>
</ref>
<ref id="b20">
<mixed-citation publication-type="journal">
<name>
<surname>Helmbold</surname>
<given-names>N.</given-names>
</name>
,
<name>
<surname>Rammsayer</surname>
<given-names>T.</given-names>
</name>
&
<name>
<surname>Altenmüller</surname>
<given-names>E.</given-names>
</name>
<article-title>Differences in primary mental abilities between musicians and nonmusicians</article-title>
.
<source>J. Individ. Differ.</source>
<volume>26</volume>
,
<fpage>74</fpage>
<lpage>85</lpage>
(
<year>2005</year>
).</mixed-citation>
</ref>
<ref id="b21">
<mixed-citation publication-type="journal">
<name>
<surname>Patston</surname>
<given-names>L. L.</given-names>
</name>
,
<name>
<surname>Hogg</surname>
<given-names>S. L.</given-names>
</name>
&
<name>
<surname>Tippett</surname>
<given-names>L. J.</given-names>
</name>
<article-title>Attention in musicians is more bilateral than in non-musicians</article-title>
.
<source>Laterality</source>
<volume>12</volume>
,
<fpage>262</fpage>
<lpage>272</lpage>
(
<year>2007</year>
).
<pub-id pub-id-type="pmid">17454575</pub-id>
</mixed-citation>
</ref>
<ref id="b22">
<mixed-citation publication-type="journal">
<name>
<surname>Moreno</surname>
<given-names>S.</given-names>
</name>
<etal></etal>
.
<article-title>Musical training influences linguistic abilities in 8-year-old children: more evidence for brain plasticity</article-title>
.
<source>Cereb. Cortex</source>
<volume>19</volume>
,
<fpage>712</fpage>
<lpage>723</lpage>
(
<year>2009</year>
).
<pub-id pub-id-type="pmid">18832336</pub-id>
</mixed-citation>
</ref>
<ref id="b23">
<mixed-citation publication-type="journal">
<name>
<surname>Strait</surname>
<given-names>D. L.</given-names>
</name>
,
<name>
<surname>Kraus</surname>
<given-names>N.</given-names>
</name>
,
<name>
<surname>Parbery-Clark</surname>
<given-names>A.</given-names>
</name>
&
<name>
<surname>Ashley</surname>
<given-names>R.</given-names>
</name>
<article-title>Musical experience shapes top-down auditory mechanisms: evidence from masking and auditory attention performance</article-title>
.
<source>Hear. Res.</source>
<volume>261</volume>
,
<fpage>22</fpage>
<lpage>29</lpage>
(
<year>2010</year>
).
<pub-id pub-id-type="pmid">20018234</pub-id>
</mixed-citation>
</ref>
<ref id="b24">
<mixed-citation publication-type="journal">
<name>
<surname>Tierney</surname>
<given-names>A. T.</given-names>
</name>
,
<name>
<surname>Bergeson-Dana</surname>
<given-names>T. R.</given-names>
</name>
&
<name>
<surname>Pisoni</surname>
<given-names>D. B.</given-names>
</name>
<article-title>Effects of early musical experience on auditory sequence memory</article-title>
.
<source>Empir. Musicol. Rev.</source>
<volume>3</volume>
,
<fpage>178</fpage>
(
<year>2008</year>
).
<pub-id pub-id-type="pmid">21394231</pub-id>
</mixed-citation>
</ref>
<ref id="b25">
<mixed-citation publication-type="journal">
<name>
<surname>Lee</surname>
<given-names>H.</given-names>
</name>
&
<name>
<surname>Noppeney</surname>
<given-names>U.</given-names>
</name>
<article-title>Long-term music training tunes how the brain temporally binds signals from multiple senses</article-title>
.
<source>Proc. Natl. Acad. Sci.</source>
<volume>108</volume>
,
<fpage>E1441</fpage>
<lpage>E1450</lpage>
(
<year>2011</year>
).
<pub-id pub-id-type="pmid">22114191</pub-id>
</mixed-citation>
</ref>
<ref id="b26">
<mixed-citation publication-type="journal">
<name>
<surname>Friston</surname>
<given-names>K.</given-names>
</name>
&
<name>
<surname>Kiebel</surname>
<given-names>S.</given-names>
</name>
<article-title>Cortical circuits for perceptual inference</article-title>
.
<source>Neural Netw.</source>
<volume>22</volume>
,
<fpage>1093</fpage>
<lpage>1104</lpage>
(
<year>2009</year>
).
<pub-id pub-id-type="pmid">19635656</pub-id>
</mixed-citation>
</ref>
<ref id="b27">
<mixed-citation publication-type="journal">
<name>
<surname>Lee</surname>
<given-names>H.</given-names>
</name>
&
<name>
<surname>Noppeney</surname>
<given-names>U.</given-names>
</name>
<article-title>Temporal prediction errors in visual and auditory cortices</article-title>
.
<source>Curr. Biol.</source>
<volume>24</volume>
,
<fpage>R309</fpage>
<lpage>R310</lpage>
(
<year>2014</year>
).
<pub-id pub-id-type="pmid">24735850</pub-id>
</mixed-citation>
</ref>
<ref id="b28">
<mixed-citation publication-type="journal">
<name>
<surname>den Ouden</surname>
<given-names>H. E. M.</given-names>
</name>
,
<name>
<surname>Friston</surname>
<given-names>K. J.</given-names>
</name>
,
<name>
<surname>Daw</surname>
<given-names>N. D.</given-names>
</name>
,
<name>
<surname>McIntosh</surname>
<given-names>A. R.</given-names>
</name>
&
<name>
<surname>Stephan</surname>
<given-names>K. E.</given-names>
</name>
<article-title>A dual role for prediction error in associative learning</article-title>
.
<source>Cereb. Cortex</source>
<volume>19</volume>
,
<fpage>1175</fpage>
<lpage>1185</lpage>
(
<year>2009</year>
).
<pub-id pub-id-type="pmid">18820290</pub-id>
</mixed-citation>
</ref>
<ref id="b29">
<mixed-citation publication-type="journal">
<name>
<surname>Vassena</surname>
<given-names>E.</given-names>
</name>
,
<name>
<surname>Krebs</surname>
<given-names>R. M.</given-names>
</name>
,
<name>
<surname>Silvetti</surname>
<given-names>M.</given-names>
</name>
,
<name>
<surname>Fias</surname>
<given-names>W.</given-names>
</name>
&
<name>
<surname>Verguts</surname>
<given-names>T.</given-names>
</name>
<article-title>Dissociating contributions of ACC and vmPFC in reward prediction, outcome, and choice</article-title>
.
<source>Neuropsychologia</source>
<volume>59</volume>
,
<fpage>112</fpage>
<lpage>123</lpage>
(
<year>2014</year>
).
<pub-id pub-id-type="pmid">24813149</pub-id>
</mixed-citation>
</ref>
<ref id="b30">
<mixed-citation publication-type="journal">
<name>
<surname>Clark</surname>
<given-names>A.</given-names>
</name>
<article-title>Whatever next? Predictive brains, situated agents, and the future of cognitive science</article-title>
.
<source>Behav. Brain Sci.</source>
<volume>36</volume>
,
<fpage>181</fpage>
<lpage>204</lpage>
(
<year>2013</year>
).
<pub-id pub-id-type="pmid">23663408</pub-id>
</mixed-citation>
</ref>
<ref id="b31">
<mixed-citation publication-type="journal">
<name>
<surname>Summerfield</surname>
<given-names>C.</given-names>
</name>
&
<name>
<surname>Egner</surname>
<given-names>T.</given-names>
</name>
<article-title>Expectation (and attention) in visual cognition</article-title>
.
<source>Trends Cogn. Sci.</source>
<volume>13</volume>
,
<fpage>403</fpage>
<lpage>409</lpage>
(
<year>2009</year>
).
<pub-id pub-id-type="pmid">19716752</pub-id>
</mixed-citation>
</ref>
<ref id="b32">
<mixed-citation publication-type="journal">
<name>
<surname>Vuust</surname>
<given-names>P.</given-names>
</name>
&
<name>
<surname>Witek</surname>
<given-names>M. A. G.</given-names>
</name>
<article-title>Rhythmic complexity and predictive coding: a novel approach to modeling rhythm and meter perception in music</article-title>
.
<source>Front. Psychol.</source>
<volume>5</volume>
,
<fpage>1111</fpage>
(
<year>2014</year>
).</mixed-citation>
</ref>
<ref id="b33">
<mixed-citation publication-type="journal">
<name>
<surname>Maes</surname>
<given-names>P.-J.</given-names>
</name>
<article-title>Sensorimotor Grounding of Musical Embodiment and the Role of Prediction: A Review</article-title>
.
<source>Front. Psychol.</source>
<volume>7</volume>
,
<fpage>308</fpage>
doi:
<pub-id pub-id-type="doi">10.3389/fpsyg.2016.00308</pub-id>
(
<year>2016</year>
).</mixed-citation>
</ref>
<ref id="b34">
<mixed-citation publication-type="journal">
<name>
<surname>Schaefer</surname>
<given-names>R. S.</given-names>
</name>
,
<name>
<surname>Overy</surname>
<given-names>K.</given-names>
</name>
&
<name>
<surname>Nelson</surname>
<given-names>P.</given-names>
</name>
<article-title>Affect and non-uniform characteristics of predictive processing in musical behaviour</article-title>
.
<source>Behav. Brain Sci.</source>
<volume>36</volume>
,
<fpage>226</fpage>
<lpage>227</lpage>
(
<year>2013</year>
).
<pub-id pub-id-type="pmid">23663552</pub-id>
</mixed-citation>
</ref>
<ref id="b35">
<mixed-citation publication-type="journal">
<name>
<surname>Kuchenbuch</surname>
<given-names>A.</given-names>
</name>
,
<name>
<surname>Paraskevopoulos</surname>
<given-names>E.</given-names>
</name>
,
<name>
<surname>Herholz</surname>
<given-names>S. C.</given-names>
</name>
&
<name>
<surname>Pantev</surname>
<given-names>C.</given-names>
</name>
<article-title>Effects of musical training and event probabilities on encoding of complex tone patterns</article-title>
.
<source>BMC Neurosci.</source>
<volume>14</volume>
,
<fpage>51</fpage>
(
<year>2013</year>
).
<pub-id pub-id-type="pmid">23617597</pub-id>
</mixed-citation>
</ref>
<ref id="b36">
<mixed-citation publication-type="journal">
<name>
<surname>Oechslin</surname>
<given-names>M. S.</given-names>
</name>
,
<name>
<surname>Van De Ville</surname>
<given-names>D.</given-names>
</name>
,
<name>
<surname>Lazeyras</surname>
<given-names>F.</given-names>
</name>
,
<name>
<surname>Hauert</surname>
<given-names>C.-A.</given-names>
</name>
&
<name>
<surname>James</surname>
<given-names>C. E.</given-names>
</name>
<article-title>Degree of Musical Expertise Modulates Higher Order Brain Functioning</article-title>
.
<source>Cereb. Cortex</source>
<volume>23</volume>
,
<fpage>2213</fpage>
<lpage>2224</lpage>
(
<year>2013</year>
).
<pub-id pub-id-type="pmid">22832388</pub-id>
</mixed-citation>
</ref>
<ref id="b37">
<mixed-citation publication-type="journal">
<name>
<surname>Vuust</surname>
<given-names>P.</given-names>
</name>
,
<name>
<surname>Ostergaard</surname>
<given-names>L.</given-names>
</name>
,
<name>
<surname>Pallesen</surname>
<given-names>K. J.</given-names>
</name>
,
<name>
<surname>Bailey</surname>
<given-names>C.</given-names>
</name>
&
<name>
<surname>Roepstorff</surname>
<given-names>A.</given-names>
</name>
<article-title>Predictive coding of music – Brain responses to rhythmic incongruity</article-title>
.
<source>Cortex</source>
<volume>45</volume>
,
<fpage>80</fpage>
<lpage>92</lpage>
(
<year>2009</year>
).
<pub-id pub-id-type="pmid">19054506</pub-id>
</mixed-citation>
</ref>
<ref id="b38">
<mixed-citation publication-type="journal">
<name>
<surname>Maidhof</surname>
<given-names>C.</given-names>
</name>
,
<name>
<surname>Rieger</surname>
<given-names>M.</given-names>
</name>
,
<name>
<surname>Prinz</surname>
<given-names>W.</given-names>
</name>
&
<name>
<surname>Koelsch</surname>
<given-names>S.</given-names>
</name>
<article-title>Nobody Is Perfect: ERP Effects Prior to Performance Errors in Musicians Indicate Fast Monitoring Processes</article-title>
.
<source>PLOS ONE</source>
<volume>4</volume>
,
<fpage>e5032</fpage>
(
<year>2009</year>
).
<pub-id pub-id-type="pmid">19337379</pub-id>
</mixed-citation>
</ref>
<ref id="b39">
<mixed-citation publication-type="journal">
<name>
<surname>Wilcox</surname>
<given-names>R. R.</given-names>
</name>
&
<name>
<surname>Keselman</surname>
<given-names>H. J.</given-names>
</name>
<article-title>Modern robust data analysis methods: measures of central tendency</article-title>
.
<source>Psychol. Methods</source>
<volume>8</volume>
,
<fpage>254</fpage>
<lpage>274</lpage>
(
<year>2003</year>
).
<pub-id pub-id-type="pmid">14596490</pub-id>
</mixed-citation>
</ref>
<ref id="b40">
<mixed-citation publication-type="journal">
<name>
<surname>Fischer</surname>
<given-names>R.</given-names>
</name>
,
<name>
<surname>Dreisbach</surname>
<given-names>G.</given-names>
</name>
&
<name>
<surname>Goschke</surname>
<given-names>T.</given-names>
</name>
<article-title>Context-sensitive adjustments of cognitive control: conflict-adaptation effects are modulated by processing demands of the ongoing task</article-title>
.
<source>J. Exp. Psychol. Learn. Mem. Cogn.</source>
<volume>34</volume>
,
<fpage>712</fpage>
<lpage>718</lpage>
(
<year>2008</year>
).
<pub-id pub-id-type="pmid">18444768</pub-id>
</mixed-citation>
</ref>
<ref id="b41">
<mixed-citation publication-type="journal">
<name>
<surname>Schmidhuber</surname>
<given-names>J.</given-names>
</name>
In
<source>Anticipatory Behavior in Adaptive Learning Systems</source>
(eds.
<name>
<surname>Pezzulo</surname>
<given-names>G.</given-names>
</name>
,
<name>
<surname>Butz</surname>
<given-names>M. V.</given-names>
</name>
,
<name>
<surname>Sigaud</surname>
<given-names>O.</given-names>
</name>
&
<name>
<surname>Baldassarre</surname>
<given-names>G.</given-names>
</name>
)
<fpage>48</fpage>
<lpage>76</lpage>
(Springer Berlin Heidelberg,
<year>2009</year>
).</mixed-citation>
</ref>
<ref id="b42">
<mixed-citation publication-type="journal">
<name>
<surname>Patel</surname>
<given-names>A. D.</given-names>
</name>
<article-title>The OPERA hypothesis: assumptions and clarifications</article-title>
.
<source>Ann. N. Y. Acad. Sci.</source>
<volume>1252</volume>
,
<fpage>124</fpage>
<lpage>128</lpage>
(
<year>2012</year>
).
<pub-id pub-id-type="pmid">22524349</pub-id>
</mixed-citation>
</ref>
<ref id="b43">
<mixed-citation publication-type="journal">
<name>
<surname>Patel</surname>
<given-names>A. D.</given-names>
</name>
<article-title>Why would Musical Training Benefit the Neural Encoding of Speech? The OPERA Hypothesis</article-title>
.
<source>Front. Psychol.</source>
<volume>2</volume>
,
<fpage>142</fpage>
(
<year>2011</year>
).
<pub-id pub-id-type="pmid">21747773</pub-id>
</mixed-citation>
</ref>
<ref id="b44">
<mixed-citation publication-type="journal">
<name>
<surname>Cameron</surname>
<given-names>D. J.</given-names>
</name>
&
<name>
<surname>Grahn</surname>
<given-names>J. A.</given-names>
</name>
<article-title>Neuroscientific investigations of musical rhythm</article-title>
.
<source>Acoust. Aust.</source>
<volume>42</volume>
,
<fpage>111</fpage>
(
<year>2014</year>
).</mixed-citation>
</ref>
<ref id="b45">
<mixed-citation publication-type="journal">
<name>
<surname>O’Doherty</surname>
<given-names>J. P.</given-names>
</name>
,
<name>
<surname>Dayan</surname>
<given-names>P.</given-names>
</name>
,
<name>
<surname>Friston</surname>
<given-names>K.</given-names>
</name>
,
<name>
<surname>Critchley</surname>
<given-names>H.</given-names>
</name>
&
<name>
<surname>Dolan</surname>
<given-names>R. J.</given-names>
</name>
<article-title>Temporal difference models and reward-related learning in the human brain</article-title>
.
<source>Neuron</source>
<volume>38</volume>
,
<fpage>329</fpage>
<lpage>337</lpage>
(
<year>2003</year>
).
<pub-id pub-id-type="pmid">12718865</pub-id>
</mixed-citation>
</ref>
<ref id="b46">
<mixed-citation publication-type="journal">
<name>
<surname>Wacongne</surname>
<given-names>C.</given-names>
</name>
,
<name>
<surname>Changeux</surname>
<given-names>J.-P.</given-names>
</name>
&
<name>
<surname>Dehaene</surname>
<given-names>S.</given-names>
</name>
<article-title>A neuronal model of predictive coding accounting for the mismatch negativity</article-title>
.
<source>J. Neurosci.</source>
<volume>32</volume>
,
<fpage>3665</fpage>
<lpage>3678</lpage>
(
<year>2012</year>
).
<pub-id pub-id-type="pmid">22423089</pub-id>
</mixed-citation>
</ref>
<ref id="b47">
<mixed-citation publication-type="journal">
<name>
<surname>Daneman</surname>
<given-names>M.</given-names>
</name>
&
<name>
<surname>Carpenter</surname>
<given-names>P. A.</given-names>
</name>
<article-title>Individual differences in working memory and reading</article-title>
.
<source>J. Verbal Learn. Verbal Behav.</source>
<volume>19</volume>
,
<fpage>450</fpage>
<lpage>466</lpage>
(
<year>1980</year>
).</mixed-citation>
</ref>
<ref id="b48">
<mixed-citation publication-type="journal">
<name>
<surname>Corsi</surname>
<given-names>P.</given-names>
</name>
<source>Human memory and the medial temporal region of the brain.</source>
(McGill University,
<year>1972</year>
).</mixed-citation>
</ref>
</ref-list>
<fn-group>
<fn>
<p>
<bold>Author Contributions</bold>
All authors developed the study concept and contributed to the experimental design. E.V., K.K. and T.V. drafted the manuscript. J.L. conducted data collection. E.V. and J.L. analyzed the data. All authors approved the final version of the manuscript for submission.</p>
</fn>
</fn-group>
</back>
<floats-group>
<fig id="f1">
<label>Figure 1</label>
<caption>
<title>Group × compatibility frequency × compatibility interaction.</title>
<p>(
<bold>a</bold>
) Average RTs for musicians in compatible (C) and incompatible (IC) trials, as a function of compatibility frequency in the block (50/50, 80/20). (
<bold>b</bold>
) Average RTs for controls in compatible (C) and incompatible (IC) trials, as a function of compatibility frequency in the block (50/50, 80/20). Error bars represent one standard error of the mean.</p>
</caption>
<graphic xlink:href="srep25225-f1"></graphic>
</fig>
<fig id="f2">
<label>Figure 2</label>
<caption>
<title>Task structure.</title>
<p>(
<bold>a</bold>
) Task timing with an example of the two unimodal trial types: auditory-auditory (AA) with auditory cue and auditory target (tones); visual-visual (VV) with a visual cue (arrow) and visual target (X). (
<bold>b</bold>
) Example of the two cross-modal trial types: auditory-visual (AV) with auditory cue (tone) and visual target (X); visual-auditory (VA) with visual cue (arrow) and auditory target (tone). (
<bold>c</bold>
) Cue-target combinations and compatibility for AA and AV trials. From left to right: Auditory cues (low tone 800 Hz, high tone 1600 Hz); compatible (C) and incompatible (IC) auditory targets (AA trial); compatible (C) and incompatible (IC) visual targets (AV trial). (
<bold>d</bold>
) Cue-target combinations and compatibility for VV and VA trials. From left to right: Visual cues (left- or right-pointing arrow); compatible (C) and incompatible (IC) visual targets (VV trial); compatible (C) and incompatible (IC) auditory targets (VA trial).</p>
</caption>
<graphic xlink:href="srep25225-f2"></graphic>
</fig>
</floats-group>
</pmc>
<affiliations>
<list>
<country>
<li>Belgique</li>
</country>
</list>
<tree>
<country name="Belgique">
<noRegion>
<name sortKey="Vassena, Eliana" sort="Vassena, Eliana" uniqKey="Vassena E" first="Eliana" last="Vassena">Eliana Vassena</name>
</noRegion>
<name sortKey="Kochman, Katty" sort="Kochman, Katty" uniqKey="Kochman K" first="Katty" last="Kochman">Katty Kochman</name>
<name sortKey="Latomme, Julie" sort="Latomme, Julie" uniqKey="Latomme J" first="Julie" last="Latomme">Julie Latomme</name>
<name sortKey="Verguts, Tom" sort="Verguts, Tom" uniqKey="Verguts T" first="Tom" last="Verguts">Tom Verguts</name>
</country>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Pmc/Checkpoint
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000010 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd -nk 000010 | SxmlIndent | more

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Pmc
   |étape=   Checkpoint
   |type=    RBID
   |clé=     PMC:4855230
   |texte=   Unimodal and cross-modal prediction is enhanced in musicians
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/RBID.i   -Sk "pubmed:27142627" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024