Exploration server on music in Saarland

Warning: this site is under development!
Warning: this site was generated automatically from raw corpora.
The information has therefore not been validated.

Pleasurable music affects reinforcement learning according to the listener

Internal identifier: 000145 (Pmc/Corpus); previous: 000144; next: 000146

Pleasurable music affects reinforcement learning according to the listener

Auteurs : Benjamin P. Gold ; Michael J. Frank ; Brigitte Bogert ; Elvira Brattico

Source:

RBID: PMC:3748532

Abstract

Mounting evidence links the enjoyment of music to brain areas implicated in emotion and the dopaminergic reward system. In particular, dopamine release in the ventral striatum seems to play a major role in the rewarding aspect of music listening. Striatal dopamine also influences reinforcement learning, such that subjects with greater dopamine efficacy learn better to approach rewards while those with lesser dopamine efficacy learn better to avoid punishments. In this study, we explored the practical implications of musical pleasure through its ability to facilitate reinforcement learning via non-pharmacological dopamine elicitation. Subjects from a wide variety of musical backgrounds chose a pleasurable and a neutral piece of music from an experimenter-compiled database, and then listened to one or both of these pieces (according to pseudo-random group assignment) as they performed a reinforcement learning task dependent on dopamine transmission. We assessed musical backgrounds as well as typical listening patterns with the new Helsinki Inventory of Music and Affective Behaviors (HIMAB), and separately investigated behavior for the training and test phases of the learning task. Subjects with more musical experience trained better with neutral music and tested better with pleasurable music, while those with less musical experience exhibited the opposite effect. HIMAB results regarding listening behaviors and subjective music ratings indicate that these effects arose from different listening styles: namely, more affective listening in non-musicians and more analytical listening in musicians. In conclusion, musical pleasure was able to influence task performance, and the shape of this effect depended on group and individual factors. These findings have implications in affective neuroscience, neuroaesthetics, learning, and music therapy.


Url:
DOI: 10.3389/fpsyg.2013.00541
PubMed: 23970875
PubMed Central: 3748532

Links to Exploration step

PMC:3748532

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Pleasurable music affects reinforcement learning according to the listener</title>
<author>
<name sortKey="Gold, Benjamin P" sort="Gold, Benjamin P" uniqKey="Gold B" first="Benjamin P." last="Gold">Benjamin P. Gold</name>
<affiliation>
<nlm:aff id="aff1">
<institution>Cognitive Brain Research Unit, Institute of Behavioral Sciences, University of Helsinki</institution>
<country>Helsinki, Finland</country>
</nlm:aff>
</affiliation>
<affiliation>
<nlm:aff id="aff2">
<institution>Department of Music, Finnish Center of Excellence in Interdisciplinary Music Research, University of Jyväskylä</institution>
<country>Jyväskylä, Finland</country>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Frank, Michael J" sort="Frank, Michael J" uniqKey="Frank M" first="Michael J." last="Frank">Michael J. Frank</name>
<affiliation>
<nlm:aff id="aff3">
<institution>Department of Cognitive, Linguistic and Psychological Sciences, Brown University</institution>
<country>Providence, RI, USA</country>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Bogert, Brigitte" sort="Bogert, Brigitte" uniqKey="Bogert B" first="Brigitte" last="Bogert">Brigitte Bogert</name>
<affiliation>
<nlm:aff id="aff1">
<institution>Cognitive Brain Research Unit, Institute of Behavioral Sciences, University of Helsinki</institution>
<country>Helsinki, Finland</country>
</nlm:aff>
</affiliation>
<affiliation>
<nlm:aff id="aff2">
<institution>Department of Music, Finnish Center of Excellence in Interdisciplinary Music Research, University of Jyväskylä</institution>
<country>Jyväskylä, Finland</country>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Brattico, Elvira" sort="Brattico, Elvira" uniqKey="Brattico E" first="Elvira" last="Brattico">Elvira Brattico</name>
<affiliation>
<nlm:aff id="aff1">
<institution>Cognitive Brain Research Unit, Institute of Behavioral Sciences, University of Helsinki</institution>
<country>Helsinki, Finland</country>
</nlm:aff>
</affiliation>
<affiliation>
<nlm:aff id="aff2">
<institution>Department of Music, Finnish Center of Excellence in Interdisciplinary Music Research, University of Jyväskylä</institution>
<country>Jyväskylä, Finland</country>
</nlm:aff>
</affiliation>
<affiliation>
<nlm:aff id="aff4">
<institution>Brain and Mind Laboratory, Department of Biomedical Engineering and Computational Science, Aalto University</institution>
<country>Espoo, Finland</country>
</nlm:aff>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">23970875</idno>
<idno type="pmc">3748532</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3748532</idno>
<idno type="RBID">PMC:3748532</idno>
<idno type="doi">10.3389/fpsyg.2013.00541</idno>
<date when="2013">2013</date>
<idno type="wicri:Area/Pmc/Corpus">000145</idno>
<idno type="wicri:explorRef" wicri:stream="Pmc" wicri:step="Corpus" wicri:corpus="PMC">000145</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Pleasurable music affects reinforcement learning according to the listener</title>
<author>
<name sortKey="Gold, Benjamin P" sort="Gold, Benjamin P" uniqKey="Gold B" first="Benjamin P." last="Gold">Benjamin P. Gold</name>
<affiliation>
<nlm:aff id="aff1">
<institution>Cognitive Brain Research Unit, Institute of Behavioral Sciences, University of Helsinki</institution>
<country>Helsinki, Finland</country>
</nlm:aff>
</affiliation>
<affiliation>
<nlm:aff id="aff2">
<institution>Department of Music, Finnish Center of Excellence in Interdisciplinary Music Research, University of Jyväskylä</institution>
<country>Jyväskylä, Finland</country>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Frank, Michael J" sort="Frank, Michael J" uniqKey="Frank M" first="Michael J." last="Frank">Michael J. Frank</name>
<affiliation>
<nlm:aff id="aff3">
<institution>Department of Cognitive, Linguistic and Psychological Sciences, Brown University</institution>
<country>Providence, RI, USA</country>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Bogert, Brigitte" sort="Bogert, Brigitte" uniqKey="Bogert B" first="Brigitte" last="Bogert">Brigitte Bogert</name>
<affiliation>
<nlm:aff id="aff1">
<institution>Cognitive Brain Research Unit, Institute of Behavioral Sciences, University of Helsinki</institution>
<country>Helsinki, Finland</country>
</nlm:aff>
</affiliation>
<affiliation>
<nlm:aff id="aff2">
<institution>Department of Music, Finnish Center of Excellence in Interdisciplinary Music Research, University of Jyväskylä</institution>
<country>Jyväskylä, Finland</country>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Brattico, Elvira" sort="Brattico, Elvira" uniqKey="Brattico E" first="Elvira" last="Brattico">Elvira Brattico</name>
<affiliation>
<nlm:aff id="aff1">
<institution>Cognitive Brain Research Unit, Institute of Behavioral Sciences, University of Helsinki</institution>
<country>Helsinki, Finland</country>
</nlm:aff>
</affiliation>
<affiliation>
<nlm:aff id="aff2">
<institution>Department of Music, Finnish Center of Excellence in Interdisciplinary Music Research, University of Jyväskylä</institution>
<country>Jyväskylä, Finland</country>
</nlm:aff>
</affiliation>
<affiliation>
<nlm:aff id="aff4">
<institution>Brain and Mind Laboratory, Department of Biomedical Engineering and Computational Science, Aalto University</institution>
<country>Espoo, Finland</country>
</nlm:aff>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Frontiers in Psychology</title>
<idno type="eISSN">1664-1078</idno>
<imprint>
<date when="2013">2013</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>Mounting evidence links the enjoyment of music to brain areas implicated in emotion and the dopaminergic reward system. In particular, dopamine release in the ventral striatum seems to play a major role in the rewarding aspect of music listening. Striatal dopamine also influences reinforcement learning, such that subjects with greater dopamine efficacy learn better to approach rewards while those with lesser dopamine efficacy learn better to avoid punishments. In this study, we explored the practical implications of musical pleasure through its ability to facilitate reinforcement learning via non-pharmacological dopamine elicitation. Subjects from a wide variety of musical backgrounds chose a pleasurable and a neutral piece of music from an experimenter-compiled database, and then listened to one or both of these pieces (according to pseudo-random group assignment) as they performed a reinforcement learning task dependent on dopamine transmission. We assessed musical backgrounds as well as typical listening patterns with the new Helsinki Inventory of Music and Affective Behaviors (HIMAB), and separately investigated behavior for the training and test phases of the learning task. Subjects with more musical experience trained better with neutral music and tested better with pleasurable music, while those with less musical experience exhibited the opposite effect. HIMAB results regarding listening behaviors and subjective music ratings indicate that these effects arose from different listening styles: namely, more affective listening in non-musicians and more analytical listening in musicians. In conclusion, musical pleasure was able to influence task performance, and the shape of this effect depended on group and individual factors. These findings have implications in affective neuroscience, neuroaesthetics, learning, and music therapy.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Ashby, F G" uniqKey="Ashby F">F. G. Ashby</name>
</author>
<author>
<name sortKey="Isen, A M" uniqKey="Isen A">A. M. Isen</name>
</author>
<author>
<name sortKey="Turken, A U" uniqKey="Turken A">A. U. Turken</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Badre, D" uniqKey="Badre D">D. Badre</name>
</author>
<author>
<name sortKey="Frank, M J" uniqKey="Frank M">M. J. Frank</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bengtsson, S" uniqKey="Bengtsson S">S. Bengtsson</name>
</author>
<author>
<name sortKey="Nagy, Z" uniqKey="Nagy Z">Z. Nagy</name>
</author>
<author>
<name sortKey="Skare, S" uniqKey="Skare S">S. Skare</name>
</author>
<author>
<name sortKey="Forsman, L" uniqKey="Forsman L">L. Forsman</name>
</author>
<author>
<name sortKey="Forssberg, H" uniqKey="Forssberg H">H. Forssberg</name>
</author>
<author>
<name sortKey="Ullen, F" uniqKey="Ullen F">F. Ullén</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Berridge, K C" uniqKey="Berridge K">K. C. Berridge</name>
</author>
<author>
<name sortKey="Kringelbach, M L" uniqKey="Kringelbach M">M. L. Kringelbach</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bharucha, J J" uniqKey="Bharucha J">J. J. Bharucha</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bigand, E" uniqKey="Bigand E">E. Bigand</name>
</author>
<author>
<name sortKey="Poulin Charronnat, B" uniqKey="Poulin Charronnat B">B. Poulin-Charronnat</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Blood, A J" uniqKey="Blood A">A. J. Blood</name>
</author>
<author>
<name sortKey="Zatorre, R J" uniqKey="Zatorre R">R. J. Zatorre</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Blood, A J" uniqKey="Blood A">A. J. Blood</name>
</author>
<author>
<name sortKey="Zatorre, R J" uniqKey="Zatorre R">R. J. Zatorre</name>
</author>
<author>
<name sortKey="Bermudez, P" uniqKey="Bermudez P">P. Bermudez</name>
</author>
<author>
<name sortKey="Evans, A C" uniqKey="Evans A">A. C. Evans</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brattico, E" uniqKey="Brattico E">E. Brattico</name>
</author>
<author>
<name sortKey="Pearce, M" uniqKey="Pearce M">M. Pearce</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brown, S" uniqKey="Brown S">S. Brown</name>
</author>
<author>
<name sortKey="Gao, X" uniqKey="Gao X">X. Gao</name>
</author>
<author>
<name sortKey="Tisdelle, L" uniqKey="Tisdelle L">L. Tisdelle</name>
</author>
<author>
<name sortKey="Eickhoff, S B" uniqKey="Eickhoff S">S. B. Eickhoff</name>
</author>
<author>
<name sortKey="Liotti, M" uniqKey="Liotti M">M. Liotti</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Caldu, X" uniqKey="Caldu X">X. Caldú</name>
</author>
<author>
<name sortKey="Vendrell, P" uniqKey="Vendrell P">P. Vendrell</name>
</author>
<author>
<name sortKey="Bartres Faz, D" uniqKey="Bartres Faz D">D. Bartrés-Faz</name>
</author>
<author>
<name sortKey="Clemente, I" uniqKey="Clemente I">I. Clemente</name>
</author>
<author>
<name sortKey="Bargall, N" uniqKey="Bargall N">N. Bargalló</name>
</author>
<author>
<name sortKey="Jurado, M A" uniqKey="Jurado M">M. A. Jurado</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Caplin, A" uniqKey="Caplin A">A. Caplin</name>
</author>
<author>
<name sortKey="Dean, M" uniqKey="Dean M">M. Dean</name>
</author>
<author>
<name sortKey="Glimcher, P W" uniqKey="Glimcher P">P. W. Glimcher</name>
</author>
<author>
<name sortKey="Rutledge, R B" uniqKey="Rutledge R">R. B. Rutledge</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Carpenter, S M" uniqKey="Carpenter S">S. M. Carpenter</name>
</author>
<author>
<name sortKey="Peters, E" uniqKey="Peters E">E. Peters</name>
</author>
<author>
<name sortKey="V Stfj Ll, D" uniqKey="V Stfj Ll D">D. Västfjäll</name>
</author>
<author>
<name sortKey="Isen, A M" uniqKey="Isen A">A. M. Isen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chamorro Premuzic, T" uniqKey="Chamorro Premuzic T">T. Chamorro-Premuzic</name>
</author>
<author>
<name sortKey="Furnham, A" uniqKey="Furnham A">A. Furnham</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chamorro Premuzic, T" uniqKey="Chamorro Premuzic T">T. Chamorro-Premuzic</name>
</author>
<author>
<name sortKey="Swami, V" uniqKey="Swami V">V. Swami</name>
</author>
<author>
<name sortKey="Cermakova, B" uniqKey="Cermakova B">B. Cermakova</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chapin, H" uniqKey="Chapin H">H. Chapin</name>
</author>
<author>
<name sortKey="Jantzen, K" uniqKey="Jantzen K">K. Jantzen</name>
</author>
<author>
<name sortKey="Kelso, J A" uniqKey="Kelso J">J. A. Kelso</name>
</author>
<author>
<name sortKey="Steinberg, F" uniqKey="Steinberg F">F. Steinberg</name>
</author>
<author>
<name sortKey="Large, E" uniqKey="Large E">E. Large</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chase, H W" uniqKey="Chase H">H. W. Chase</name>
</author>
<author>
<name sortKey="Frank, M J" uniqKey="Frank M">M. J. Frank</name>
</author>
<author>
<name sortKey="Michael, A" uniqKey="Michael A">A. Michael</name>
</author>
<author>
<name sortKey="Bullmore, E T" uniqKey="Bullmore E">E. T. Bullmore</name>
</author>
<author>
<name sortKey="Sahakian, B J" uniqKey="Sahakian B">B. J. Sahakian</name>
</author>
<author>
<name sortKey="Robbins, T W" uniqKey="Robbins T">T. W. Robbins</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chen, J L" uniqKey="Chen J">J. L. Chen</name>
</author>
<author>
<name sortKey="Penhune, V B" uniqKey="Penhune V">V. B. Penhune</name>
</author>
<author>
<name sortKey="Zatorre, R J" uniqKey="Zatorre R">R. J. Zatorre</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Collins, A G E" uniqKey="Collins A">A. G. E. Collins</name>
</author>
<author>
<name sortKey="Frank, M J" uniqKey="Frank M">M. J. Frank</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="D Ardenne, K" uniqKey="D Ardenne K">K. D'Ardenne</name>
</author>
<author>
<name sortKey="Mcclure, S M" uniqKey="Mcclure S">S. M. McClure</name>
</author>
<author>
<name sortKey="Nystrom, L E" uniqKey="Nystrom L">L. E. Nystrom</name>
</author>
<author>
<name sortKey="Cohen, J D" uniqKey="Cohen J">J. D. Cohen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Daw, N D" uniqKey="Daw N">N. D. Daw</name>
</author>
<author>
<name sortKey="Doya, K" uniqKey="Doya K">K. Doya</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dellacherie, D" uniqKey="Dellacherie D">D. Dellacherie</name>
</author>
<author>
<name sortKey="Roy, M" uniqKey="Roy M">M. Roy</name>
</author>
<author>
<name sortKey="Hugueville, L" uniqKey="Hugueville L">L. Hugueville</name>
</author>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I. Peretz</name>
</author>
<author>
<name sortKey="Samson, S" uniqKey="Samson S">S. Samson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Doll, B B" uniqKey="Doll B">B. B. Doll</name>
</author>
<author>
<name sortKey="Hutchison, K E" uniqKey="Hutchison K">K. E. Hutchison</name>
</author>
<author>
<name sortKey="Frank, M J" uniqKey="Frank M">M. J. Frank</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dube, L" uniqKey="Dube L">L. Dubé</name>
</author>
<author>
<name sortKey="Le Bel, J" uniqKey="Le Bel J">J. Le Bel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Eerola, T" uniqKey="Eerola T">T. Eerola</name>
</author>
<author>
<name sortKey="Vuoskoski, J K" uniqKey="Vuoskoski J">J. K. Vuoskoski</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fabry, G" uniqKey="Fabry G">G. Fabry</name>
</author>
<author>
<name sortKey="Giesler, M" uniqKey="Giesler M">M. Giesler</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Frank, M J" uniqKey="Frank M">M. J. Frank</name>
</author>
<author>
<name sortKey="Doll, B B" uniqKey="Doll B">B. B. Doll</name>
</author>
<author>
<name sortKey="Oas Terpstra, J" uniqKey="Oas Terpstra J">J. Oas-Terpstra</name>
</author>
<author>
<name sortKey="Moreno, F" uniqKey="Moreno F">F. Moreno</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Frank, M J" uniqKey="Frank M">M. J. Frank</name>
</author>
<author>
<name sortKey="Moustafa, A A" uniqKey="Moustafa A">A. A. Moustafa</name>
</author>
<author>
<name sortKey="Haughey, H" uniqKey="Haughey H">H. Haughey</name>
</author>
<author>
<name sortKey="Curran, T" uniqKey="Curran T">T. Curran</name>
</author>
<author>
<name sortKey="Hutchison, K" uniqKey="Hutchison K">K. Hutchison</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Frank, M J" uniqKey="Frank M">M. J. Frank</name>
</author>
<author>
<name sortKey="Samanta, J" uniqKey="Samanta J">J. Samanta</name>
</author>
<author>
<name sortKey="Moustafa, A A" uniqKey="Moustafa A">A. A. Moustafa</name>
</author>
<author>
<name sortKey="Sherman, S J" uniqKey="Sherman S">S. J. Sherman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Frank, M J" uniqKey="Frank M">M. J. Frank</name>
</author>
<author>
<name sortKey="Seeberger, L" uniqKey="Seeberger L">L. Seeberger</name>
</author>
<author>
<name sortKey="O Reilly, R C" uniqKey="O Reilly R">R. C. O'Reilly</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gaser, C" uniqKey="Gaser C">C. Gaser</name>
</author>
<author>
<name sortKey="Schlaug, G" uniqKey="Schlaug G">G. Schlaug</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Grob, S" uniqKey="Grob S">S. Grob</name>
</author>
<author>
<name sortKey="Pizzagalli, D A" uniqKey="Pizzagalli D">D. A. Pizzagalli</name>
</author>
<author>
<name sortKey="Dutra, S J" uniqKey="Dutra S">S. J. Dutra</name>
</author>
<author>
<name sortKey="Stern, J" uniqKey="Stern J">J. Stern</name>
</author>
<author>
<name sortKey="Morgeli, H" uniqKey="Morgeli H">H. Mörgeli</name>
</author>
<author>
<name sortKey="Milos, G" uniqKey="Milos G">G. Milos</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hollerman, J R" uniqKey="Hollerman J">J. R. Hollerman</name>
</author>
<author>
<name sortKey="Schultz, W" uniqKey="Schultz W">W. Schultz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Huron, D" uniqKey="Huron D">D. Huron</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hyde, K L" uniqKey="Hyde K">K. L. Hyde</name>
</author>
<author>
<name sortKey="Lerch, J" uniqKey="Lerch J">J. Lerch</name>
</author>
<author>
<name sortKey="Norton, A" uniqKey="Norton A">A. Norton</name>
</author>
<author>
<name sortKey="Forgeard, M" uniqKey="Forgeard M">M. Forgeard</name>
</author>
<author>
<name sortKey="Winner, E" uniqKey="Winner E">E. Winner</name>
</author>
<author>
<name sortKey="Evans, A C" uniqKey="Evans A">A. C. Evans</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ist K, E" uniqKey="Ist K E">E. Istók</name>
</author>
<author>
<name sortKey="Brattico, E" uniqKey="Brattico E">E. Brattico</name>
</author>
<author>
<name sortKey="Jacobsen, T" uniqKey="Jacobsen T">T. Jacobsen</name>
</author>
<author>
<name sortKey="Krohn, K" uniqKey="Krohn K">K. Krohn</name>
</author>
<author>
<name sortKey="Muller, M" uniqKey="Muller M">M. Müller</name>
</author>
<author>
<name sortKey="Tervaniemi, M" uniqKey="Tervaniemi M">M. Tervaniemi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jocham, G" uniqKey="Jocham G">G. Jocham</name>
</author>
<author>
<name sortKey="Klein, T A" uniqKey="Klein T">T. A. Klein</name>
</author>
<author>
<name sortKey="Ullsperger, M" uniqKey="Ullsperger M">M. Ullsperger</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Koelsch, S" uniqKey="Koelsch S">S. Koelsch</name>
</author>
<author>
<name sortKey="Schroger, E" uniqKey="Schroger E">E. Schröger</name>
</author>
<author>
<name sortKey="Tervaniemi, M" uniqKey="Tervaniemi M">M. Tervaniemi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Koeneke, S" uniqKey="Koeneke S">S. Koeneke</name>
</author>
<author>
<name sortKey="Lutz, K" uniqKey="Lutz K">K. Lutz</name>
</author>
<author>
<name sortKey="Wustenberg, T" uniqKey="Wustenberg T">T. Wustenberg</name>
</author>
<author>
<name sortKey="Jancke, L" uniqKey="Jancke L">L. Jancke</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kuhn, S" uniqKey="Kuhn S">S. Kühn</name>
</author>
<author>
<name sortKey="Gallinat, J" uniqKey="Gallinat J">J. Gallinat</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ledoux, J" uniqKey="Ledoux J">J. LeDoux</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Levitin, D J" uniqKey="Levitin D">D. J. Levitin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lindquist, K A" uniqKey="Lindquist K">K. A. Lindquist</name>
</author>
<author>
<name sortKey="Wager, T D" uniqKey="Wager T">T. D. Wager</name>
</author>
<author>
<name sortKey="Kober, H" uniqKey="Kober H">H. Kober</name>
</author>
<author>
<name sortKey="Bliss Moreau, E" uniqKey="Bliss Moreau E">E. Bliss-Moreau</name>
</author>
<author>
<name sortKey="Barrett, L F" uniqKey="Barrett L">L. F. Barrett</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Menon, V" uniqKey="Menon V">V. Menon</name>
</author>
<author>
<name sortKey="Levitin, D J" uniqKey="Levitin D">D. J. Levitin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Meyer, L B" uniqKey="Meyer L">L. B. Meyer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Montague, P R" uniqKey="Montague P">P. R. Montague</name>
</author>
<author>
<name sortKey="Dayan, P" uniqKey="Dayan P">P. Dayan</name>
</author>
<author>
<name sortKey="Sejnowski, T J" uniqKey="Sejnowski T">T. J. Sejnowski</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Muller, M" uniqKey="Muller M">M. Müller</name>
</author>
<author>
<name sortKey="Hofel, L" uniqKey="Hofel L">L. Höfel</name>
</author>
<author>
<name sortKey="Brattico, E" uniqKey="Brattico E">E. Brattico</name>
</author>
<author>
<name sortKey="Jacobsen, T" uniqKey="Jacobsen T">T. Jacobsen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Niv, Y" uniqKey="Niv Y">Y. Niv</name>
</author>
<author>
<name sortKey="Daw, N D" uniqKey="Daw N">N. D. Daw</name>
</author>
<author>
<name sortKey="Joel, D" uniqKey="Joel D">D. Joel</name>
</author>
<author>
<name sortKey="Dayan, P" uniqKey="Dayan P">P. Dayan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="O Doherty, J" uniqKey="O Doherty J">J. O'Doherty</name>
</author>
<author>
<name sortKey="Dayan, P" uniqKey="Dayan P">P. Dayan</name>
</author>
<author>
<name sortKey="Schultz, J" uniqKey="Schultz J">J. Schultz</name>
</author>
<author>
<name sortKey="Deichmann, R" uniqKey="Deichmann R">R. Deichmann</name>
</author>
<author>
<name sortKey="Friston, K" uniqKey="Friston K">K. Friston</name>
</author>
<author>
<name sortKey="Dolan, R J" uniqKey="Dolan R">R. J. Dolan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Oechslin, M S" uniqKey="Oechslin M">M. S. Oechslin</name>
</author>
<author>
<name sortKey="Van De Ville, D" uniqKey="Van De Ville D">D. Van De Ville</name>
</author>
<author>
<name sortKey="Lazeyras, F" uniqKey="Lazeyras F">F. Lazeyras</name>
</author>
<author>
<name sortKey="Hauert, C A" uniqKey="Hauert C">C. A. Hauert</name>
</author>
<author>
<name sortKey="James, C E" uniqKey="James C">C. E. James</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Overton, D A" uniqKey="Overton D">D. A. Overton</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pacchetti, C" uniqKey="Pacchetti C">C. Pacchetti</name>
</author>
<author>
<name sortKey="Mancini, F" uniqKey="Mancini F">F. Mancini</name>
</author>
<author>
<name sortKey="Aglieri, R" uniqKey="Aglieri R">R. Aglieri</name>
</author>
<author>
<name sortKey="Fundar, C" uniqKey="Fundar C">C. Fundaró</name>
</author>
<author>
<name sortKey="Martignoni, E" uniqKey="Martignoni E">E. Martignoni</name>
</author>
<author>
<name sortKey="Nappi, G" uniqKey="Nappi G">G. Nappi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pereira, C S" uniqKey="Pereira C">C. S. Pereira</name>
</author>
<author>
<name sortKey="Teixeira, J" uniqKey="Teixeira J">J. Teixeira</name>
</author>
<author>
<name sortKey="Figueiredo, P" uniqKey="Figueiredo P">P. Figueiredo</name>
</author>
<author>
<name sortKey="Xavier, J" uniqKey="Xavier J">J. Xavier</name>
</author>
<author>
<name sortKey="Castro, S L" uniqKey="Castro S">S. L. Castro</name>
</author>
<author>
<name sortKey="Brattico, E" uniqKey="Brattico E">E. Brattico</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rauscher, F H" uniqKey="Rauscher F">F. H. Rauscher</name>
</author>
<author>
<name sortKey="Shaw, G L" uniqKey="Shaw G">G. L. Shaw</name>
</author>
<author>
<name sortKey="Ky, K N" uniqKey="Ky K">K. N. Ky</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rentfrow, P J" uniqKey="Rentfrow P">P. J. Rentfrow</name>
</author>
<author>
<name sortKey="Gosling, S D" uniqKey="Gosling S">S. D. Gosling</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Robinson, T E" uniqKey="Robinson T">T. E. Robinson</name>
</author>
<author>
<name sortKey="Berridge, K C" uniqKey="Berridge K">K. C. Berridge</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rodrigues, A C" uniqKey="Rodrigues A">A. C. Rodrigues</name>
</author>
<author>
<name sortKey="Loureiro, M A" uniqKey="Loureiro M">M. A. Loureiro</name>
</author>
<author>
<name sortKey="Caramelli, P" uniqKey="Caramelli P">P. Caramelli</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Saarikallio, S" uniqKey="Saarikallio S">S. Saarikallio</name>
</author>
<author>
<name sortKey="Erkkil, J" uniqKey="Erkkil J">J. Erkkilä</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Salimpoor, V N" uniqKey="Salimpoor V">V. N. Salimpoor</name>
</author>
<author>
<name sortKey="Benovoy, M" uniqKey="Benovoy M">M. Benovoy</name>
</author>
<author>
<name sortKey="Larcher, K" uniqKey="Larcher K">K. Larcher</name>
</author>
<author>
<name sortKey="Dagher, A" uniqKey="Dagher A">A. Dagher</name>
</author>
<author>
<name sortKey="Zatorre, R J" uniqKey="Zatorre R">R. J. Zatorre</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Salimpoor, V N" uniqKey="Salimpoor V">V. N. Salimpoor</name>
</author>
<author>
<name sortKey="Benevoy, M" uniqKey="Benevoy M">M. Benevoy</name>
</author>
<author>
<name sortKey="Longo, G" uniqKey="Longo G">G. Longo</name>
</author>
<author>
<name sortKey="Cooperstock, J R" uniqKey="Cooperstock J">J. R. Cooperstock</name>
</author>
<author>
<name sortKey="Zatorre, R J" uniqKey="Zatorre R">R. J. Zatorre</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Salimpoor, V N" uniqKey="Salimpoor V">V. N. Salimpoor</name>
</author>
<author>
<name sortKey="Van Den Bosch, I" uniqKey="Van Den Bosch I">I. van den Bosch</name>
</author>
<author>
<name sortKey="Kovacevic, N" uniqKey="Kovacevic N">N. Kovacevic</name>
</author>
<author>
<name sortKey="Mcintosh, A R" uniqKey="Mcintosh A">A. R. McIntosh</name>
</author>
<author>
<name sortKey="Dagher, A" uniqKey="Dagher A">A. Dagher</name>
</author>
<author>
<name sortKey="Zatorre, R J" uniqKey="Zatorre R">R. J. Zatorre</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schott, B H" uniqKey="Schott B">B. H. Schott</name>
</author>
<author>
<name sortKey="Minuzzi, L" uniqKey="Minuzzi L">L. Minuzzi</name>
</author>
<author>
<name sortKey="Krebs, R M" uniqKey="Krebs R">R. M. Krebs</name>
</author>
<author>
<name sortKey="Elmenhorst, D" uniqKey="Elmenhorst D">D. Elmenhorst</name>
</author>
<author>
<name sortKey="Lang, M" uniqKey="Lang M">M. Lang</name>
</author>
<author>
<name sortKey="Winz, O H" uniqKey="Winz O">O. H. Winz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schultz, W" uniqKey="Schultz W">W. Schultz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Seger, C A" uniqKey="Seger C">C. A. Seger</name>
</author>
<author>
<name sortKey="Spiering, B J" uniqKey="Spiering B">B. J. Spiering</name>
</author>
<author>
<name sortKey="Sares, A G" uniqKey="Sares A">A. G. Sares</name>
</author>
<author>
<name sortKey="Quraini, S I" uniqKey="Quraini S">S. I. Quraini</name>
</author>
<author>
<name sortKey="Alpeter, C" uniqKey="Alpeter C">C. Alpeter</name>
</author>
<author>
<name sortKey="David, J" uniqKey="David J">J. David</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Shiner, T" uniqKey="Shiner T">T. Shiner</name>
</author>
<author>
<name sortKey="Seymour, B" uniqKey="Seymour B">B. Seymour</name>
</author>
<author>
<name sortKey="Wunderlich, K" uniqKey="Wunderlich K">K. Wunderlich</name>
</author>
<author>
<name sortKey="Hill, C" uniqKey="Hill C">C. Hill</name>
</author>
<author>
<name sortKey="Bhatia, K P" uniqKey="Bhatia K">K. P. Bhatia</name>
</author>
<author>
<name sortKey="Dayan, P" uniqKey="Dayan P">P. Dayan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sloboda, J A" uniqKey="Sloboda J">J. A. Sloboda</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sloboda, J A" uniqKey="Sloboda J">J. A. Sloboda</name>
</author>
<author>
<name sortKey="Juslin, P N" uniqKey="Juslin P">P. N. Juslin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sloboda, J A" uniqKey="Sloboda J">J. A. Sloboda</name>
</author>
<author>
<name sortKey="O Neill, S A" uniqKey="O Neill S">S. A. O'Neill</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tervaniemi, M" uniqKey="Tervaniemi M">M. Tervaniemi</name>
</author>
<author>
<name sortKey="Castaneda, A" uniqKey="Castaneda A">A. Castaneda</name>
</author>
<author>
<name sortKey="Knoll, M" uniqKey="Knoll M">M. Knoll</name>
</author>
<author>
<name sortKey="Uther, M" uniqKey="Uther M">M. Uther</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tervaniemi, M" uniqKey="Tervaniemi M">M. Tervaniemi</name>
</author>
<author>
<name sortKey="Just, V" uniqKey="Just V">V. Just</name>
</author>
<author>
<name sortKey="Koelsch, S" uniqKey="Koelsch S">S. Koelsch</name>
</author>
<author>
<name sortKey="Widmann, A" uniqKey="Widmann A">A. Widmann</name>
</author>
<author>
<name sortKey="Schroger, E" uniqKey="Schroger E">E. Schröger</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van De Cruys, S" uniqKey="Van De Cruys S">S. Van de Cruys</name>
</author>
<author>
<name sortKey="Wagemans, J" uniqKey="Wagemans J">J. Wagemans</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vuust, P" uniqKey="Vuust P">P. Vuust</name>
</author>
<author>
<name sortKey="Kringelbach, M L" uniqKey="Kringelbach M">M. L. Kringelbach</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Worbe, Y" uniqKey="Worbe Y">Y. Worbe</name>
</author>
<author>
<name sortKey="Palminteri, S" uniqKey="Palminteri S">S. Palminteri</name>
</author>
<author>
<name sortKey="Hartmann, A" uniqKey="Hartmann A">A. Hartmann</name>
</author>
<author>
<name sortKey="Vidailhet, M" uniqKey="Vidailhet M">M. Vidailhet</name>
</author>
<author>
<name sortKey="Lehericy, S" uniqKey="Lehericy S">S. Lehéricy</name>
</author>
<author>
<name sortKey="Pessiglione, M" uniqKey="Pessiglione M">M. Pessiglione</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zatorre, R J" uniqKey="Zatorre R">R. J. Zatorre</name>
</author>
<author>
<name sortKey="Evans, A C" uniqKey="Evans A">A. C. Evans</name>
</author>
<author>
<name sortKey="Meyer, E" uniqKey="Meyer E">E. Meyer</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Front Psychol</journal-id>
<journal-id journal-id-type="iso-abbrev">Front Psychol</journal-id>
<journal-id journal-id-type="publisher-id">Front. Psychol.</journal-id>
<journal-title-group>
<journal-title>Frontiers in Psychology</journal-title>
</journal-title-group>
<issn pub-type="epub">1664-1078</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">23970875</article-id>
<article-id pub-id-type="pmc">3748532</article-id>
<article-id pub-id-type="doi">10.3389/fpsyg.2013.00541</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Psychology</subject>
<subj-group>
<subject>Original Research Article</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Pleasurable music affects reinforcement learning according to the listener</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Gold</surname>
<given-names>Benjamin P.</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<xref ref-type="author-notes" rid="fn001">
<sup>*</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Frank</surname>
<given-names>Michael J.</given-names>
</name>
<xref ref-type="aff" rid="aff3">
<sup>3</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Bogert</surname>
<given-names>Brigitte</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Brattico</surname>
<given-names>Elvira</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<xref ref-type="aff" rid="aff4">
<sup>4</sup>
</xref>
</contrib>
</contrib-group>
<aff id="aff1">
<sup>1</sup>
<institution>Cognitive Brain Research Unit, Institute of Behavioral Sciences, University of Helsinki</institution>
<country>Helsinki, Finland</country>
</aff>
<aff id="aff2">
<sup>2</sup>
<institution>Department of Music, Finnish Center of Excellence in Interdisciplinary Music Research, University of Jyväskylä</institution>
<country>Jyväskylä, Finland</country>
</aff>
<aff id="aff3">
<sup>3</sup>
<institution>Department of Cognitive, Linguistic and Psychological Sciences, Brown University</institution>
<country>Providence, RI, USA</country>
</aff>
<aff id="aff4">
<sup>4</sup>
<institution>Brain and Mind Laboratory, Department of Biomedical Engineering and Computational Science, Aalto University</institution>
<country>Espoo, Finland</country>
</aff>
<author-notes>
<fn fn-type="edited-by">
<p>Edited by: Robert J. Zatorre, McGill University, Canada</p>
</fn>
<fn fn-type="edited-by">
<p>Reviewed by: Jessica A. Grahn, University of Western Ontario, Canada; Theodor Rueber, Bonn University Hospital, Germany</p>
</fn>
<corresp id="fn001">*Correspondence: Benjamin P. Gold, Cognitive Brain Research Unit, Institute of Behavioral Sciences, University of Helsinki, PO Box 9, Siltavuorenpenger 1B, 00014 Helsinki, Finland e-mail:
<email xlink:type="simple">benjamin.gold@helsinki.fi</email>
</corresp>
<fn fn-type="other" id="fn002">
<p>This article was submitted to Auditory Cognitive Neuroscience, a section of the journal Frontiers in Psychology.</p>
</fn>
</author-notes>
<pub-date pub-type="epub">
<day>21</day>
<month>8</month>
<year>2013</year>
</pub-date>
<pub-date pub-type="collection">
<year>2013</year>
</pub-date>
<volume>4</volume>
<elocation-id>541</elocation-id>
<history>
<date date-type="received">
<day>01</day>
<month>3</month>
<year>2013</year>
</date>
<date date-type="accepted">
<day>31</day>
<month>7</month>
<year>2013</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright © 2013 Gold, Frank, Bogert and Brattico.</copyright-statement>
<copyright-year>2013</copyright-year>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/3.0/">
<license-p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</license-p>
</license>
</permissions>
<abstract>
<p>Mounting evidence links the enjoyment of music to brain areas implicated in emotion and the dopaminergic reward system. In particular, dopamine release in the ventral striatum seems to play a major role in the rewarding aspect of music listening. Striatal dopamine also influences reinforcement learning, such that subjects with greater dopamine efficacy learn better to approach rewards while those with lesser dopamine efficacy learn better to avoid punishments. In this study, we explored the practical implications of musical pleasure through its ability to facilitate reinforcement learning via non-pharmacological dopamine elicitation. Subjects from a wide variety of musical backgrounds chose a pleasurable and a neutral piece of music from an experimenter-compiled database, and then listened to one or both of these pieces (according to pseudo-random group assignment) as they performed a reinforcement learning task dependent on dopamine transmission. We assessed musical backgrounds as well as typical listening patterns with the new Helsinki Inventory of Music and Affective Behaviors (HIMAB), and separately investigated behavior for the training and test phases of the learning task. Subjects with more musical experience trained better with neutral music and tested better with pleasurable music, while those with less musical experience exhibited the opposite effect. HIMAB results regarding listening behaviors and subjective music ratings indicate that these effects arose from different listening styles: namely, more affective listening in non-musicians and more analytical listening in musicians. In conclusion, musical pleasure was able to influence task performance, and the shape of this effect depended on group and individual factors. These findings have implications in affective neuroscience, neuroaesthetics, learning, and music therapy.</p>
</abstract>
<kwd-group>
<kwd>music</kwd>
<kwd>pleasure</kwd>
<kwd>reinforcement learning</kwd>
<kwd>reward</kwd>
<kwd>dopamine</kwd>
<kwd>subjectivity</kwd>
<kwd>musical experience</kwd>
<kwd>listening strategy</kwd>
</kwd-group>
<counts>
<fig-count count="9"></fig-count>
<table-count count="5"></table-count>
<equation-count count="0"></equation-count>
<ref-count count="74"></ref-count>
<page-count count="19"></page-count>
<word-count count="15342"></word-count>
</counts>
</article-meta>
</front>
<body>
<sec id="s1">
<title>Introduction</title>
<sec>
<title>From musical pleasure to reinforcement learning</title>
<p>The emotional power of music is evident from music downloads to band T-shirts, from film scores to music therapy, and from concert sales to any Friday or Saturday night out. Despite having no intrinsic biological or tangible value, music is profoundly important to people of all cultures and all walks of life (Sloboda and Juslin,
<xref ref-type="bibr" rid="B67">2001</xref>
); listening to music is consistently ranked as one of the most rewarding human experiences (Dubé and Le Bel,
<xref ref-type="bibr" rid="B24">2003</xref>
). Influential theories of emotion describe pleasure as an integral part of core affect (Lindquist et al.,
<xref ref-type="bibr" rid="B43">2012</xref>
) or survival functions (LeDoux,
<xref ref-type="bibr" rid="B41">2012</xref>
), and neuroimaging evidence links pleasurable music listening with brain areas implicated in emotion and the dopaminergic reward system (Blood et al.,
<xref ref-type="bibr" rid="B8">1999</xref>
; Blood and Zatorre,
<xref ref-type="bibr" rid="B7">2001</xref>
; Menon and Levitin,
<xref ref-type="bibr" rid="B44">2005</xref>
; Salimpoor et al.,
<xref ref-type="bibr" rid="B59">2011</xref>
). Accordingly, people primarily listen to music for emotion and mood regulation (Sloboda and O'Neill,
<xref ref-type="bibr" rid="B68">2001</xref>
; Saarikallio and Erkkilä,
<xref ref-type="bibr" rid="B58">2007</xref>
), suggesting a possible functional role of musical pleasure. Nonetheless, with implications in affective neuroscience, neuroaesthetics, and music therapy, the practical ramifications of musical pleasure remain unclear.</p>
<p>Can music direct reward-based decision making? Although the famous “Mozart effect” implies that music can temporarily influence cognitive performance (Rauscher et al.,
<xref ref-type="bibr" rid="B54">1993</xref>
), its functional relationship to reward processing has not yet been assessed. How do different people experience pleasure, and does it affect them differently? Musical emotions are highly subjective and preferences for certain musical pieces or genres vary widely across individuals (Rentfrow and Gosling,
<xref ref-type="bibr" rid="B55">2003</xref>
; Eerola and Vuoskoski,
<xref ref-type="bibr" rid="B25">2011</xref>
), yet these differences are often treated as random noise or emergent states (cf. Brown et al.,
<xref ref-type="bibr" rid="B10">2011</xref>
; Kühn and Gallinat,
<xref ref-type="bibr" rid="B40">2012</xref>
). We explored the reward implications of subjective musical pleasure through its ability to affect reward-based learning.</p>
<p>Reinforcement learning is driven by dopaminergic reward prediction errors that signal the discrepancy between expected and experienced action outcomes (Montague et al.,
<xref ref-type="bibr" rid="B46">1996</xref>
; Schultz,
<xref ref-type="bibr" rid="B63">2002</xref>
). Learning occurs as behavioral modifications reflect and ultimately minimize these prediction errors over time (Hollerman and Schultz,
<xref ref-type="bibr" rid="B33">1998</xref>
). In one model, the selection of rewarded actions is promoted by Hebbian potentiation of a direct D1-receptor “Go” pathway following phasic increases in dopamine, while action avoidance is achieved via potentiation of an indirect D2-receptor “NoGo” pathway following phasic decreases (Figure
<xref ref-type="fig" rid="F1">1</xref>
; Frank et al.,
<xref ref-type="bibr" rid="B30">2004</xref>
). Genetic, pharmacological, and neuropsychiatric research converge to show that learning and decision making are preferentially guided by rewards in subjects with greater striatal dopamine efficacy and preferentially guided by punishments in those with lesser dopamine efficacy (Frank et al.,
<xref ref-type="bibr" rid="B30">2004</xref>
,
<xref ref-type="bibr" rid="B28">2007a</xref>
; Jocham et al.,
<xref ref-type="bibr" rid="B37">2011</xref>
; Shiner et al.,
<xref ref-type="bibr" rid="B65">2012</xref>
).</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption>
<p>
<bold>Reinforcement learning model.</bold>
In the reinforcement learning model (from Frank et al.,
<xref ref-type="bibr" rid="B30">2004</xref>
), phasic increases in dopamine promote action selection in the thalamus via the D1-receptor “Go” pathway, whereas phasic decreases promote action avoidance via the D2-receptor “NoGo” pathway. Both processes originate in the striatum and receive cortical and subcortical inputs. SNc, substantia nigra pars compacta; GPi, internal segment of the globus pallidus; GPe, external segment of the globus pallidus; SNr, substantia nigra pars reticula.</p>
</caption>
<graphic xlink:href="fpsyg-04-00541-g0001"></graphic>
</fig>
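As an informal illustration of the Go/NoGo account above, the following minimal Python sketch updates opponent "Go" and "NoGo" weights from a dopaminergic prediction error. The learning rate, the single-option setup, and the dopamine_efficacy parameter are assumptions introduced for clarity; this is not the model of Frank et al. (2004) nor code used in this study.

import random

def simulate_go_nogo(trials=200, p_reward=0.8, dopamine_efficacy=1.0,
                     alpha=0.1, seed=0):
    """Toy sketch of opponent Go/NoGo learning from prediction errors.

    dopamine_efficacy > 1 amplifies learning from positive errors (Go),
    dopamine_efficacy < 1 amplifies learning from negative errors (NoGo).
    Illustrative only, not the published implementation.
    """
    rng = random.Random(seed)
    go, nogo, value = 0.0, 0.0, 0.5           # pathway weights and expected value
    for _ in range(trials):
        reward = 1.0 if rng.random() < p_reward else 0.0
        delta = reward - value                # dopaminergic prediction error
        value += alpha * delta                # update reward expectation
        if delta > 0:                         # phasic dopamine burst: potentiate Go
            go += alpha * dopamine_efficacy * delta
        else:                                 # phasic dopamine dip: potentiate NoGo
            nogo += alpha * (2.0 - dopamine_efficacy) * (-delta)
    return go, nogo

# Higher assumed dopamine efficacy yields stronger approach (Go) learning,
# lower efficacy yields stronger avoidance (NoGo) learning.
print(simulate_go_nogo(dopamine_efficacy=1.5))
print(simulate_go_nogo(dopamine_efficacy=0.5))

With dopamine_efficacy above 1 the Go (approach) weight dominates, and below 1 the NoGo (avoidance) weight dominates, mirroring the reward- versus punishment-driven learning asymmetry described above.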
</sec>
<sec>
<title>The neural bases of reinforcement learning and musical pleasure</title>
<p>The neural bases of reinforcement learning center around the striatum (especially the nucleus accumbens; NAc) and the ventromedial prefrontal cortex (vmPFC). Combining functional magnetic resonance imaging (fMRI) and positron emission tomography (PET), Schott and colleagues (
<xref ref-type="bibr" rid="B62">2008</xref>
) found that reward anticipation corresponded to dopaminergic activity in the substantia nigra and the ventral tegmental area, whereas reward itself elicited dopamine release in the ventral striatum and especially the NAc. The magnitudes of the anticipatory and reward-related dopamine release were correlated, as the NAc is the target of dense projections from the ventral tegmental area. Many studies have also shown with fMRI that striatal and ventral tegmental learning activity reflect reward prediction errors (O'Doherty et al.,
<xref ref-type="bibr" rid="B49">2004</xref>
; Daw and Doya,
<xref ref-type="bibr" rid="B21">2006</xref>
; D'Ardenne et al.,
<xref ref-type="bibr" rid="B20">2008</xref>
; Caplin et al.,
<xref ref-type="bibr" rid="B12">2010</xref>
; Badre and Frank,
<xref ref-type="bibr" rid="B2">2012</xref>
). Moreover, such prediction error activity is modulated by dopaminergic drug administration and predictive of behavioral measures of learning (Jocham et al.,
<xref ref-type="bibr" rid="B37">2011</xref>
). This latter study also revealed vmPFC activity associated with both rewards and the learned values of rewarded stimuli, implicating this area in the tracking of learned reward values over time. Similarly, research with dopamine deficiency in Parkinson's disease showed that NAc and vmPFC activity during reinforcement learning correlated with the value of the chosen stimulus, but only in patients who had taken dopaminergic medications (Shiner et al.,
<xref ref-type="bibr" rid="B65">2012</xref>
).</p>
<p>Unsurprisingly, these reward areas correspond to those active in pleasurable music listening. Blood and colleagues (
<xref ref-type="bibr" rid="B8">1999</xref>
) first linked pleasant music to increased limbic activity in areas including the orbitofrontal cortex and vmPFC, and a subsequent investigation demonstrated that intensely pleasurable responses to music correlated with increased activity in the ventral striatum and thalamus and decreased activity in the amygdala, hippocampus, and vmPFC (Blood and Zatorre,
<xref ref-type="bibr" rid="B7">2001</xref>
). Salimpoor and colleagues (
<xref ref-type="bibr" rid="B59">2011</xref>
) further showed with PET and fMRI that pleasurable music listening activated the striatum, and that peak experiences of pleasure increased dopamine release in the NAc. Crucially, the authors confirmed that this activity represented dopamine transmission by demonstrating that peak pleasure epochs corresponded to peak NAc dopamine release. They also showed that increases in subjective pleasure were correlated with NAc activity even in the absence of the stereotypical “chills” responses they used to index intense pleasure, suggesting that this relationship extended beyond only peak pleasure experiences.</p>
<p>Dopamine is not directly related to hedonic experience, however, as discussed above it is related to positive deviations from expectation rather than reward
<italic>per se</italic>
(for reviews see Schultz,
<xref ref-type="bibr" rid="B63">2002</xref>
; Berridge and Kringelbach,
<xref ref-type="bibr" rid="B4">2013</xref>
). Notably, music generally evokes emotions through the manipulation of cognitive expectations, and pleasurable music is often pleasurable because of how it builds, meets, and defies these expectations (Meyer,
<xref ref-type="bibr" rid="B45">1956</xref>
; Huron,
<xref ref-type="bibr" rid="B34">2006</xref>
; Vuust and Kringelbach,
<xref ref-type="bibr" rid="B72">2010</xref>
). Indeed, “chills” strongly correlate with moments of expectancy violation (Sloboda,
<xref ref-type="bibr" rid="B66">1991</xref>
). These expectations can come from top–down explicit knowledge of a musical piece or from bottom-up implicit schematic predictions based on previous experiences within a musical genre or schema (Bharucha,
<xref ref-type="bibr" rid="B5">1994</xref>
; Huron,
<xref ref-type="bibr" rid="B34">2006</xref>
), which can account for some subjectivity of musical preferences and enjoyment of both familiar and unfamiliar music. Thus, the activity of the NAc during pleasurable music listening can be thought of as reward prediction errors, with pleasant musical surprises reflecting large positive errors. Consistent with this interpretation, an effective connectivity analysis of subjects listening to pleasant as opposed to scrambled musical excerpts revealed significant interactions between the NAc and the right middle temporal and superior temporal gyri (Menon and Levitin,
<xref ref-type="bibr" rid="B44">2005</xref>
), which are involved in the perception of schematic tonal structures (Zatorre et al.,
<xref ref-type="bibr" rid="B74">1994</xref>
). Moreover, a recent investigation showed that the subjective reward value of different musical pieces could be predicted by increased functional connectivity between the NAc and brain regions involved in auditory schematic processing, valuation, and emotional processing, suggesting that music enjoyment depends on previously stored acoustic information and the positive prediction errors that arise from these preconceptions (Salimpoor et al.,
<xref ref-type="bibr" rid="B61">2013</xref>
). While reward prediction errors are probably not the only cause of dopamine release during pleasurable music listening and more research is needed to substantiate this “predictive coding model” of aesthetic enjoyment (Van de Cruys and Wagemans,
<xref ref-type="bibr" rid="B71">2011</xref>
), the aforementioned studies suggest that the overlapping activation patterns of musical pleasure and reward-based learning may thus reflect a common reliance on reward prediction errors.</p>
</sec>
<sec>
<title>The origins of the idiosyncratic nature of musical pleasure</title>
<p>Musical expectations differ greatly from genre to genre and person to person, and musical preferences vary even more. Personality traits, intelligence, and various social factors can all influence musical tastes (Rentfrow and Gosling,
<xref ref-type="bibr" rid="B55">2003</xref>
; Chamorro-Premuzic and Furnham,
<xref ref-type="bibr" rid="B14">2007</xref>
; Chamorro-Premuzic et al.,
<xref ref-type="bibr" rid="B15">2012</xref>
). During listening, online differences in the perception of music or in music-directed attention could also affect musical preferences (Kantor-Martynuska and Fajkowska, in preparation).</p>
<p>Of the many interpersonal influences on musical preferences, past musical experience has received the most attention, with both informal musical activities and formal training corresponding to variations in perceptual, cognitive, and affective responses to music (e.g., Tervaniemi et al.,
<xref ref-type="bibr" rid="B69">2006</xref>
; Chapin et al.,
<xref ref-type="bibr" rid="B16">2010</xref>
; Dellacherie et al.,
<xref ref-type="bibr" rid="B22">2011</xref>
; Brattico and Pearce,
<xref ref-type="bibr" rid="B9">2012</xref>
; Oechslin et al.,
<xref ref-type="bibr" rid="B50">2012</xref>
; Seger et al.,
<xref ref-type="bibr" rid="B64">2013</xref>
). Musical experience also affects structural development, functional connectivity, and listening strategies at the neural level (Gaser and Schlaug,
<xref ref-type="bibr" rid="B31">2003</xref>
; Koeneke et al.,
<xref ref-type="bibr" rid="B39">2004</xref>
; Bengtsson et al.,
<xref ref-type="bibr" rid="B3">2005</xref>
; Chen et al.,
<xref ref-type="bibr" rid="B18">2008</xref>
; Hyde et al.,
<xref ref-type="bibr" rid="B35">2009</xref>
; for reviews see Rodrigues et al.,
<xref ref-type="bibr" rid="B57">2010</xref>
; Levitin,
<xref ref-type="bibr" rid="B42">2012</xref>
). For example, music experts tend to describe musical aesthetics with music-specific adjectives (such as melodic, rhythmic, and harmonic) whereas non-musicians rely more on emotion-related adjectives (Istók et al.,
<xref ref-type="bibr" rid="B36">2009</xref>
). Electrophysiological evidence also suggests that music experts utilize more analytical strategies than non-musicians when giving aesthetic judgments of chord sequences, while the latter instead respond more emotionally (Müller et al.,
<xref ref-type="bibr" rid="B47">2010</xref>
). Yet while musical expertise is associated with greater engagement in music as a primary focus, musicians are not necessarily more likely to be distracted by music (Kantor-Martynuska and Fajkowska, in preparation). Notably, many of these effects are correlational, meaning that they form a spectrum along musical experience from non-musicians to amateur musicians to musicians (Gaser and Schlaug,
<xref ref-type="bibr" rid="B31">2003</xref>
; Tervaniemi et al.,
<xref ref-type="bibr" rid="B69">2006</xref>
; Hyde et al.,
<xref ref-type="bibr" rid="B35">2009</xref>
; Oechslin et al.,
<xref ref-type="bibr" rid="B50">2012</xref>
).</p>
</sec>
<sec>
<title>Study aims and hypotheses</title>
<p>In the present study, we first aimed to assess the practical implications of musically elicited dopamine by determining whether musical pleasure could facilitate reinforcement learning via non-pharmacological dopamine elicitation. To this end, we played pleasurable and neutral music for participants during a reinforcement learning task dependent on dopamine transmission (Frank et al.,
<xref ref-type="bibr" rid="B30">2004</xref>
). Given the emotional power of music and its capacity to activate the mesocorticolimbic reward system, we expected musical pleasure to influence reinforcement learning by evoking a dopaminergic response that would enhance appetitive behaviors.</p>
<p>We also expected music's influence to depend on the musical background and listening patterns of the individual. We combined pre-existing and novel self-report measures to objectively identify individual musical experiences with a new Helsinki Inventory of Music and Affective Behaviors (HIMAB). With this, we sought to explore the relationships between subjective musical pleasure, diverse musical backgrounds, and music listening patterns. During the learning paradigm, we hypothesized that musically inexperienced subjects would be more emotionally affected by the music they enjoyed and thus benefit more from listening to it during the task, whereas more musically experienced subjects would think about the music more analytically during learning and thus divert focus from the learning task.</p>
</sec>
</sec>
<sec sec-type="materials|methods" id="s2">
<title>Materials and methods</title>
<sec>
<title>Subjects</title>
<p>This experiment was approved by the local ethics committee of the University of Helsinki. Ninety volunteers (33 males, mean age = 27.5 ± 6.0 years) participated. They had no hearing or neurological disorders, spoke and read English fluently, gave informed consent, and received “culture passes” of monetary value in compensation for their time. Seventeen of these volunteers (18.9%) failed to perform significantly above chance by the end of training, and so the data described hereafter pertain to the remaining 73 (26 males, mean age = 27.1 ± 5.8 years).</p>
<p>We grouped subjects according to the music they would hear during the training and test phases of the reinforcement learning task. This process was pseudo-random in order to ensure that each group had similar distributions of musical experience (Table
<xref ref-type="table" rid="T1">1</xref>
). The “NP” group listened to neutral music as they learned and pleasurable music as they generalized their knowledge to the test. The opposite group, “PN,” listened to pleasurable music during training and neutral music during the test. To control for learning degradation due to state dependencies (Overton,
<xref ref-type="bibr" rid="B51">1966</xref>
), we included two groups that listened to the same music for both training and testing (“NN” and “PP”). The presence of music in general likely distracted participants and worsened overall task performance, but this experiment specifically concerned the comparison of different emotional responses to music. As such, we examined only within-music effects.</p>
<table-wrap id="T1" position="float">
<label>Table 1</label>
<caption>
<p>
<bold>Experimental groups</bold>
.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" rowspan="1" colspan="1">
<bold>Group</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>Subjects</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>Mean playing years ±
<italic>SD</italic>
</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>Playing years range</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>Mean weekly listening ±
<italic>SD</italic>
(h)</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>Weekly listening range (h)</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>Mean age ±
<italic>SD</italic>
(years)</bold>
</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">NN</td>
<td align="left" rowspan="1" colspan="1">19 (7 male)</td>
<td align="left" rowspan="1" colspan="1">7.3 ± 6.9</td>
<td align="left" rowspan="1" colspan="1">0–19</td>
<td align="left" rowspan="1" colspan="1">19.6 ± 15.8</td>
<td align="left" rowspan="1" colspan="1">2.5–70</td>
<td align="left" rowspan="1" colspan="1">27.6 ± 6.0</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">NP</td>
<td align="left" rowspan="1" colspan="1">19 (8 male)</td>
<td align="left" rowspan="1" colspan="1">9.1 ± 8.2</td>
<td align="left" rowspan="1" colspan="1">0–26</td>
<td align="left" rowspan="1" colspan="1">18.4 ± 17.8</td>
<td align="left" rowspan="1" colspan="1">2.5–70</td>
<td align="left" rowspan="1" colspan="1">27.8 ± 6.7</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">PN</td>
<td align="left" rowspan="1" colspan="1">18 (5 male)</td>
<td align="left" rowspan="1" colspan="1">9.0 ± 7.6</td>
<td align="left" rowspan="1" colspan="1">0–24</td>
<td align="left" rowspan="1" colspan="1">19.1 ± 24.9</td>
<td align="left" rowspan="1" colspan="1">2–110</td>
<td align="left" rowspan="1" colspan="1">26.0 ± 3.2</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">PP</td>
<td align="left" rowspan="1" colspan="1">17 (5 male)</td>
<td align="left" rowspan="1" colspan="1">12.2 ± 9.8</td>
<td align="left" rowspan="1" colspan="1">2–39</td>
<td align="left" rowspan="1" colspan="1">14.3 ± 15.9</td>
<td align="left" rowspan="1" colspan="1">2 – 49</td>
<td align="left" rowspan="1" colspan="1">26.7 ± 6.7</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Total</td>
<td align="left" rowspan="1" colspan="1">73 (26 male)</td>
<td align="left" rowspan="1" colspan="1">9.3 ± 8.2</td>
<td align="left" rowspan="1" colspan="1">0–39</td>
<td align="left" rowspan="1" colspan="1">17.9 ± 18.7</td>
<td align="left" rowspan="1" colspan="1">2–110</td>
<td align="left" rowspan="1" colspan="1">27.1 ± 5.8</td>
</tr>
</tbody>
</table>
</table-wrap>
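<p>To make the balanced assignment concrete, the following is a minimal, hypothetical Python sketch of one way such a pseudo-random procedure could equalize musical experience across the four groups; the function and field names (assign_groups, playing_years) are ours and do not come from the original protocol.</p>
<preformat>
import random

def assign_groups(subjects, groups=("NN", "NP", "PN", "PP"), seed=0):
    """Deal subjects into groups while balancing musical experience.

    `subjects` is a list of dicts with at least a 'playing_years' entry.
    Subjects are sorted by experience and dealt out in shuffled rotating
    order, so each group receives a similar spread of playing years
    (an illustrative sketch, not the original assignment procedure).
    """
    rng = random.Random(seed)
    ordered = sorted(subjects, key=lambda s: s["playing_years"])
    assignment = {}
    for i in range(0, len(ordered), len(groups)):
        block = ordered[i:i + len(groups)]        # one experience stratum
        labels = list(groups)[:len(block)]
        rng.shuffle(labels)                       # random within the stratum
        for subj, label in zip(block, labels):
            assignment[subj["id"]] = label
    return assignment

# Example with four hypothetical participants
demo = [{"id": k, "playing_years": y} for k, y in enumerate([0, 3, 12, 25])]
print(assign_groups(demo))
</preformat>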
<p>To simplify our sample and investigate musicianship more closely, we also classified subjects according to their musical backgrounds. We defined musicians as participants who had earned a music degree and/or received compensation for performing music with at least 5 years of recent (within the last 5 years) and weekly playing or singing experience; this experiment included 23 such musicians. Amateur musicians had between 1 and 5 years of recent and weekly musical experience or more than 5 years of experience (potentially including a music degree) that had not been recent and/or at least weekly; there were 22 amateur musicians in this study. Non-musicians had fewer than 5 years of musical experience that was not recent and/or weekly; we analyzed data from 28 non-musicians. Table
<xref ref-type="table" rid="T2">2</xref>
provides more information about the musical backgrounds in this experiment.</p>
<table-wrap id="T2" position="float">
<label>Table 2</label>
<caption>
<p>
<bold>Musical backgrounds</bold>
.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="center" colspan="2" rowspan="1">
<bold>Classification</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>Playing years</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>Years ago</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>Pro/student years</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>Years ago</bold>
</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">Musicians</td>
<td align="left" rowspan="1" colspan="1">Mean ± SD</td>
<td align="left" rowspan="1" colspan="1">16.8 ± 6.9</td>
<td align="left" rowspan="1" colspan="1">2.7 ± 5.5</td>
<td align="left" rowspan="1" colspan="1">8.9 ± 4.8</td>
<td align="left" rowspan="1" colspan="1">2.2 ± 5.2</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Amateurs</td>
<td align="left" rowspan="1" colspan="1">Mean ± SD</td>
<td align="left" rowspan="1" colspan="1">11.4 ± 5.0</td>
<td align="left" rowspan="1" colspan="1">6.7 ± 8.1</td>
<td align="left" rowspan="1" colspan="1">1.9 ± 3.2</td>
<td align="left" rowspan="1" colspan="1">7.6 ± 5.2</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Non-musicians</td>
<td align="left" rowspan="1" colspan="1">Mean ± SD</td>
<td align="left" rowspan="1" colspan="1">1.6 ± 2.2</td>
<td align="left" rowspan="1" colspan="1">10.4 ± 6.2</td>
<td align="left" rowspan="1" colspan="1">0.0 ± 0.0</td>
<td align="left" rowspan="1" colspan="1">N/A</td>
</tr>
</tbody>
</table>
</table-wrap>
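<p>The musicianship categories above can be expressed as a simple decision rule. The sketch below assumes each participant record provides total playing years, years since the last period of regular (at least weekly) playing, and whether the participant holds a music degree or has been paid to perform; the field names are hypothetical, and the "weekly" criterion is folded into the notion of regular playing.</p>
<preformat>
def classify_musicianship(playing_years, years_since_regular_playing,
                          has_degree_or_paid_work):
    """Rough sketch of the musician / amateur / non-musician rules."""
    recent = years_since_regular_playing <= 5      # "recent" = within the last 5 years
    if has_degree_or_paid_work and playing_years >= 5 and recent:
        return "musician"
    if (1 <= playing_years <= 5 and recent) or (playing_years > 5 and not recent):
        return "amateur"
    return "non-musician"

print(classify_musicianship(16, 1, True))    # -> musician
print(classify_musicianship(11, 8, False))   # -> amateur
print(classify_musicianship(2, 12, False))   # -> non-musician
</preformat>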
</sec>
<sec>
<title>Listening test</title>
<p>Prior to the experimental task, each subject was required to complete a listening test at home. This test involved listening to and rating 14 pieces from an experimenter-created list of instrumental film scores (Table
<xref ref-type="table" rid="TA1">A1</xref>
in Appendix) that were sent to the subjects via online file sharing upon their consent to participate in the experiment. The musical pieces came from a database previously rated by 116 listeners (Eerola and Vuoskoski,
<xref ref-type="bibr" rid="B25">2011</xref>
) and were chosen for this experiment because of similar valence (mean rating = 5.56 out of 9 ± 0.80), energy (mean rating = 2.61 out of 9 ± 0.61), and tension (mean rating = 2.33 out of 9 ± 0.81) ratings by those listeners. Subjects in the present study evaluated the familiarity, pleasantness, and arousal of each piece on five-point Likert scales as they listened, repeating each piece until they were satisfied with their ratings. They then chose their three favorite pieces and three pieces about which they felt completely neutral from the list. Using their ratings (i.e., selecting pieces with similar affective ratings) and excluding any songs they explicitly recognized, we chose one of their favorite pieces to be their pleasurable music and one of their neutral pieces to be their neutral music during the experiment. We also ensured that each piece was used both as pleasurable music and as neutral music; for the 73 subjects, each piece served as pleasurable music an average of 5.21 ± 2.89 times, and as neutral music 5.21 ± 2.49 times. This way, each subject's pleasurable music was another subject's neutral music and vice-versa. We compared the pleasurable music and neutral music ratings with paired-samples
<italic>t</italic>
-tests and found that, in spite of our attempt to match the affective ratings of the pleasurable and neutral pieces of music, subjects rated their pleasurable music higher in each category (familiarity, pleasantness, and arousal; all
<italic>p</italic>
s < 0.05; Table
<xref ref-type="table" rid="T3">3</xref>
). In addition, independent samples
<italic>t</italic>
-tests revealed that non-musicians rated the pleasantness of their pleasurable music higher (mean rating = 4.71 ± 0.46) than musicians did (mean rating = 4.39 ± 0.58;
<italic>t</italic>
<sub>(49)</sub>
= −2.21,
<italic>p</italic>
< 0.05). We accounted for these differences by using the listening test ratings as covariates in repeated-measures analyses of covariance (ANCOVAs). This way, any main effects or interactions we observed with respect to musical conditions reflected subjective differences of music enjoyment and not familiarity, pleasantness, or arousal. We also performed multiple linear regression analyses with the ratings as regressors to examine the influence these ratings had on task performance.</p>
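<p>The rating comparisons above amount to paired-samples t-tests within subjects and independent-samples t-tests between groups; a minimal SciPy sketch, with placeholder Likert ratings rather than the reported data, is:</p>
<preformat>
import numpy as np
from scipy import stats

# Hypothetical per-subject pleasantness ratings (1-5 Likert)
pleasurable = np.array([5, 4, 5, 4, 5, 4, 5])
neutral     = np.array([3, 3, 2, 3, 4, 3, 3])

t_stat, p_val = stats.ttest_rel(pleasurable, neutral)     # paired comparison
print(f"paired t = {t_stat:.2f}, p = {p_val:.4f}")

# Hypothetical between-group comparison (non-musicians vs. musicians)
non_musicians = np.array([5, 5, 4, 5, 5])
musicians     = np.array([4, 5, 4, 4, 4])
t_stat, p_val = stats.ttest_ind(non_musicians, musicians)
print(f"independent t = {t_stat:.2f}, p = {p_val:.4f}")
</preformat>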
<table-wrap id="T3" position="float">
<label>Table 3</label>
<caption>
<p>
<bold>Listening test results</bold>
.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" rowspan="1" colspan="1">
<bold>
<italic>T</italic>
-test</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>Pleasurable music mean rating ± standard deviation</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>Neutral music mean rating ± standard deviation</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>T
<sub>(72)</sub>
</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>
<italic>P</italic>
-value</bold>
</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">Familiarity</td>
<td align="left" rowspan="1" colspan="1">2.30 ± 1.27</td>
<td align="left" rowspan="1" colspan="1">1.95 ± 1.09</td>
<td align="left" rowspan="1" colspan="1">4.05</td>
<td align="left" rowspan="1" colspan="1">
<italic>p</italic>
= 0.0001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Pleasantness</td>
<td align="left" rowspan="1" colspan="1">4.53 ± 0.58</td>
<td align="left" rowspan="1" colspan="1">2.96 ± 0.79</td>
<td align="left" rowspan="1" colspan="1">15.28</td>
<td align="left" rowspan="1" colspan="1">
<italic>p</italic>
< 0.0001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Arousal</td>
<td align="left" rowspan="1" colspan="1">2.95 ± 1.21</td>
<td align="left" rowspan="1" colspan="1">2.56 ± 0.88</td>
<td align="left" rowspan="1" colspan="1">2.59</td>
<td align="left" rowspan="1" colspan="1">
<italic>p</italic>
< 0.05</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec>
<title>Probabilistic selection paradigm</title>
<p>The probabilistic selection (PS) task (Figure
<xref ref-type="fig" rid="F2">2</xref>
), adapted from Frank and colleagues (
<xref ref-type="bibr" rid="B30">2004</xref>
), took place at the University of Helsinki. Subjects sat in a soundproof room approximately one meter from a computer monitor while the experiment was delivered using Presentation software (Neurobehavioral Systems, Ltd.). The pre-selected musical pieces played binaurally through headphones at a comfortable intensity, and each piece looped to ensure that music played throughout the entire durations of the training and test phases of the task. The visual stimuli were Japanese Hiragana characters.</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption>
<p>
<bold>Probabilistic selection (PS) task. (A)</bold>
Each trial in the PS task began with a jittered fixation cross followed by a pair of stimuli for 2500 ms. Following a left or right button response, the selected image appeared highlighted on the screen for the duration of the 2500 ms presentation. Choices during training then received probabilistic feedback, whereas those during testing were followed by the fixation cross marking the next trial.
<bold>(B)</bold>
In the training phase, participants learned to choose between three discrete pairs of Japanese Hiragana characters with different reward contingencies. Each pair had a better and worse choice, but the relative weights of these values changed. The reward probabilities of each stimulus are shown in parentheses.
<bold>(C)</bold>
In the test phase, participants generalized their knowledge of the training pairs to recombined stimulus pairs. There was no feedback in this phase. Learning to choose A over B during training could reflect approach learning, avoidance learning, or both, and so we assessed overall test performance as well as the accuracy of
<bold>(A)</bold>
choices and
<bold>(B)</bold>
avoidances when these stimuli appeared in novel pairs during testing.</p>
</caption>
<graphic xlink:href="fpsyg-04-00541-g0002"></graphic>
</fig>
<p>This task had two phases. In the training phase, three different image pairs appeared on the screen in random order. Each image had a different probabilistic chance of reward. The pairs (and their reward contingencies) were termed AB (image A had an 80% probability of reward and image B had a 20% probability), CD (70 and 30%), and EF (60 and 40%). The image pair appeared for 2500 ms after a jittered fixation cross of 500, 750, or 1000 ms, and presentations were counterbalanced so that each image occurred just as often on each side of the screen. Although the subjects had no prior information about the stimuli (screening ensured no experience with the Japanese language), they were instructed to choose between them with the left or right button on a button box. If the subject failed to respond within the allotted time window, white text reading “No Response” appeared on the screen. Otherwise, upon the event of a button press, a white rectangle appeared around the selected image for the remainder of the 2500-ms stimulus duration. Subjects then received either “correct” (a green smiley face) or “incorrect” (a red sad face) feedback for 400 ms based on a random draw of the images' inherent reward contingencies.</p>
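<p>In other words, the feedback on each training trial is a random draw on the reward probability of the chosen stimulus. The following sketch illustrates one training block under that description; the stimulus labels and probabilities are those of the task, while the function name and the random-choice policy are placeholders.</p>
<preformat>
import random

REWARD_PROB = {"A": 0.80, "B": 0.20,
               "C": 0.70, "D": 0.30,
               "E": 0.60, "F": 0.40}
TRAINING_PAIRS = [("A", "B"), ("C", "D"), ("E", "F")]

def training_feedback(choice, rng):
    """Return 'correct' or 'incorrect' from a draw on the chosen
    stimulus's reward contingency."""
    return "correct" if rng.random() < REWARD_PROB[choice] else "incorrect"

rng = random.Random(1)
block = TRAINING_PAIRS * 18          # 54 trials, each pair 18 times
rng.shuffle(block)
feedback = [training_feedback(rng.choice(pair), rng) for pair in block]
print(feedback[:6])
</preformat>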
<p>Before starting the training phase, subjects acclimated themselves to the paradigm, the music volume, and the button box with eight practice trials. These were identical to the training trials, except they used different Hiragana characters and reward certainties (100 or 0%) instead of probabilities. The four images in the practice session each appeared twice, once on each side of the screen, in discrete pairings termed WX and YZ. When the practice session finished, the experimenter ensured that the subject understood the task and that the music intensity was comfortable, and offered to answer any questions about the paradigm. The training phase began when the participant was ready. Training was divided into three blocks of 54 stimulus pairs each with participant-paced rest breaks in between. With this design, subjects encountered each stimulus pair 18 times in each training block.</p>
<p>Learning to choose A over B involves learning that choosing A results in positive feedback (approach or “Go” learning), that choosing B results in negative feedback (avoidance or “NoGo” learning), or both. The test phase of this task thus assessed the extent to which participants had learned about the positive and negative outcomes of their choices and were able to transfer or generalize this knowledge. The stimuli from the training phase were recombined such that all 15 possible pairings occurred during the test. The test consisted of 90 trials without feedback, with all image pairs occurring six times (three times in each order).</p>
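<p>Overall test accuracy and the Choose A and Avoid B measures can then be scored directly from a trial log. The sketch below assumes each test trial is recorded as a (pair, chosen stimulus) tuple and defines accuracy as choosing the stimulus with the higher training reward probability; the data are placeholders.</p>
<preformat>
REWARD_PROB = {"A": 0.80, "B": 0.20, "C": 0.70,
               "D": 0.30, "E": 0.60, "F": 0.40}

def score_test(trials):
    """Score overall, Choose-A, and Avoid-B accuracy from (pair, choice) tuples."""
    def accurate(pair, choice):
        # Accurate = chose the stimulus with the higher reward probability
        return choice == max(pair, key=REWARD_PROB.get)

    overall  = [accurate(p, c) for p, c in trials]
    choose_a = [accurate(p, c) for p, c in trials if "A" in p and "B" not in p]
    avoid_b  = [accurate(p, c) for p, c in trials if "B" in p and "A" not in p]
    mean = lambda xs: sum(xs) / len(xs) if xs else float("nan")
    return {"overall": mean(overall),
            "choose_A": mean(choose_a),
            "avoid_B": mean(avoid_b)}

# Hypothetical mini log
log = [(("A", "C"), "A"), (("A", "F"), "F"),
       (("B", "D"), "D"), (("B", "E"), "B")]
print(score_test(log))
</preformat>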
</sec>
<sec>
<title>Helsinki inventory of music and affective behaviors (HIMAB)</title>
<p>Subjects completed the HIMAB (Table
<xref ref-type="table" rid="TA2">A2</xref>
in Appendix) either before or after the PS task. Although most subjects did this at home, the time and location of HIMAB administration depended on the subject's availability. Since this inventory reflects previous musical experiences and typical listening patterns, we do not suspect that answers were affected by different response contexts. In addition, the experimenter was available for questions even when the inventory was done at home and before the subjects submitted their responses.</p>
<p>The first component of the HIMAB assesses musical experience with questions regarding the intensity, regularity, duration, and time since any musical training, professional musical experience, or working toward a musical degree. We used these questions to derive the variable “Playing Years,” a measure of how many years each subject has played music (including singing). A question on the frequency of music listening represented the “Weekly Listening Hours” variable, which measured any and all kinds of music listening behavior in a typical week. The rest of the inventory corresponds to continuous variables for covariance and regression analyses. Several of these variables came from three pre-existing scales. The Music Consumption Scale (“music consumption”) quantifies how much live music the subject hears and purchases/downloads on a regular basis (Chamorro-Premuzic et al.,
<xref ref-type="bibr" rid="B15">2012</xref>
). The Uses of Music Inventory (UMI; Chamorro-Premuzic and Furnham,
<xref ref-type="bibr" rid="B14">2007</xref>
) assesses the extent to which the subject uses music for emotional, cognitive, and social/background purposes (“emotional use of music,” “cognitive use of music,” and “background use of music”). The Music-Directed Attention Scale (MDAS; Kantor-Martynuska and Fajkowska, in preparation) measures the subject's tendency to have music divert attention from tasks of primary focus (“music distractibility”) and the extent of the subject's engagement in music when it is the primary focus (“music engagement”).</p>
<p>“Music importance,” “active listening,” and “passive listening” were novel variables in the HIMAB. For “music importance,” subjects rated on a seven-point Likert scale (from “Not at all important” to “Very important”) how important music is in their daily lives. Whereas people may listen to or consume music to various extents and for various reasons, “music importance” describes how significant music is on a personal and daily basis and distinguishes, for example, someone who just happens to hear their coworkers' music every day from someone who would miss it if it were absent. For “active listening,” we asked subjects to rate on a seven-point Likert scale how often they listen to music without doing anything else. This variable quantifies the amount of time subjects devote to focused music listening regardless of how important or engaging they might find it. Finally, “passive listening” complements “active listening” by quantifying on the same seven-point Likert scale the amount of time that subjects listen to music while engaged in another activity. Responses throughout the inventory were binary choice, written, or five- or seven-point Likert scales with elaboration available for most questions. Taken together, these variables aimed to comprehensively describe the typical music listening practices of our subjects.</p>
</sec>
<sec>
<title>Statistical tests</title>
<p>We analyzed performance in the PS task with repeated-measures ANCOVAs using accuracy and correct-trial reaction times as dependent variables. We defined accuracy as the proportion of trials in which the subject chose the image with the higher probability of reward, and reaction times as the amount of time between the stimulus onset and the subject's first button press. The individual factors from the HIMAB (music importance, music consumption, emotional use of music, cognitive use of music, background use of music, music distractibility, music engagement, active listening, and passive listening) and the subjective ratings from the listening test (the familiarity, pleasantness, and arousal of the pleasurable and neutral music, treated as six separate variables) served as covariates. Musical Condition (pleasurable and neutral) was a between-subjects factor for both phases, and the musical experience variables Playing Years and Weekly Listening Hours were covariates of interest modeled over the whole sample and then individually for each Musical Condition. In this way, we studied musical experience with continuous variables in order to avoid the problematic classification of musicianship according to arbitrary definitions and the limited statistical power of tests conducted on small musicianship groups. Nonetheless, distinguishing subjects according to their musical backgrounds can simplify and clarify the effects of musical experience, and so we used musicianship categories for these purposes only.</p>
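<p>In spirit, each of these models regresses a performance measure on Musical Condition and the continuous covariates. The following is a minimal between-subjects ANCOVA sketch with statsmodels; the full analyses were repeated-measures ANCOVAs, which would additionally require the within-subjects factors, and the data and column names here are hypothetical.</p>
<preformat>
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical per-subject summaries
df = pd.DataFrame({
    "accuracy":          [0.71, 0.64, 0.80, 0.58, 0.75, 0.69, 0.77, 0.62],
    "musical_condition": ["P", "N", "P", "N", "P", "N", "P", "N"],
    "playing_years":     [12, 0, 20, 3, 8, 1, 15, 5],
    "weekly_listening":  [10, 25, 5, 40, 12, 30, 8, 20],
})

# Condition effect with continuous covariates, including the
# Playing Years x Musical Condition interaction of interest
model = smf.ols("accuracy ~ C(musical_condition) * playing_years"
                " + weekly_listening", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
</preformat>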
<p>For the training phase, we investigated the process of learning by using Training Block (Block 1, Block 2, and Block 3) and Image Pair (AB, CD, and EF) as within-subjects factors. For the test phase, we defined approach learning as the accurate selection of the most rewarded stimulus, A, whenever it was presented as part of a novel pair (AC, AD, AE, and AF, or “Choose A”) and avoidance learning as the accurate selection of stimuli other than the most frequently punished stimulus, B, whenever it was presented as part of a novel pair (BC, BD, BE, and BF, or “Avoid B”). These measures have repeatedly exhibited differential sensitivities to dopaminergic manipulations (Frank et al.,
<xref ref-type="bibr" rid="B30">2004</xref>
,
<xref ref-type="bibr" rid="B28">2007a</xref>
,
<xref ref-type="bibr" rid="B29">b</xref>
; Jocham et al.,
<xref ref-type="bibr" rid="B37">2011</xref>
), and so we analyzed accuracy and correct-trial reaction times both for the test phase as a whole and for Choose A/Avoid B conditions in particular. We also investigated the effects of switching and keeping musical conditions between the training and test phases. Hence, Test Condition (Choose A and Avoid B) was a within-subjects factor and Group (NP, PN, NN, and PP) was a between-subjects factor for the test. Finally, we performed planned contrasts using least significant difference tests with Tukey corrections for multiple comparisons and
<italic>post-hoc</italic>
pairwise comparisons on each significant ANCOVA.</p>
<p>Early behavior in the PS task often includes reacting to the last reward or punishment for a certain stimulus pair by explicitly remembering the event and either seeking it again (“win-stay”) or trying to avoid it (“lose-switch”) the next time it appears (Frank et al.,
<xref ref-type="bibr" rid="B28">2007a</xref>
). This process involves storing previous behaviors and their outcomes in working memory while learning about intervening trials with other stimuli. Although this strategy can be helpful at first, it ultimately proves ineffective due to the probabilistic nature of the task. As such, most subjects abandon it early in training (Frank et al.,
<xref ref-type="bibr" rid="B28">2007a</xref>
). Even so, working memory recruitment could account for differences in task performance, and so we analyzed the frequency of “win-stay” and “lose-switch” choices in the first third of the first training block (18 trials), during which each image pair appeared approximately three times in each order. For this ANCOVA, win-stay/lose-switch frequency was the dependent variable and Musical Condition (pleasurable and neutral) was a between-subjects factor while Playing Years and Weekly Listening Hours were covariates of interest. We also measured baseline performance levels during these trials with ANCOVAs for which accuracy and reaction times were dependent variables and the aforementioned HIMAB variables (music importance, music consumption, emotional use of music, cognitive use of music, background use of music, music distractibility, music engagement, active listening, and passive listening) and the subjective ratings from the listening test (the familiarity, pleasantness, and arousal of the pleasurable and neutral music) were covariates. Playing Years and Weekly Listening Hours were covariates of interest, and we conducted separate
<italic>post-hoc</italic>
models within each Musical Condition of any significant musical experience-mediated Musical Condition effects. We further explored the relationships between individual musical experiences and PS task performance with multiple linear regression analyses on accuracy and correct-trial reaction times in the training and test phases. The HIMAB variables and listening test ratings served as regressors.</p>
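<p>Win-stay/lose-switch frequency can be scored by checking, for each repetition of a stimulus pair, whether the subject repeated a previously rewarded choice or switched away from a previously punished one. The sketch below assumes each trial is logged as a (pair, choice, feedback) tuple; the log shown is hypothetical.</p>
<preformat>
def win_stay_lose_switch_rate(trials):
    """Proportion of repeated-pair trials on which the subject repeated a
    rewarded choice (win-stay) or switched from a punished one (lose-switch).
    `trials` is a list of (pair, choice, feedback) tuples in order."""
    last = {}                       # most recent (choice, feedback) per pair
    hits, total = 0, 0
    for pair, choice, feedback in trials:
        key = frozenset(pair)
        if key in last:
            prev_choice, prev_feedback = last[key]
            total += 1
            if prev_feedback == "correct" and choice == prev_choice:
                hits += 1           # win-stay
            elif prev_feedback == "incorrect" and choice != prev_choice:
                hits += 1           # lose-switch
        last[key] = (choice, feedback)
    return hits / total if total else float("nan")

log = [(("A", "B"), "A", "correct"),
       (("A", "B"), "A", "incorrect"),
       (("A", "B"), "B", "incorrect")]
print(win_stay_lose_switch_rate(log))   # 1.0: one win-stay, then one lose-switch
</preformat>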
</sec>
</sec>
<sec sec-type="results" id="s3">
<title>Results</title>
<sec>
<title>General learning in the probabilistic selection paradigm</title>
<p>Seventy-three subjects learned the task significantly above chance as demonstrated by their performance in the third training block [mean accuracy = 77.25 ± 11.81%, single sample
<italic>t</italic>
-test
<italic>t</italic>
<sub>(72)</sub>
= 5.24,
<italic>p</italic>
< 0.0001]. Significant main effects of Training Block [
<italic>F</italic>
<sub>(2,88)</sub>
= 40.55,
<italic>p</italic>
< 0.0001] and Image Pair [
<italic>F</italic>
<sub>(2, 142)</sub>
= 5.25,
<italic>p</italic>
< 0.01] confirmed that subjects were more accurate in later blocks and with easier (e.g., 80%/20% vs. 60%/40%) pairs. Learning was also evident from reaction times, with subjects responding significantly faster in later training blocks [
<italic>F</italic>
<sub>(2, 88)</sub>
= 57.73,
<italic>p</italic>
< 0.0001] and with easier pairs [
<italic>F</italic>
<sub>(2, 142)</sub>
= 11.69,
<italic>p</italic>
< 0.0001]. Figure
<xref ref-type="fig" rid="F3">3</xref>
illustrates overall performance on the PS task.</p>
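<p>The learning criterion corresponds to a one-sample t-test of late-training accuracy against chance; a minimal sketch with placeholder accuracies (the 50% chance level is our assumption for two-alternative choices) is:</p>
<preformat>
import numpy as np
from scipy import stats

# Hypothetical third-block accuracies (proportion correct per subject)
block3_accuracy = np.array([0.78, 0.65, 0.83, 0.71, 0.90, 0.62, 0.80])

t_stat, p_val = stats.ttest_1samp(block3_accuracy, popmean=0.5)
print(f"t = {t_stat:.2f}, p = {p_val:.5f}")
</preformat>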
<fig id="F3" position="float">
<label>Figure 3</label>
<caption>
<p>
<bold>Probabilistic selection task performance summary.</bold>
Box plots show the quartiles (75th percentile, median, and 25th percentile); whiskers show the full range of the data, with no outliers.
<bold>(A)</bold>
Overall accuracy in training and testing for all subjects.
<bold>(B)</bold>
Overall reaction times in training and testing for all subjects.</p>
</caption>
<graphic xlink:href="fpsyg-04-00541-g0003"></graphic>
</fig>
<p>In the test phase, a pairwise comparison (adjusted
<italic>p</italic>
< 0.05) on a main effect of Test Condition [
<italic>F</italic>
<sub>(1, 69)</sub>
= 5.94,
<italic>p</italic>
< 0.05] showed that subjects were significantly more accurate at avoiding B (mean = 68.46 ± 21.06%) than choosing A (mean = 60.02 ± 23.72%). All other test effects varied according to Playing Years, Weekly Listening Hours, Musical Condition, and/or Group, and are thus reported below.</p>
</sec>
<sec>
<title>Effects of musical pleasure on reinforcement learning</title>
<p>The music that subjects listened to during the training phase of the PS task did not significantly affect working memory recruitment as measured by win-stay/lose-switch behavior at the beginning of the phase (
<italic>p</italic>
> 0.69). Nonetheless, the musical manipulation shaped training performance considerably. Musical Condition did not have an immediate effect on accuracy at the beginning of the training phase (
<italic>p</italic>
> 0.65), but it did influence accuracy throughout training as a whole [
<italic>F</italic>
<sub>(1, 18)</sub>
= 7.71,
<italic>p</italic>
= 0.01]. This result suggests that subjects were more accurate when listening to pleasurable music (mean = 70.24 ± 25.75%) than neutral music (mean = 69.94 ± 25.07%), but a planned comparison of this effect was not significant. Response speed, by contrast, varied with the music heard even at the beginning of training, with a significant main effect of Musical Condition on initial training reaction times [
<italic>F</italic>
<sub>(1, 18)</sub>
= 8.20,
<italic>p</italic>
= 0.01]. A planned comparison for this effect also failed to reach significance. However, a significant main effect of Musical Condition on reaction times throughout training [
<italic>F</italic>
<sub>(1, 18)</sub>
= 19.53,
<italic>p</italic>
< 0.0005] showed that subjects listening to the music they rated as pleasurable responded faster (mean = 1158 ± 340 ms) than those listening to the music they rated as neutral (mean = 1198 ± 333 ms; Tukey-Kramer adjusted
<italic>p</italic>
= 0.01).</p>
<p>There was also a trend main effect of Musical Condition on test reaction times [
<italic>F</italic>
<sub>(1, 21)</sub>
= 3.43,
<italic>p</italic>
= 0.08] suggesting that subjects also responded faster during the test when they listened to pleasurable music (mean = 1149 ± 249 ms) compared to neutral music (mean = 1195 ± 274 ms), but a planned contrast of this effect was not significant. Planned comparisons on a significant Test Condition by Group interaction on Choose A/Avoid B accuracy [
<italic>F</italic>
<sub>(3, 69)</sub>
= 3.09,
<italic>p</italic>
< 0.05; Figure
<xref ref-type="fig" rid="F4">4</xref>
] showed that the groups were equally adept at Choosing A (NN mean = 63.73 ± 21.07%, NP mean = 58.77 ± 26.86%, PN mean = 58.56 ± 22.02%, PP mean = 58.82 ± 26.14%, all adjusted
<italic>p</italic>
s > 0.99), but differed in Avoid B accuracy (NN mean = 54.68 ± 25.20%, NP mean = 76.75 ± 17.25%, PN mean = 73.38 ± 19.91%, PP mean = 69.36 ± 13.81%). Specifically, pairwise comparisons of Avoid B accuracy revealed that the NN group performed significantly worse than both the NP (adjusted
<italic>p</italic>
< 0.005) and the PN (adjusted
<italic>p</italic>
< 0.05) groups in tests of avoidance learning.</p>
<fig id="F4" position="float">
<label>Figure 4</label>
<caption>
<p>
<bold>Test Condition by Group interaction on test accuracy.</bold>
There was a significant Test Condition by Group interaction (
<italic>p</italic>
< 0.05). Subjects did not differ in approach (Choose A) accuracy during the test, but subjects who listened to neutral music during both training and testing (NN) avoided B less accurately than those who listened to neutral music during training and pleasurable music during testing (NP; adjusted
<italic>p</italic>
< 0.005) and those who listened to pleasurable music during training and neutral music during testing (PN; adjusted
<italic>p</italic>
< 0.05). Bars depict the mean accuracy for each Group in Choose A and Avoid B conditions, plus or minus the standard error of the mean. PP, subjects who listened to pleasurable music during both training and testing.
<sup>*</sup>
<italic>p</italic>
< 0.05;
<sup>**</sup>
<italic>p</italic>
< 0.005.</p>
</caption>
<graphic xlink:href="fpsyg-04-00541-g0004"></graphic>
</fig>
</sec>
<sec>
<title>Effects of musical backgrounds on music-mediated reinforcement learning</title>
<p>Repeated-measures ANCOVAs with planned comparisons demonstrated several instances in which musical experience modulated the effects of musical pleasure on reinforcement learning. Although musical experiences did not significantly affect accuracy in the beginning of training (all
<italic>p</italic>
s > 0.17), there was a significant interaction between Playing Years and Musical Condition on accuracy throughout training [
<italic>F</italic>
<sub>(10, 18)</sub>
= 5.91,
<italic>p</italic>
). Looking at pleasurable and neutral music separately, we found that accuracy correlated negatively with Playing Years at a trend level for pleasurable music (β = −0.08,
<italic>p</italic>
= 0.07) and significantly and positively with Playing Years for neutral music (β = 0.08,
<italic>p</italic>
< 0.05). As such, subjects with more musical experience were generally less accurate when they listened to pleasurable music and more accurate when they listened to neutral music (Figure
<xref ref-type="fig" rid="F5">5</xref>
). A significant interaction between Weekly Listening Hours and Musical Condition on training accuracy [
<italic>F</italic>
<sub>(12, 10)</sub>
= 4.03,
<italic>p</italic>
< 0.05] did not have significant correlations within the separate Musical Conditions (all
<italic>p</italic>
s > 0.12).</p>
<fig id="F5" position="float">
<label>Figure 5</label>
<caption>
<p>
<bold>Playing Years by Musical Condition interaction on training accuracy.</bold>
There was a significant Playing Years by Musical Condition interaction on training accuracy (
<italic>p</italic>
< 0.001). Subjects with more years of musical experience were significantly more accurate when they listened to neutral music (
<italic>p</italic>
< 0.05), and there was a trend effect of more musically experienced subjects performing less accurately with pleasurable music (
<italic>p</italic>
= 0.07).
<sup>+</sup>
<italic>p</italic>
< 0.10;
<sup>*</sup>
<italic>p</italic>
< 0.05.</p>
</caption>
<graphic xlink:href="fpsyg-04-00541-g0005"></graphic>
</fig>
<p>The effects of Musical Condition were also modulated by musical experience in terms of reaction times. Already in the first 18 training trials, there was a significant Playing Years by Musical Condition interaction on reaction times [
<italic>F</italic>
<sub>(10, 18)</sub>
= 6.31,
<italic>p</italic>
< 0.0005]. Pairwise comparisons on this interaction did not reach significance, but a similar significant interaction between Playing Years and Musical Condition on training reaction times [
<italic>F</italic>
<sub>(10, 18)</sub>
= 15.92,
<italic>p</italic>
< 0.0001; Figure
<xref ref-type="fig" rid="F6">6</xref>
] revealed that subjects with more musical experience responded faster during neutral music listening (β = −0.19,
<italic>p</italic>
< 0.0001). There was no significant correlation within pleasurable music listening (
<italic>p</italic>
> 0.95).
<italic>Post-hoc</italic>
analyses of a significant interaction between Weekly Listening Hours and Musical Condition on training reaction times [
<italic>F</italic>
<sub>(12, 10)</sub>
= 15.21,
<italic>p</italic>
< 0.0001] failed to yield any significant correlations (all
<italic>p</italic>
s > 0.21).</p>
<fig id="F6" position="float">
<label>Figure 6</label>
<caption>
<p>
<bold>Playing Years by Musical Condition interaction on training reaction times.</bold>
There was a significant Playing Years by Musical Condition interaction on training reaction times (
<italic>p</italic>
< 0.0001). Within neutral music listening, more musically experienced subjects exhibited faster reaction times (
<italic>p</italic>
< 0.0001). There was no significant correlation within pleasurable music listening (
<italic>p</italic>
> 0.95). N.S., not significant;
<sup>****</sup>
:
<italic>p</italic>
< 0.0001.</p>
</caption>
<graphic xlink:href="fpsyg-04-00541-g0006"></graphic>
</fig>
<p>There were no musical experience by Musical Condition interactions regarding test accuracy (all
<italic>p</italic>
s > 0.10), but test reaction times exhibited many such effects. A significant Playing Years by Musical Condition interaction [
<italic>F</italic>
<sub>(7, 21)</sub>
= 3.25,
<italic>p</italic>
< 0.05] did not yield any significant correlations when we examined pleasurable and neutral music separately, but a related significant Playing Years by Group interaction [
<italic>F</italic>
<sub>(17, 9)</sub>
= 5.15,
<italic>p</italic>
< 0.01], examined
<italic>post-hoc</italic>
within each group, demonstrated a significant positive correlation between Playing Years and test reaction times within the NN group (β = 1.26,
<italic>p</italic>
< 0.001). There were no other significant correlations for this interaction (all
<italic>p</italic>
s > 0.29), suggesting that both this effect and the aforementioned Playing Years by Musical Condition interaction were driven by more musically experienced subjects responding slower when they listened to neutral music during both training and testing. There was a significant interaction between Weekly Listening Hours and Musical Condition [
<italic>F</italic>
<sub>(10, 12)</sub>
= 2.83,
<italic>p</italic>
< 0.05], for which a negative correlation between Weekly Listening Hours and test reaction times within pleasurable music listening was significant at a trend level (β = −0.36,
<italic>p</italic>
= 0.08), but there was no significant relationship within neutral music listening (
<italic>p</italic>
> 0.92). Exploration of a significant Weekly Listening Hours by Group interaction [
<italic>F</italic>
<sub>(20, 16)</sub>
= 10.62,
<italic>p</italic>
< 0.0001; Figure
<xref ref-type="fig" rid="F7">7</xref>
] helped elucidate this effect, exhibiting a significant negative correlation between Weekly Listening Hours and test reaction times within the NP group (β = −0.69,
<italic>p</italic>
< 0.01), a trend positive correlation within the NN group (β = 0.48,
<italic>p</italic>
= 0.08), and no other significant relationships (all
<italic>p</italic>
s > 0.37). Together, these findings indicate that subjects who listen to music more frequently were likely to respond faster if they heard pleasurable music during testing (especially if they had already heard neutral music during training) and slower if they heard neutral music during both training and testing. In other words, more avid music listeners were generally fastest during the test if they were in the NP group, and slowest if they were in the NN group.</p>
<fig id="F7" position="float">
<label>Figure 7</label>
<caption>
<p>
<bold>Weekly Listening Hours by Group interaction on test reaction times.</bold>
There was a significant Weekly Listening Hours by Group interaction on test reaction times (
<italic>p</italic>
< 0.0001). Subjects who listened to music more frequently responded faster when they trained with neutral music and tested with pleasurable music (NP;
<italic>p</italic>
< 0.01). There was also a trend correlation such that these subjects responded slower when they listened to neutral music during both training and testing (NN,
<italic>p</italic>
= 0.08). No other within-group correlations were significant (all
<italic>p</italic>
s > 0.37). PN: subjects who listened to pleasurable music during training and neutral music during testing; PP: subjects who listened to pleasurable music during both training and testing. N.S., not significant;
<sup>+</sup>
<italic>p</italic>
< 0.10;
<sup>**</sup>
<italic>p</italic>
< 0.01.</p>
</caption>
<graphic xlink:href="fpsyg-04-00541-g0007"></graphic>
</fig>
<p>As a whole, these reaction time and accuracy effects on training and testing demonstrate that subjects with more music playing and/or listening experience learned better with neutral music but tested better with pleasurable music, with test performance more affected by the music heard during test than that heard during training (NP > PP > PN > NN). From another perspective, subjects with less musical experience performed better when they learned with pleasurable music and tested with neutral music (NN > PN > PP > NP). As such, the NP group was best suited for more musically experienced subjects, and the NN group was best suited for the less musically experienced.</p>
</sec>
<sec>
<title>Helsinki inventory of music and affective behaviors covariates</title>
<p>Covariates in the repeated-measures ANCOVAs accounted for individual musical experience and listening differences between the subjects (pooled together) by acting as continuous variables in each analysis. They thus improved the power of the models by removing extraneous influences on variances in accuracy and reaction times. In training, higher accuracies covaried with higher scores of music consumption [
<italic>F</italic>
<sub>(1, 18)</sub>
= 16.60,
<italic>p</italic>
< 0.001], emotional use of music [
<italic>F</italic>
<sub>(1, 18)</sub>
= 3.67,
<italic>p</italic>
= 0.001], music engagement [
<italic>F</italic>
<sub>(1, 18)</sub>
= 6.78,
<italic>p</italic>
< 0.0005], and pleasurable music arousal ratings [
<italic>F</italic>
<sub>(1, 18)</sub>
= 12.87,
<italic>p</italic>
< 0.005]. Music importance [
<italic>F</italic>
<sub>(1, 18)</sub>
= 12.65,
<italic>p</italic>
< 0.005], background use of music [
<italic>F</italic>
<sub>(1, 18)</sub>
= 7.16,
<italic>p</italic>
< 0.05], cognitive use of music [
<italic>F</italic>
<sub>(1, 18)</sub>
= 5.12,
<italic>p</italic>
< 0.05], music distractibility [
<italic>F</italic>
<sub>(1, 18)</sub>
= 22.56,
<italic>p</italic>
< 0.0005], and passive listening [
<italic>F</italic>
<sub>(1, 18)</sub>
= 10.83,
<italic>p</italic>
< 0.005], on the other hand, were negatively related to training accuracy (Figure
<xref ref-type="fig" rid="F8">8A</xref>
).</p>
<fig id="F8" position="float">
<label>Figure 8</label>
<caption>
<p>
<bold>Covariate relationships on training and test accuracy and reaction times.</bold>
Factors from the Helsinki Inventory of Music and Affective Behaviors (HIMAB) and the listening test significantly covaried with probabilistic selection task performance.
<bold>(A)</bold>
Training accuracy.
<bold>(B)</bold>
Training reaction times.
<bold>(C)</bold>
Test accuracy.
<bold>(D)</bold>
Test reaction times. Numerical values show the slopes of the covariations, and colors represent the directions and significance levels of the effects.</p>
</caption>
<graphic xlink:href="fpsyg-04-00541-g0008"></graphic>
</fig>
<p>Slower training reaction times (Figure
<xref ref-type="fig" rid="F8">8B</xref>
) covaried with higher scores of music importance [
<italic>F</italic>
<sub>(1, 18)</sub>
= 128.98,
<italic>p</italic>
< 0.0001], cognitive use of music [
<italic>F</italic>
<sub>(1, 18)</sub>
= 24.15,
<italic>p</italic>
= 0.0001], music distractibility [
<italic>F</italic>
<sub>(1, 18)</sub>
= 26.14,
<italic>p</italic>
< 0.0001], neutral music familiarity ratings [
<italic>F</italic>
<sub>(1, 18)</sub>
= 30.50,
<italic>p</italic>
< 0.0001], and neutral music arousal ratings [
<italic>F</italic>
<sub>(1, 18)</sub>
= 76.55,
<italic>p</italic>
< 0.0001]. Training reaction times tended to accelerate as playing years [
<italic>F</italic>
<sub>(1,10)</sub>
= 42.85,
<italic>p</italic>
< 0.0001], music consumption [
<italic>F</italic>
<sub>(1, 18)</sub>
= 108.21,
<italic>p</italic>
< 0.0001], active listening [
<italic>F</italic>
<sub>(1, 18)</sub>
= 67.16,
<italic>p</italic>
< 0.0001], pleasurable music familiarity ratings [
<italic>F</italic>
<sub>(1, 18)</sub>
= 33.57,
<italic>p</italic>
< 0.0001], pleasurable music arousal ratings [
<italic>F</italic>
<sub>(1, 18)</sub>
= 39.77,
<italic>p</italic>
< 0.0001], and neutral music pleasantness ratings [
<italic>F</italic>
<sub>(1, 18)</sub>
= 45.28,
<italic>p</italic>
< 0.0001] increased.</p>
<p>In the test phase, accuracy (Figure
<xref ref-type="fig" rid="F8">8C</xref>
) was positively related to neutral music familiarity ratings [
<italic>F</italic>
<sub>(1, 21)</sub>
= 7.65,
<italic>p</italic>
= 0.01] and negatively related to background use of music [
<italic>F</italic>
<sub>(1, 21)</sub>
= 6.73,
<italic>p</italic>
< 0.05] and pleasurable music familiarity ratings [
<italic>F</italic>
<sub>(1, 21)</sub>
= 7.26,
<italic>p</italic>
= 0.01]. Test reaction times (Figure
<xref ref-type="fig" rid="F8">8D</xref>
) were generally faster when music consumption [
<italic>F</italic>
<sub>(1, 21)</sub>
= 4.67,
<italic>p</italic>
< 0.05], active listening [
<italic>F</italic>
<sub>(1, 21)</sub>
= 14.98,
<italic>p</italic>
< 0.001], and pleasurable music pleasantness ratings [
<italic>F</italic>
<sub>(1, 21)</sub>
= 2.17,
<italic>p</italic>
< 0.05] increased. Subjects with higher music importance [
<italic>F</italic>
<sub>(1, 21)</sub>
= 5.62,
<italic>p</italic>
< 0.05], background use of music [
<italic>F</italic>
<sub>(1, 21)</sub>
= 4.60,
<italic>p</italic>
< 0.05], and neutral music familiarity ratings [
<italic>F</italic>
<sub>(1, 21)</sub>
= 7.36,
<italic>p</italic>
= 0.01], alternatively, tended to respond slower during the test.</p>
</sec>
<sec>
<title>Helsinki inventory of music and affective behaviors regressors</title>
<p>We performed multiple linear regression analyses on accuracy and reaction times to further explore the influences of the musical experience and listening variables on task performance. In training, this analysis revealed significant positive correlations between accuracy and music consumption (β = 0.11,
<italic>p</italic>
< 0.0001), emotional use of music (β = 0.12,
<italic>p</italic>
< 0.0001), music engagement (β = 0.08,
<italic>p</italic>
< 0.01), pleasurable music arousal ratings (β = 0.07,
<italic>p</italic>
= 0.01), and neutral music pleasantness ratings (β = 0.15,
<italic>p</italic>
< 0.0001). Training accuracy generally decreased when background use of music (β = −0.26,
<italic>p</italic>
< 0.0001), music distractibility (β = −0.22,
<italic>p</italic>
< 0.0001), active listening (β = −0.15,
<italic>p</italic>
< 0.0001), passive listening (β = −0.06,
<italic>p</italic>
< 0.05), and neutral music arousal ratings (β = −0.08,
<italic>p</italic>
< 0.01) increased (Figure
<xref ref-type="fig" rid="F9">9A</xref>
).</p>
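<p>A minimal sketch of one such multiple linear regression, entering a handful of the HIMAB variables as simultaneous regressors of training accuracy, is shown below; the data and column names are placeholders, and standardized betas like those reported here would require z-scoring the columns first.</p>
<preformat>
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "training_accuracy":     [0.72, 0.65, 0.81, 0.59, 0.76, 0.70, 0.78, 0.63],
    "music_consumption":     [0.4, 0.2, 0.7, 0.1, 0.5, 0.3, 0.6, 0.2],
    "emotional_use":         [0.6, 0.5, 0.8, 0.3, 0.7, 0.4, 0.7, 0.5],
    "background_use":        [0.3, 0.6, 0.2, 0.8, 0.4, 0.7, 0.3, 0.6],
    "music_distractibility": [0.2, 0.5, 0.1, 0.7, 0.3, 0.6, 0.2, 0.5],
})

fit = smf.ols("training_accuracy ~ music_consumption + emotional_use"
              " + background_use + music_distractibility", data=df).fit()
print(fit.params)
print(fit.pvalues)
</preformat>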
<fig id="F9" position="float">
<label>Figure 9</label>
<caption>
<p>
<bold>Multiple regression correlations on training and test accuracy and reaction times.</bold>
Multiple linear regressions revealed many individual factors significantly correlated to probabilistic selection task performance.
<bold>(A)</bold>
Training accuracy.
<bold>(B)</bold>
Training reaction times.
<bold>(C)</bold>
Test accuracy.
<bold>(D)</bold>
Test reaction times. Numerical values show the slopes of the regressions, and colors represent the directions and significance levels of the effects.</p>
</caption>
<graphic xlink:href="fpsyg-04-00541-g0009"></graphic>
</fig>
<p>Training reaction times were commonly slower for subjects with higher scores of music importance (β = 0.20,
<italic>p</italic>
< 0.0001), background use of music (β = 0.24,
<italic>p</italic>
< 0.0001), cognitive use of music (β = 0.11,
<italic>p</italic>
= 0.0001), music distractibility (β = 0.17,
<italic>p</italic>
< 0.0001), and neutral music arousal ratings (β = 0.14,
<italic>p</italic>
< 0.0001). Subjects with more playing years (β = −0.17,
<italic>p</italic>
< 0.0001), greater music consumption (β = −0.12,
<italic>p</italic>
< 0.0001), active listening (β = −0.22,
<italic>p</italic>
< 0.0001), pleasurable music familiarity ratings (β = −0.10,
<italic>p</italic>
< 0.05), and neutral music pleasantness ratings (β = −0.21,
<italic>p</italic>
< 0.0001) tended to respond faster during training (Figure
<xref ref-type="fig" rid="F9">9B</xref>
).</p>
<p>Test accuracy was positively correlated with neutral music pleasantness ratings (β = 0.21,
<italic>p</italic>
< 0.05) (Figure
<xref ref-type="fig" rid="F9">9C</xref>
). Slower test reaction times corresponded to higher background use of music scores (β = 0.31,
<italic>p</italic>
= 0.05) and lower pleasurable music familiarity ratings (β = −0.33,
<italic>p</italic>
< 0.05), such that subjects who were more likely to use background music responded slower while subjects who found the pleasurable music more familiar responded faster (Figure
<xref ref-type="fig" rid="F9">9D</xref>
).</p>
</sec>
<sec>
<title>Helsinki inventory of music and affective behaviors musicianship comparisons</title>
<p>To see how individual listening behaviors varied across different musical backgrounds, we compared the standardized scores (from 0 to 1) of musicians, amateur musicians, and non-musicians on each of the HIMAB variables (except for those directly related to musical experience) with independent samples
<italic>t</italic>
-tests. For cognitive use of music, non-musicians (mean = 0.40 ± 0.16) scored significantly lower than both musicians [mean = 0.55 ± 0.16;
<italic>t</italic>
<sub>(49)</sub>
= 3.40,
<italic>p</italic>
< 0.005] and amateur musicians [mean = 0.53 ± 0.16;
<italic>t</italic>
<sub>(48)</sub>
= 2.84,
<italic>p</italic>
< 0.01], indicating less cognitive motivation for listening to music (N-M < AM, M). In terms of music engagement, non-musicians (mean = 0.47 ± 0.05) again scored significantly lower than musicians [mean = 0.61 ± 0.05;
<italic>t</italic>
<sub>(49)</sub>
= 2.06,
<italic>p</italic>
< 0.05] and amateur musicians [mean = 0.73 ± 0.23;
<italic>t</italic>
<sub>(48)</sub>
= 3.77,
<italic>p</italic>
< 0.0005; N-M < AM, M], consistent with previous findings (Kantor-Martynuska and Fajkowska, in preparation). Finally, musicians (mean = 0.97 ± 0.06) considered music significantly more important in their daily lives than non-musicians did [mean = 0.83 ± 0.18;
<italic>t</italic>
<sub>(49)</sub>
= 3.45,
<italic>p</italic>
< 0.005; M > N-M]. No other group differences were significant (all
<italic>p</italic>
s > 0.06).</p>
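<p>These comparisons reduce to rescaling each HIMAB variable to the 0-1 range (assumed here to be a min-max rescaling over the scale's bounds) and running independent-samples t-tests between musicianship groups; a sketch with placeholder scores is:</p>
<preformat>
import numpy as np
from scipy import stats

def standardize_01(raw, lo, hi):
    """Rescale raw scale scores to the 0-1 range given the scale bounds."""
    return (np.asarray(raw, dtype=float) - lo) / (hi - lo)

# Hypothetical cognitive-use-of-music scores on a 1-7 scale
musicians     = standardize_01([5, 4, 5, 4, 5], lo=1, hi=7)
non_musicians = standardize_01([3, 4, 3, 3, 4], lo=1, hi=7)

t_stat, p_val = stats.ttest_ind(musicians, non_musicians)
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")
</preformat>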
</sec>
</sec>
<sec sec-type="discussion" id="s4">
<title>Discussion</title>
<p>Musical pleasure is closely linked to the dopaminergic reward system (Blood et al.,
<xref ref-type="bibr" rid="B8">1999</xref>
; Blood and Zatorre,
<xref ref-type="bibr" rid="B7">2001</xref>
; Menon and Levitin,
<xref ref-type="bibr" rid="B44">2005</xref>
; Salimpoor et al.,
<xref ref-type="bibr" rid="B59">2011</xref>
,
<xref ref-type="bibr" rid="B61">2013</xref>
), but the practical implications of this relationship have not yet been explored. Nonetheless, the rewarding aspects of music are likely due at least in part to musical reward prediction errors (Meyer,
<xref ref-type="bibr" rid="B45">1956</xref>
; Sloboda,
<xref ref-type="bibr" rid="B66">1991</xref>
; Huron,
<xref ref-type="bibr" rid="B34">2006</xref>
; Vuust and Kringelbach,
<xref ref-type="bibr" rid="B72">2010</xref>
; Salimpoor et al.,
<xref ref-type="bibr" rid="B61">2013</xref>
), and these could have considerable influences on cognitive performance during music listening. We investigated the capacity of subjective musical pleasure to influence reinforcement learning via non-pharmacological dopamine elicitation. Seventy-three subjects of varied musical backgrounds chose pleasurable and neutral musical excerpts from an experimenter-chosen valence-, energy-, and tension-controlled database and reported their musical experiences and listening patterns in the HIMAB (Table
<xref ref-type="table" rid="T1">1</xref>
). In the PS task, they then learned to distinguish between frequently and infrequently rewarded stimuli in three image pairs, and ultimately generalized these relative reward contingencies to recombined pairs of the same stimuli during a test phase (Figure
<xref ref-type="fig" rid="F2">2</xref>
). Pseudo-random group assignments determined whether subjects heard pleasurable music or neutral music during the training and test phases of the PS task (Table
<xref ref-type="table" rid="T1">1</xref>
); these group assignments were termed “PN,” “NP,” “PP,” and “NN.” We found that musical pleasure affected task performance in various ways that were consistent with enhanced dopamine transmission, and that these influences depended on the musical backgrounds and listening patterns of the subjects.</p>
<sec>
<title>Learning stimulus-outcome relationships</title>
<p>Subjects began the PS task on equal footing even if they had different musical backgrounds or listened to different music. Since accuracy and working memory recruitment as measured by win-stay/lose-switch behavior did not differ across musical experiences or musical conditions, we can assume that the different subject groups were equally naïve to the paradigm and that the effects we observed were due to learning and generalizing during the experiment and not to
<italic>a priori</italic>
group differences. Notably, while there was already an effect of musical experiences on music-mediated reaction times in the beginning of training, the direction of this effect reversed from training to testing. As such, this cannot be interpreted as an
<italic>a priori</italic>
bias, but as an immediate influence of the experimental manipulation. Throughout the training phase, the 73 subjects learned to choose the more frequently rewarded images, and their accuracy and reaction times improved from the beginning to the end of training and from the hardest to the easiest training stimulus pair.</p>
<p>Overall, pleasurable music generally accelerated reaction times during learning. These reaction time differences cannot be attributed to arousal because arousal was treated as a covariate and because training reaction times failed to relate to the subjective arousal ratings of the pleasurable music. Moreover, responses were slower for more arousing neutral music (Figures
<xref ref-type="fig" rid="F8">8B</xref>
,
<xref ref-type="fig" rid="F9">9B</xref>
), and subjects typically considered the pleasurable music more arousing (Table
<xref ref-type="table" rid="T3">3</xref>
). Instead, previous research with reinforcement learning has shown that faster reaction times are associated with greater striatal dopamine efficacy (Caldú et al.,
<xref ref-type="bibr" rid="B11">2007</xref>
; Frank et al.,
<xref ref-type="bibr" rid="B29">2007b</xref>
; Niv et al.,
<xref ref-type="bibr" rid="B48">2007</xref>
; Frank et al.,
<xref ref-type="bibr" rid="B27">2009</xref>
), suggesting that this is a dopaminergic effect. Subjective pleasure can have considerable influences on cognitive performance such that even slight mood changes can alter reinforcement learning, and this effect is thought to rely on enhanced dopamine transmission (Carpenter et al.,
<xref ref-type="bibr" rid="B13">2013</xref>
; for a review see Ashby et al.,
<xref ref-type="bibr" rid="B1">1999</xref>
). Consistent with this, we found that higher subjective pleasantness ratings of the neutral music corresponded to faster reaction times (Figures
<xref ref-type="fig" rid="F8">8B</xref>
,
<xref ref-type="fig" rid="F9">9B</xref>
), demonstrating that even within the neutral musical condition, responses accelerated when subjects enjoyed the music more.</p>
<p>Musical pleasure also influenced training accuracy. Although this effect was not directly evident when comparing pleasurable and neutral music overall, an interaction between years of musical experience and the music heard during learning revealed that more musically experienced subjects performed more accurately with neutral music and less accurately with pleasurable music (Figure
<xref ref-type="fig" rid="F5">5</xref>
).</p>
<p>Together, these results show that musical pleasure enhanced approach behavior during the training phase of the PS task. Importantly, this effect was driven by subjects with little to no musical experience, since subjects with more musical experience instead tended to perform better when listening to neutral music during training and not pleasurable music (Figures
<xref ref-type="fig" rid="F5">5</xref>
,
<xref ref-type="fig" rid="F6">6</xref>
). These dissociable musical background effects underline the magnitude of the influence that musical pleasure had on subjects with little musical experience, who also reported considering music less important and approaching music less cognitively than other subjects did. Both of these correlations suggest that these subjects devoted less attention to the music than others, and low scores on both of these factors—as well as years spent playing music—were associated with faster training reaction times (Figures
<xref ref-type="fig" rid="F8">8B</xref>
,
<xref ref-type="fig" rid="F9">9B</xref>
). However, this does not mean that musically inexperienced subjects were unmoved by the music; on the contrary, with less analytical approaches these subjects were probably more emotionally affected (Istók et al.,
<xref ref-type="bibr" rid="B36">2009</xref>
; Müller et al.,
<xref ref-type="bibr" rid="B47">2010</xref>
). In fact, non-musicians rated their pleasurable music higher than musicians did. Thus, these subjects likely devoted fewer cognitive resources to the pleasurable music but enjoyed it more than others, allowing them to attend to learning and simultaneously benefit from the affective, possibly dopaminergic, effects of pleasurable music. Consistent with this interpretation, low cognitive use of music and music importance scores—but also high music engagement and emotional use of music scores—also corresponded to better training accuracy (Figures
<xref ref-type="fig" rid="F8">8B</xref>
,
<xref ref-type="fig" rid="F9">9B</xref>
), which likely reflects musically inexperienced subjects analyzing the music less but still engaging with it emotionally and enjoying it more. Accordingly, these subjects performed better when they enjoyed the music more (Figure
<xref ref-type="fig" rid="F5">5</xref>
).</p>
<p>Reward prediction errors offer a potential mechanism for these findings. With less musical experience and analytical listening than others (Istók et al.,
<xref ref-type="bibr" rid="B36">2009</xref>
), musically inexperienced subjects could be less able to develop reasonable top-down, explicit expectations about the music and thus more susceptible to musically elicited prediction errors (Huron,
<xref ref-type="bibr" rid="B34">2006</xref>
; Müller et al.,
<xref ref-type="bibr" rid="B47">2010</xref>
; Vuust and Kringelbach,
<xref ref-type="bibr" rid="B72">2010</xref>
). These greater reward prediction errors would amplify the perceived value of the rewarded stimuli and the music for these subjects (Montague et al.,
<xref ref-type="bibr" rid="B46">1996</xref>
; Schultz,
<xref ref-type="bibr" rid="B63">2002</xref>
), which would in turn promote approach behaviors (Frank et al.,
<xref ref-type="bibr" rid="B30">2004</xref>
; Caldú et al.,
<xref ref-type="bibr" rid="B11">2007</xref>
; Frank et al.,
<xref ref-type="bibr" rid="B29">2007b</xref>
). Although there are other possible explanations for this result, this interpretation is consistent with recent evidence linking music enjoyment to the reward system and prediction errors (Menon and Levitin,
<xref ref-type="bibr" rid="B44">2005</xref>
; Salimpoor et al.,
<xref ref-type="bibr" rid="B59">2011</xref>
,
<xref ref-type="bibr" rid="B61">2013</xref>
).</p>
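<p>For reference, the prediction-error account invoked above can be summarized by the canonical delta rule (Montague et al., 1996; Schultz, 2002), in which the value of a stimulus is nudged toward each outcome by a fraction of the reward prediction error. The sketch below is a generic, simplified illustration of that rule with assumed learning-rate and reward-probability values; it is not the computational model underlying the PS task or this study's analyses.</p>
<preformat>
# Minimal delta-rule sketch of reward prediction error (RPE) learning.
# The learning rate and the 80/20 contingency are illustrative assumptions.
import random

alpha = 0.1                          # assumed learning rate
values = {'A': 0.0, 'B': 0.0}        # learned stimulus values
reward_prob = {'A': 0.8, 'B': 0.2}   # assumed reward probabilities

for _ in range(500):
    stim = random.choice(['A', 'B'])              # simplified: random sampling
    r = 1.0 if reward_prob[stim] > random.random() else 0.0
    delta = r - values[stim]                      # reward prediction error
    values[stim] += alpha * delta                 # value moves toward the outcome

print(values)   # values converge toward the underlying reward probabilities
</preformat>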
<p>More musically experienced subjects exhibited opposite reaction time and accuracy patterns. Both musicians and amateur musicians in this study rated music as more important and used music more cognitively than non-musicians, and when considering that musicianship is associated with various advantages in high-level automatic music processing (Koelsch et al.,
<xref ref-type="bibr" rid="B38">1999</xref>
; Tervaniemi et al.,
<xref ref-type="bibr" rid="B70">2005</xref>
; Oechslin et al.,
<xref ref-type="bibr" rid="B50">2012</xref>
) as well as more analytical listening strategies (Istók et al.,
<xref ref-type="bibr" rid="B36">2009</xref>
; Müller et al.,
<xref ref-type="bibr" rid="B47">2010</xref>
), we can infer that they probably devoted more cognitive resources to the music during the PS task. Musicians in this study also gave lower pleasantness ratings for their pleasurable music than non-musicians did, demonstrating that they did not enjoy the musical stimuli (which were clips from film soundtracks) as much as other subjects did, even though they also reported engaging with music more. Interpreting these results from the perspective of more musical experience and more analytical listening, musicians can be said to have more “critical ears” than other listeners. More musically experienced subjects might therefore have been less emotionally affected by their pleasurable music, chosen as it was from a limited number of pieces predetermined by the experimenters. Nonetheless, with more cognitive listening strategies, these subjects might have been more inclined to analyze the music they preferred, even if the margin of preference was minimal. This could explain why more musically experienced subjects performed better with neutral music than with pleasurable music (Figures
<xref ref-type="fig" rid="F5">5</xref>
,
<xref ref-type="fig" rid="F6">6</xref>
), and why cognitive use of music and music importance were both more prevalent among musicians and simultaneously related to decreased training accuracy (Figures
<xref ref-type="fig" rid="F8">8A</xref>
,
<xref ref-type="fig" rid="F9">9A</xref>
).</p>
</sec>
<sec>
<title>Generalizing about probabilistic rewards</title>
<p>By the end of training, the 73 subjects included in the analysis had learned to choose the more frequently rewarded stimuli. They thus entered the test phase with sufficient task knowledge, albeit with differences demonstrating that the musical manipulation was already influencing task performance. After 54 presentations of each training pair, subjects transferred what they had learned to a test phase with no feedback. The test included all possible combinations of the six training images: the three training pairs plus 12 novel combinations.</p>
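<p>The pair count follows from simple combinatorics: six images yield C(6,2) = 15 unordered pairs, of which three are the trained pairs and twelve are novel. The short check below uses the conventional A–F stimulus labels and the standard pairings as assumptions about how the pairs were organized.</p>
<preformat>
# Enumerate all unordered pairs of the six training images.
# The A-F labels and the three trained pairings are assumed for illustration.
from itertools import combinations

images = ['A', 'B', 'C', 'D', 'E', 'F']
trained = {('A', 'B'), ('C', 'D'), ('E', 'F')}

all_pairs = list(combinations(images, 2))
novel = [pair for pair in all_pairs if pair not in trained]
print(len(all_pairs), len(novel))   # 15 total pairs, 12 novel combinations
</preformat>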
<p>Despite responding faster with neutral music in the training phase, more musically experienced subjects responded to test stimuli faster when they listened to pleasurable music. Specifically, musically experienced subjects who listened to neutral music during training (and thus responded faster in that phase) exhibited slower reaction times during the test phase if they then listened to neutral music and quicker reaction times if they then listened to pleasurable music. The less musically experienced subjects, by contrast, responded to neutral music with slower reaction times during training but then faster reaction times during testing (Figure
<xref ref-type="fig" rid="F7">7</xref>
). A trend-level effect of musical condition toward faster reactions during pleasurable music suggested that, this time, the more musically experienced subjects drove the effect, despite a sample skewed toward the less experienced.</p>
<p>As discussed above, the HIMAB results suggest more musically experienced subjects were more likely to focus on the music they enjoyed during the task. This was detrimental to their training performance during pleasurable music listening, but the same behavior could have had the opposite effect during the test. While learning about relative reward contingencies involves predictions, prediction errors, valuation, salience attribution, and working memory processes (Schultz,
<xref ref-type="bibr" rid="B63">2002</xref>
; Jocham et al.,
<xref ref-type="bibr" rid="B37">2011</xref>
; Collins and Frank,
<xref ref-type="bibr" rid="B19">2013</xref>
), performance on a test without feedback depends more on motivation and the management of previously learned values (Robinson and Berridge,
<xref ref-type="bibr" rid="B56">1993</xref>
; Jocham et al.,
<xref ref-type="bibr" rid="B37">2011</xref>
; Shiner et al.,
<xref ref-type="bibr" rid="B65">2012</xref>
). In other words, expressing reinforced behaviors is considerably less cognitive than acquiring them (Doll et al.,
<xref ref-type="bibr" rid="B23">2011</xref>
). As such, devoting cognitive resources to music would not detract from performance on the mostly non-cognitive test in the same way that it detracted from training performance. This could explain why the test music had a greater influence on musically experienced subjects than the training music, and why the beneficial effect of pleasurable music on musically inexperienced subjects during training seemed to disappear when these subjects transferred their task knowledge to the test phase. Because musically experienced subjects were less cognitively engaged in the PS task during testing, and therefore suddenly more susceptible to the musical background, this pronounced shift likely overshadowed the behaviors of the less musically experienced subjects. The contrasting results for more and less musically experienced subjects, then, could once again reflect their more and less cognitive listening strategies, respectively.</p>
<p>In many ways, the behaviors of experienced music listeners in this study resembled those of experienced musicians. Results for music playing years and weekly music listening hours paralleled each other throughout both training and testing, even though the former measured music-making and the latter only music listening. Since this experiment involved listening to but not making, reading, or writing music, subjects who regularly listened to a lot of music behaved similarly to those with extensive musical training when they performed a task with a musical background (cf. Bigand and Poulin-Charronnat,
<xref ref-type="bibr" rid="B6">2006</xref>
). This can also be seen in terms of individual music consumption, for which higher scores corresponded to better training accuracy (Figures
<xref ref-type="fig" rid="F8">8A</xref>
,
<xref ref-type="fig" rid="F9">9A</xref>
) and faster reaction times during training (Figures
<xref ref-type="fig" rid="F8">8B</xref>
,
<xref ref-type="fig" rid="F9">9B</xref>
) and testing (Figure
<xref ref-type="fig" rid="F8">8D</xref>
).</p>
<p>Although several of the present findings are consistent with enhanced approach behavior during pleasurable music listening, we observed a “NoGo” bias in our data throughout the test. In addition, subjects who never listened to pleasurable music during the PS task (the NN group) performed the worst at avoiding the most frequently punished stimulus, while those who listened to pleasurable music once (NP and PN) performed the best (Figure
<xref ref-type="fig" rid="F4">4</xref>
). Although we expected to find an approach bias due to pleasurable music, this finding is not unprecedented in healthy subjects (Jocham et al.,
<xref ref-type="bibr" rid="B37">2011</xref>
). The presence of music might have distracted subjects from the task at hand to the extent that it was actually somewhat aversive, which could account for the avoidance bias we observed. At the same time, the pleasurable music condition would have been less aversive than the neutral music condition, and this can explain the relative approach effects we found with pleasurable music compared to neutral music.</p>
<p>Alternatively, subjects distracted by music could have been less reliable than normal in their choices of A over B during learning, which would result in more punishments than usual after choosing B and thus lead to a stronger avoidance bias. As such, our data could exhibit an overall avoidance bias due to music in general, with the differential group effects merely reflecting the overall group effects that only reached significance in the test trials for which the avoidance-biased subjects were especially prepared. Again assuming that subjects attended more to music they preferred, this would imply that they were more distracted by pleasurable music than by neutral music, and thus more likely to receive negative feedback when learning with pleasurable music. Listening to pleasurable music for the second time in a row, however, would not have been equally engaging. Even so, the NP group was best at avoiding B, which could simply reflect the aforementioned advantage enjoyed by musically experienced subjects in this group. Indeed, this effect represents a subset of the test phase, during which the behavioral shift in musically experienced subjects (especially in the NP group) had a profound influence.</p>
</sec>
<sec>
<title>Individual factors</title>
<p>We assessed the relationships between performance in the PS paradigm and musical experiences, different uses of music (Chamorro-Premuzic and Furnham,
<xref ref-type="bibr" rid="B14">2007</xref>
), music consumption (Chamorro-Premuzic et al.,
<xref ref-type="bibr" rid="B15">2012</xref>
), music-directed attention (Kantor-Martynuska and Fajkowska, in preparation), music importance, active and passive listening frequencies, and subjective ratings of the pleasurable and neutral music in the study.</p>
<p>These factors greatly shaped learning, with musical experience, uses of music, music consumption, music-directed attention, music importance, listening frequencies, and subjective ratings from the listening test all influencing training accuracy or reaction times (Figures
<xref ref-type="fig" rid="F8">8A,B</xref>
,
<xref ref-type="fig" rid="F9">9A,B</xref>
) and background use of music, music consumption, music importance, active listening, and subjective ratings of the experimental music affecting test performance (Figures
<xref ref-type="fig" rid="F8">8C,D</xref>
<xref ref-type="fig" rid="F9">9C,D</xref>
). As discussed above, higher music importance and cognitive use of music scores were associated with both worse training accuracy and slower training reaction times; more emotional music listening corresponded to better training accuracy; more years of playing music were associated with faster training reaction times; and greater music engagement corresponded to both better training accuracy and faster training responses. Notably, subjects who devoted more time to active music listening without any distractions tended to respond less accurately but more rapidly during training, suggesting that they probably tried to listen actively to the music during the PS task, devoting fewer cognitive resources to the task itself and perhaps responding impulsively due to lack of focus. Subjects who spent more time passively listening to music as one of many tasks also tended to be less accurate during learning, and those who used music for background purposes more were both less accurate and slower to respond. One possible interpretation of these counter-intuitive findings is that individuals who normally listen to music while doing non-cognitive tasks might have been distracted by the cognitive nature of the PS task's training phase. Another is that subjects who are more likely to listen to music passively and in the background, as opposed to actively and in the foreground, are also less likely to become invested in music and respond to it emotionally, using it instead simply to fill what would otherwise be silence. Neither interpretation conflicts with the finding that subjects with greater music consumption scores were both more accurate and quicker to respond, most likely due to their greater exposure to music. Finally, music distractibility, which measures the extent to which music diverts attention from a primary focus such as the PS task, also corresponded to decreased training accuracy and slower training reaction times.</p>
<p>Subjective ratings of the music played during the PS task also correlated to task performance. As discussed earlier, higher subjective pleasantness ratings of the neutral music correlated to faster reaction times during training, probably because the neutral music condition for these subjects was not as aversive as it was for others. Likewise, these ratings also increased with greater training accuracy. Higher arousal ratings of the neutral music were correlated with decreased accuracy and slower reaction times, whereas higher arousal ratings of the pleasurable music correlated to greater accuracy. Once again, the higher ratings within each musical condition could have exaggerated the aversive and pleasurable effects of that condition, respectively. Greater familiarity ratings of the pleasurable music quickened training reaction times, possibly because this more predictable music was less distracting, and/or because more familiar pleasurable music is likely to be more pleasurable than unfamiliar pleasurable music as evidenced by behavioral and fMRI findings (Pereira et al.,
<xref ref-type="bibr" rid="B53">2011</xref>
). Altogether, these subjective differences altered task performance according to the affective experience of the listener, but as discussed above that experience depended largely on musical background.</p>
<p>During the test, several HIMAB results mirrored those of training: greater music consumption correlated to faster responses, more active listening again corresponded to faster reaction times, greater music importance was associated with slower reaction times, and higher background use of music scores related to slower responses and worse accuracy, just as in the training phase. This last result, regarding background use of music, is consistent with the aforementioned interpretation that subjects who listen to background music are less likely to respond to it emotionally. However, this finding is not consistent with the interpretation that the cognitive nature of the PS task distracted these subjects, since the test phase of this task is considerably less cognitive than the training phase (Doll et al.,
<xref ref-type="bibr" rid="B23">2011</xref>
). Consequently, it seems that subjects more likely to use music for background purposes were less likely to become emotionally invested in the music during the PS task.</p>
<p>Also as in training, higher familiarity ratings of the pleasurable music corresponded to faster test responses. Likewise, higher familiarity ratings of the neutral music were associated with slower test responses as well as greater accuracy, possibly because more familiar neutral music was more enjoyable to some but more tedious to others. Finally, higher pleasantness ratings of the neutral music were correlated with greater test accuracy, consistent with the same result during training.</p>
<p>Overall, these results imply that learning strategies differ greatly across individuals (cf. Fabry and Giesler,
<xref ref-type="bibr" rid="B26">2012</xref>
) and depend on several factors, whereas generalizing about previously learned information depends more on the context of the test than on background factors. This finding is consistent with our observation of different listening strategies in more and less musically experienced subjects, strategies that had a greater influence during the more cognitive learning phase of the task at hand. Put another way, each individual's approach to learning depended largely on his or her musical background, but expressing previously learned knowledge was a less cognitive task that thus allowed for more of an immediate emotional effect in even the more analytical music listeners.</p>
</sec>
<sec>
<title>Limitations and conclusion</title>
<p>The present study represents a first step in bringing together musical pleasure and reinforcement learning to explore their common roots in the cognitive neuroscience of reward. Using a reinforcement learning task to study the rewarding aspects of music listening, we found that pleasurable music was able to influence task performance in the expected way: that is, in a way consistent with the actions of a dopamine agonist. Examining inter-individual differences, we revealed complex effects of musical pleasure on reinforcement learning that depended on the musical backgrounds and listening behaviors of the subjects.</p>
<p>Listening to pleasurable music activates areas of the brain implicated in emotion and reward (Blood et al.,
<xref ref-type="bibr" rid="B8">1999</xref>
; Blood and Zatorre,
<xref ref-type="bibr" rid="B7">2001</xref>
; Menon and Levitin,
<xref ref-type="bibr" rid="B44">2005</xref>
; Salimpoor et al.,
<xref ref-type="bibr" rid="B59">2011</xref>
,
<xref ref-type="bibr" rid="B61">2013</xref>
). Our findings suggest that musical pleasure acted on the dopaminergic reward system because it influenced performance in a task dependent on dopamine transmission (Frank et al.,
<xref ref-type="bibr" rid="B30">2004</xref>
), but we did not directly measure dopamine transmission in any way. Other neurotransmitters and systems were likely involved, and the mesocorticolimbic effects of musical pleasure may in fact be insufficient to influence reinforcement learning. Instead, music could alter task performance via attentional, working memory, or sensorimotor influences. In addition to direct measurements of dopamine transmission, neuroimaging the temporal and spatial dynamics of musical pleasure and reinforcement learning would elucidate their interactions as well as the various contributions of brain areas involved in attention, memory, and motion. Future research would also benefit from direct measures of attention, working memory, and sensorimotor integration during music listening and/or task performance, as well as music listening information that reflects the subjects' real-time behaviors during the experimental task. Objective physiological measures of pleasure and arousal, shown to correlate to one another (Salimpoor et al.,
<xref ref-type="bibr" rid="B60">2009</xref>
), would also be an improvement on the subjective ratings we used in the present study. Finally, the learning effects we observed could reflect group differences in intelligence or learning aptitude, which could be controlled for in subsequent investigations.</p>
<p>Selecting stimuli for neuroaesthetics research is necessarily problematic. When experimenters choose, participants are prone to disagree with their judgments and enjoyment will vary across individuals. When participants choose, stimuli are likely to differ tremendously and skew the sample (e.g., toward faster, happier music instead of a balanced range). Salimpoor and colleagues (2009, 2011) introduced a method wherein each participant's favorite music served as another participant's neutral music, and we adapted this technique by pre-selecting 14 instrumental pieces of similar valence, energy, and tension from which each subject could choose. We ensured that subjects enjoyed their pleasurable music but not their neutral music, and we used familiarity, pleasantness, and arousal ratings as covariates in our analyses. However, even this combination of experimenter-selected and participant-selected methods trades some of the range of enjoyment our subjects felt for a more controlled stimulus set.</p>
<p>Most experiments that investigate the differences between learning and testing ignore training response times (Jocham et al.,
<xref ref-type="bibr" rid="B37">2011</xref>
; Shiner et al.,
<xref ref-type="bibr" rid="B65">2012</xref>
). In the present study, musical pleasure differentially influenced reaction times according to musical experience during learning, which would not have been apparent by analyzing accuracy alone. Moreover, these effects were shaped by several factors that varied across individuals. Subjectivity seems to have profound effects on dopamine transmission, implying that the discordant conclusions of previous reinforcement learning research could arise from complex interactions between innate predispositions, neuroplastic changes, and experimental manipulations underlying dopamine efficacy. Though there is a growing body of research on the ability of dopaminergic agonists/antagonists to influence instrumental learning in neuropsychiatric disorders (e.g., Frank et al.,
<xref ref-type="bibr" rid="B30">2004</xref>
; Chase et al.,
<xref ref-type="bibr" rid="B17">2010</xref>
; Worbe et al.,
<xref ref-type="bibr" rid="B73">2011</xref>
; Grob et al.,
<xref ref-type="bibr" rid="B32">2012</xref>
), no study that we know of has investigated the relationships between individual factors and dopaminergic manipulations in healthy subjects. We found subjective modulations of music's effects, signifying that our enjoyment of music depends a great deal on the amount of music we listen to, how we listen to it, how we engage with it, our musical experience, and even our reasons for approaching it. Since this is the first study we know of to apply individual musical background and listening factors to background music listening during an experimental task, our interpretations of these results represent only a subset of the possible explanations for these effects. Future research should further investigate the influences of different individual experiences and listening behaviors on musical and non-musical tasks, as well as their mechanisms. Nonetheless, these factors all seem to influence the rewarding impact of music, signaling the need for a more subjectivist approach to musical pleasure and reward.</p>
<p>Music is a powerful and universal phenomenon, intensely important and rewarding to many people (Sloboda and Juslin,
<xref ref-type="bibr" rid="B67">2001</xref>
; Dubé and Le Bel,
<xref ref-type="bibr" rid="B24">2003</xref>
). Musical pleasure thus offers an ecological and dynamic approach to investigating reward, while reward itself offers many practical applications for musical pleasure. Bringing these topics together, then, has important implications in education, affect, and therapy. In particular, Parkinson's disease represents a promising avenue for future research since the relationship between this disease and reinforcement learning is very well understood (Frank et al.,
<xref ref-type="bibr" rid="B30">2004</xref>
,
<xref ref-type="bibr" rid="B29">2007b</xref>
; Shiner et al.,
<xref ref-type="bibr" rid="B65">2012</xref>
) and music therapy has already been shown to improve motor and cognitive deficits in Parkinson's disease (Pacchetti et al.,
<xref ref-type="bibr" rid="B52">2000</xref>
). Ultimately, whether or not our findings reflect altered dopamine transmission remains to be seen, but the ability of musical pleasure to influence reward-based decision making speaks to its affective and effective potency.</p>
</sec>
<sec>
<title>Conflict of interest statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</sec>
</body>
<back>
<ack>
<p>We wish to thank Tommi Makkonen, Teppo Särkämö, Mari Tervaniemi, and Umberto Trivella for their helpful contributions in various stages of this project. The study was financially supported by the Academy of Finland (Center of Excellence program and post-doctoral researcher project number 133673), the University of Helsinki (project number 490083), and the Gyllenberg Foundation.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="B1">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ashby</surname>
<given-names>F. G.</given-names>
</name>
<name>
<surname>Isen</surname>
<given-names>A. M.</given-names>
</name>
<name>
<surname>Turken</surname>
<given-names>A. U.</given-names>
</name>
</person-group>
(
<year>1999</year>
).
<article-title>A neuropsychological theory of positive affect and its influence on cognition</article-title>
.
<source>Psychol. Rev</source>
.
<volume>106</volume>
,
<fpage>529</fpage>
<lpage>550</lpage>
<pub-id pub-id-type="doi">10.1037/0033-295X.106.3.529</pub-id>
<pub-id pub-id-type="pmid">10467897</pub-id>
</mixed-citation>
</ref>
<ref id="B2">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Badre</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Frank</surname>
<given-names>M. J.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Mechanisms of hierarchical reinforcement learning in corticostriatal circuits 2: evidence from fMRI</article-title>
.
<source>Cereb. Cortex</source>
<volume>22</volume>
,
<fpage>527</fpage>
<lpage>536</lpage>
<pub-id pub-id-type="doi">10.1093/cercor/bhr117</pub-id>
<pub-id pub-id-type="pmid">21693491</pub-id>
</mixed-citation>
</ref>
<ref id="B3">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bengtsson</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Nagy</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Skare</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Forsman</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Forssberg</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Ullén</surname>
<given-names>F.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Extensive piano practicing has regionally specific effects on white matter development</article-title>
.
<source>Nat. Neurosci</source>
.
<volume>8</volume>
,
<fpage>1148</fpage>
<lpage>1150</lpage>
<pub-id pub-id-type="doi">10.1038/nn1516</pub-id>
<pub-id pub-id-type="pmid">16116456</pub-id>
</mixed-citation>
</ref>
<ref id="B4">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Berridge</surname>
<given-names>K. C.</given-names>
</name>
<name>
<surname>Kringelbach</surname>
<given-names>M. L.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<article-title>Neuroscience of affect: brain mechanisms of pleasure and displeasure</article-title>
.
<source>Curr. Opin. Neurobiol</source>
.
<volume>23</volume>
,
<fpage>294</fpage>
<lpage>303</lpage>
<pub-id pub-id-type="doi">10.1016/j.conb.2013.01.017</pub-id>
<pub-id pub-id-type="pmid">23375169</pub-id>
</mixed-citation>
</ref>
<ref id="B5">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Bharucha</surname>
<given-names>J. J.</given-names>
</name>
</person-group>
(
<year>1994</year>
).
<article-title>Tonality and expectation</article-title>
, in
<source>Musical Perceptions</source>
, ed
<person-group person-group-type="editor">
<name>
<surname>Aiello</surname>
<given-names>R.</given-names>
</name>
</person-group>
(
<publisher-loc>New York, NY</publisher-loc>
:
<publisher-name>Oxford University Press</publisher-name>
),
<fpage>213</fpage>
<lpage>239</lpage>
</mixed-citation>
</ref>
<ref id="B6">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bigand</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Poulin-Charronnat</surname>
<given-names>B.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Are we “experienced listeners?” A review of the musical capacities that do not depend on formal musical training</article-title>
.
<source>Cognition</source>
<volume>100</volume>
,
<fpage>100</fpage>
<lpage>130</lpage>
<pub-id pub-id-type="doi">10.1016/j.cognition.2005.11.007</pub-id>
<pub-id pub-id-type="pmid">16412412</pub-id>
</mixed-citation>
</ref>
<ref id="B7">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Blood</surname>
<given-names>A. J.</given-names>
</name>
<name>
<surname>Zatorre</surname>
<given-names>R. J.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>Intensely pleasurable responses to music correlate with activity in brain regions implicated in reward and emotion</article-title>
.
<source>Proc. Natl. Acad. Sci. U.S.A</source>
.
<volume>98</volume>
,
<fpage>11818</fpage>
<lpage>11823</lpage>
<pub-id pub-id-type="doi">10.1073/pnas.191355898</pub-id>
<pub-id pub-id-type="pmid">11573015</pub-id>
</mixed-citation>
</ref>
<ref id="B8">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Blood</surname>
<given-names>A. J.</given-names>
</name>
<name>
<surname>Zatorre</surname>
<given-names>R. J.</given-names>
</name>
<name>
<surname>Bermudez</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Evans</surname>
<given-names>A. C.</given-names>
</name>
</person-group>
(
<year>1999</year>
).
<article-title>Emotional responses to pleasant and unpleasant music correlate with activity in paralimbic brain regions</article-title>
.
<source>Nat. Neurosci</source>
.
<volume>2</volume>
,
<fpage>382</fpage>
<lpage>387</lpage>
<pub-id pub-id-type="doi">10.1038/7299</pub-id>
<pub-id pub-id-type="pmid">10204547</pub-id>
</mixed-citation>
</ref>
<ref id="B9">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Brattico</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Pearce</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>The neuroaesthetics of music</article-title>
.
<source>Psychol. Aesthet. Crea. Arts</source>
<volume>7</volume>
,
<fpage>48</fpage>
<lpage>61</lpage>
<pub-id pub-id-type="doi">10.1037/a0031624</pub-id>
<pub-id pub-id-type="pmid">18207423</pub-id>
</mixed-citation>
</ref>
<ref id="B10">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Brown</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Gao</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Tisdelle</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Eickhoff</surname>
<given-names>S. B.</given-names>
</name>
<name>
<surname>Liotti</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Naturalizing aesthetics: brain areas for aesthetic appraisal across sensory modalities</article-title>
.
<source>Neuroimage</source>
<volume>58</volume>
,
<fpage>250</fpage>
<lpage>258</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuroimage.2011.06.012</pub-id>
<pub-id pub-id-type="pmid">21699987</pub-id>
</mixed-citation>
</ref>
<ref id="B11">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Caldú</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Vendrell</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Bartrés-Faz</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Clemente</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Bargalló</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Jurado</surname>
<given-names>M. A.</given-names>
</name>
<etal></etal>
</person-group>
(
<year>2007</year>
).
<article-title>Impact of the COMT Val108/158 Met and DAT genotypes on prefrontal function in healthy subjects</article-title>
.
<source>Neuroimage</source>
<volume>4</volume>
,
<fpage>1437</fpage>
<lpage>1444</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuroimage.2007.06.021</pub-id>
<pub-id pub-id-type="pmid">17689985</pub-id>
</mixed-citation>
</ref>
<ref id="B12">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Caplin</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Dean</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Glimcher</surname>
<given-names>P. W.</given-names>
</name>
<name>
<surname>Rutledge</surname>
<given-names>R. B.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Measuring beliefs and rewards: a neuroeconomic approach</article-title>
.
<source>Q. J. Econ</source>
.
<volume>125</volume>
,
<fpage>923</fpage>
<lpage>960</lpage>
<pub-id pub-id-type="doi">10.1162/qjec.2010.125.3.923</pub-id>
</mixed-citation>
</ref>
<ref id="B13">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Carpenter</surname>
<given-names>S. M.</given-names>
</name>
<name>
<surname>Peters</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Västfjäll</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Isen</surname>
<given-names>A. M.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<article-title>Positive feelings facilitate working memory and complex decision making among older adults</article-title>
.
<source>Cogn. Emot</source>
.
<volume>27</volume>
,
<fpage>184</fpage>
<lpage>192</lpage>
<pub-id pub-id-type="doi">10.1080/02699931.2012.698251</pub-id>
<pub-id pub-id-type="pmid">22764739</pub-id>
</mixed-citation>
</ref>
<ref id="B14">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chamorro-Premuzic</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Furnham</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Personality and music: can traits explain how people use music in everyday life?</article-title>
<source>Br. J. Psychol</source>
.
<volume>98</volume>
,
<fpage>175</fpage>
<lpage>185</lpage>
<pub-id pub-id-type="doi">10.1348/000712606X111177</pub-id>
<pub-id pub-id-type="pmid">17456267</pub-id>
</mixed-citation>
</ref>
<ref id="B15">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chamorro-Premuzic</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Swami</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Cermakova</surname>
<given-names>B.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Individual differences in music consumption are predicted by uses of music and age rather than emotional intelligence, neuroticism, extraversion or openness</article-title>
.
<source>Psychol. Music</source>
<volume>40</volume>
,
<fpage>285</fpage>
<lpage>300</lpage>
<pub-id pub-id-type="doi">10.1177/0305735610381591</pub-id>
</mixed-citation>
</ref>
<ref id="B16">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chapin</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Jantzen</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Kelso</surname>
<given-names>J. A.</given-names>
</name>
<name>
<surname>Steinberg</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Large</surname>
<given-names>E.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Dynamic emotional and neural responses to music depend on performance expression and listener experience</article-title>
.
<source>PLoS ONE</source>
<volume>5</volume>
:
<fpage>e13812</fpage>
<pub-id pub-id-type="doi">10.1371/journal.pone.0013812</pub-id>
<pub-id pub-id-type="pmid">21179549</pub-id>
</mixed-citation>
</ref>
<ref id="B17">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chase</surname>
<given-names>H. W.</given-names>
</name>
<name>
<surname>Frank</surname>
<given-names>M. J.</given-names>
</name>
<name>
<surname>Michael</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Bullmore</surname>
<given-names>E. T.</given-names>
</name>
<name>
<surname>Sahakian</surname>
<given-names>B. J.</given-names>
</name>
<name>
<surname>Robbins</surname>
<given-names>T. W.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Approach and avoidance learning in patients with major depression and healthy controls: relation to anhedonia</article-title>
.
<source>Psychol. Med</source>
.
<volume>40</volume>
,
<fpage>433</fpage>
<lpage>440</lpage>
<pub-id pub-id-type="doi">10.1017/S0033291709990468</pub-id>
<pub-id pub-id-type="pmid">19607754</pub-id>
</mixed-citation>
</ref>
<ref id="B18">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chen</surname>
<given-names>J. L.</given-names>
</name>
<name>
<surname>Penhune</surname>
<given-names>V. B.</given-names>
</name>
<name>
<surname>Zatorre</surname>
<given-names>R. J.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Moving on time: brain network for auditory-motor synchronization is modulated by rhythm complexity and musical training</article-title>
.
<source>J. Cogn. Neurosci</source>
.
<volume>20</volume>
,
<fpage>226</fpage>
<lpage>239</lpage>
<pub-id pub-id-type="doi">10.1162/jocn.2008.20018</pub-id>
<pub-id pub-id-type="pmid">18275331</pub-id>
</mixed-citation>
</ref>
<ref id="B19">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Collins</surname>
<given-names>A. G. E.</given-names>
</name>
<name>
<surname>Frank</surname>
<given-names>M. J.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<article-title>Cognitive control over learning: creating, clustering and generalizing task-set structure</article-title>
.
<source>Psychol. Rev</source>
.
<volume>120</volume>
,
<fpage>190</fpage>
<lpage>229</lpage>
<pub-id pub-id-type="doi">10.1037/a0030852</pub-id>
<pub-id pub-id-type="pmid">23356780</pub-id>
</mixed-citation>
</ref>
<ref id="B20">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>D'Ardenne</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>McClure</surname>
<given-names>S. M.</given-names>
</name>
<name>
<surname>Nystrom</surname>
<given-names>L. E.</given-names>
</name>
<name>
<surname>Cohen</surname>
<given-names>J. D.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>BOLD responses reflecting dopaminergic signals in the human ventral tegmental area</article-title>
.
<source>Science</source>
<volume>319</volume>
,
<fpage>1264</fpage>
<lpage>1267</lpage>
<pub-id pub-id-type="doi">10.1126/science.1150605</pub-id>
<pub-id pub-id-type="pmid">18309087</pub-id>
</mixed-citation>
</ref>
<ref id="B21">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Daw</surname>
<given-names>N. D.</given-names>
</name>
<name>
<surname>Doya</surname>
<given-names>K.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>The computational neurobiology of learning and reward</article-title>
.
<source>Curr. Opin. Neurobiol</source>
.
<volume>16</volume>
,
<fpage>199</fpage>
<lpage>204</lpage>
<pub-id pub-id-type="doi">10.1016/j.conb.2006.03.006</pub-id>
<pub-id pub-id-type="pmid">16563737</pub-id>
</mixed-citation>
</ref>
<ref id="B22">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dellacherie</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Roy</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Hugueville</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Peretz</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Samson</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>The effect of musical experience on emotional self-reports and psychophysiological responses to dissonance</article-title>
.
<source>Psychophysiology</source>
<volume>48</volume>
,
<fpage>337</fpage>
<lpage>349</lpage>
<pub-id pub-id-type="doi">10.1111/j.1469-8986.2010.01075.x</pub-id>
<pub-id pub-id-type="pmid">20701708</pub-id>
</mixed-citation>
</ref>
<ref id="B23">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Doll</surname>
<given-names>B. B.</given-names>
</name>
<name>
<surname>Hutchison</surname>
<given-names>K. E.</given-names>
</name>
<name>
<surname>Frank</surname>
<given-names>M. J.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Dopaminergic genes predict individual differences in susceptibility to confirmation bias</article-title>
.
<source>J. Neurosci</source>
.
<volume>31</volume>
,
<fpage>6188</fpage>
<lpage>6198</lpage>
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.6486-10.2011</pub-id>
<pub-id pub-id-type="pmid">21508242</pub-id>
</mixed-citation>
</ref>
<ref id="B24">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dubé</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Le Bel</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>The categorical structure of pleasure</article-title>
.
<source>Cogn. Emot</source>
.
<volume>17</volume>
,
<fpage>263</fpage>
<lpage>297</lpage>
<pub-id pub-id-type="doi">10.1080/02699930302295</pub-id>
</mixed-citation>
</ref>
<ref id="B25">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Eerola</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Vuoskoski</surname>
<given-names>J. K.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>A comparison of the discrete and dimensional models of emotion in music</article-title>
.
<source>Psychol. Music</source>
<volume>39</volume>
,
<fpage>18</fpage>
<lpage>49</lpage>
<pub-id pub-id-type="doi">10.1177/0305735610362821</pub-id>
</mixed-citation>
</ref>
<ref id="B26">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fabry</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Giesler</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Novice medical students: individual patterns in the use of learning strategies and how they change during the first academic year</article-title>
.
<source>GMS Z. Med. Ausbild</source>
.
<volume>29</volume>
,
<fpage>Doc56</fpage>
<pub-id pub-id-type="doi">10.3205/zma000826</pub-id>
<pub-id pub-id-type="pmid">22916082</pub-id>
</mixed-citation>
</ref>
<ref id="B27">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Frank</surname>
<given-names>M. J.</given-names>
</name>
<name>
<surname>Doll</surname>
<given-names>B. B.</given-names>
</name>
<name>
<surname>Oas-Terpstra</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Moreno</surname>
<given-names>F.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Prefrontal and striatal dopaminergic genes predict individual differences in exploration and exploitation</article-title>
.
<source>Nat. Neurosci</source>
.
<volume>12</volume>
,
<fpage>1062</fpage>
<lpage>1068</lpage>
<pub-id pub-id-type="doi">10.1038/nn.2342</pub-id>
<pub-id pub-id-type="pmid">19620978</pub-id>
</mixed-citation>
</ref>
<ref id="B28">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Frank</surname>
<given-names>M. J.</given-names>
</name>
<name>
<surname>Moustafa</surname>
<given-names>A. A.</given-names>
</name>
<name>
<surname>Haughey</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Curran</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Hutchison</surname>
<given-names>K.</given-names>
</name>
</person-group>
(
<year>2007a</year>
).
<article-title>Genetic triple dissociation reveals multiple roles for dopamine in reinforcement learning</article-title>
.
<source>Proc. Natl. Acad. Sci. U.S.A</source>
.
<volume>104</volume>
,
<fpage>16311</fpage>
<lpage>16316</lpage>
<pub-id pub-id-type="doi">10.1073/pnas.0706111104</pub-id>
<pub-id pub-id-type="pmid">17913879</pub-id>
</mixed-citation>
</ref>
<ref id="B29">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Frank</surname>
<given-names>M. J.</given-names>
</name>
<name>
<surname>Samanta</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Moustafa</surname>
<given-names>A. A.</given-names>
</name>
<name>
<surname>Sherman</surname>
<given-names>S. J.</given-names>
</name>
</person-group>
(
<year>2007b</year>
).
<article-title>Hold your horses: impulsivity, deep brain stimulation and medication in Parkinsonism</article-title>
.
<source>Science</source>
<volume>318</volume>
,
<fpage>1309</fpage>
<lpage>1312</lpage>
<pub-id pub-id-type="doi">10.1126/science.1146157</pub-id>
<pub-id pub-id-type="pmid">17962524</pub-id>
</mixed-citation>
</ref>
<ref id="B30">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Frank</surname>
<given-names>M. J.</given-names>
</name>
<name>
<surname>Seeberger</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>O'Reilly</surname>
<given-names>R. C.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>By carrot or by stick: cognitive reinforcement learning in Parkinsonism</article-title>
.
<source>Science</source>
<volume>306</volume>
,
<fpage>1940</fpage>
<lpage>1943</lpage>
<pub-id pub-id-type="doi">10.1126/science.1102941</pub-id>
<pub-id pub-id-type="pmid">15528409</pub-id>
</mixed-citation>
</ref>
<ref id="B31">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gaser</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Schlaug</surname>
<given-names>G.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Brain structures differ between musicians and non-musicians</article-title>
.
<source>J. Neurosci</source>
.
<volume>23</volume>
,
<fpage>9240</fpage>
<lpage>9245</lpage>
<pub-id pub-id-type="pmid">14534258</pub-id>
</mixed-citation>
</ref>
<ref id="B32">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Grob</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Pizzagalli</surname>
<given-names>D. A.</given-names>
</name>
<name>
<surname>Dutra</surname>
<given-names>S. J.</given-names>
</name>
<name>
<surname>Stern</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Mörgeli</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Milos</surname>
<given-names>G.</given-names>
</name>
<etal></etal>
</person-group>
(
<year>2012</year>
).
<article-title>Dopamine-related deficit in reward learning after catecholamine depletion in unmedicated, remitted subjects with bulimia nervosa</article-title>
.
<source>Neuropsychopharmacology</source>
<volume>37</volume>
,
<fpage>1945</fpage>
<lpage>1952</lpage>
<pub-id pub-id-type="doi">10.1038/npp.2012.41</pub-id>
<pub-id pub-id-type="pmid">22491353</pub-id>
</mixed-citation>
</ref>
<ref id="B33">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hollerman</surname>
<given-names>J. R.</given-names>
</name>
<name>
<surname>Schultz</surname>
<given-names>W.</given-names>
</name>
</person-group>
(
<year>1998</year>
).
<article-title>Dopamine neurons report an error in the temporal prediction of reward during learning</article-title>
.
<source>Nat. Neurosci</source>
.
<volume>1</volume>
,
<fpage>304</fpage>
<lpage>309</lpage>
<pub-id pub-id-type="doi">10.1038/1124</pub-id>
<pub-id pub-id-type="pmid">10195164</pub-id>
</mixed-citation>
</ref>
<ref id="B34">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Huron</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<source>Sweet Anticipation: Music and the Psychology of Expectation</source>
.
<publisher-loc>Cambridge, MA</publisher-loc>
:
<publisher-name>MIT Press</publisher-name>
</mixed-citation>
</ref>
<ref id="B35">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hyde</surname>
<given-names>K. L.</given-names>
</name>
<name>
<surname>Lerch</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Norton</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Forgeard</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Winner</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Evans</surname>
<given-names>A. C.</given-names>
</name>
<etal></etal>
</person-group>
(
<year>2009</year>
).
<article-title>Musical training shapes structural brain development</article-title>
.
<source>J. Neurosci</source>
.
<volume>29</volume>
,
<fpage>3019</fpage>
<lpage>3025</lpage>
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.5118-08.2009</pub-id>
<pub-id pub-id-type="pmid">19279238</pub-id>
</mixed-citation>
</ref>
<ref id="B36">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Istók</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Brattico</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Jacobsen</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Krohn</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Müller</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Tervaniemi</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Aesthetic responses to music: a questionnaire study</article-title>
.
<source>Music. Sci</source>
.
<volume>13</volume>
,
<fpage>183</fpage>
<lpage>206</lpage>
<pub-id pub-id-type="doi">10.1177/102986490901300201</pub-id>
</mixed-citation>
</ref>
<ref id="B37">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jocham</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Klein</surname>
<given-names>T. A.</given-names>
</name>
<name>
<surname>Ullsperger</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Dopamine-mediated reinforcement learning signals in the striatum and ventromedial prefrontal cortex underlie value-based choices</article-title>
.
<source>J. Neurosci</source>
.
<volume>31</volume>
,
<fpage>1606</fpage>
<lpage>1613</lpage>
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.3904-10.2011</pub-id>
<pub-id pub-id-type="pmid">21289169</pub-id>
</mixed-citation>
</ref>
<ref id="B38">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Koelsch</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Schröger</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Tervaniemi</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>1999</year>
).
<article-title>Superior pre-attentive auditory processing in musicians</article-title>
.
<source>Neuroreport</source>
<volume>10</volume>
,
<fpage>1309</fpage>
<lpage>1313</lpage>
<pub-id pub-id-type="doi">10.1097/00001756-199904260-00029</pub-id>
<pub-id pub-id-type="pmid">10363945</pub-id>
</mixed-citation>
</ref>
<ref id="B39">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Koeneke</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Lutz</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Wustenberg</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Jancke</surname>
<given-names>L.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Long-term training affects cerebellar processing in skilled keyboard players</article-title>
.
<source>Neuroreport</source>
<volume>15</volume>
,
<fpage>1279</fpage>
<lpage>1282</lpage>
<pub-id pub-id-type="doi">10.1097/01.wnr.0000127463.10147.e7</pub-id>
<pub-id pub-id-type="pmid">15167549</pub-id>
</mixed-citation>
</ref>
<ref id="B40">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kühn</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Gallinat</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>The neural correlates of subjective pleasantness</article-title>
.
<source>Neuroimage</source>
<volume>61</volume>
,
<fpage>289</fpage>
<lpage>294</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuroimage.2012.02.065</pub-id>
<pub-id pub-id-type="pmid">22406357</pub-id>
</mixed-citation>
</ref>
<ref id="B41">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>LeDoux</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Rethinking the emotional brain</article-title>
.
<source>Neuron</source>
<volume>73</volume>
,
<fpage>653</fpage>
<lpage>676</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuron.2012.02.004</pub-id>
<pub-id pub-id-type="pmid">22365542</pub-id>
</mixed-citation>
</ref>
<ref id="B42">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Levitin</surname>
<given-names>D. J.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>What does it mean to be musical?</article-title>
<source>Neuron</source>
<volume>73</volume>
,
<fpage>633</fpage>
<lpage>637</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuron.2012.01.017</pub-id>
<pub-id pub-id-type="pmid">22365540</pub-id>
</mixed-citation>
</ref>
<ref id="B43">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lindquist</surname>
<given-names>K. A.</given-names>
</name>
<name>
<surname>Wager</surname>
<given-names>T. D.</given-names>
</name>
<name>
<surname>Kober</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Bliss-Moreau</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Barrett</surname>
<given-names>L. F.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>The brain basis of emotion: a meta-analytic review</article-title>
.
<source>Behav. Brain Sci</source>
.
<volume>35</volume>
,
<fpage>121</fpage>
<lpage>202</lpage>
<pub-id pub-id-type="doi">10.1017/S0140525X11000446</pub-id>
<pub-id pub-id-type="pmid">22617651</pub-id>
</mixed-citation>
</ref>
<ref id="B44">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Menon</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Levitin</surname>
<given-names>D. J.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>The rewards of music listening: response and physiological connectivity of the mesolimbic system</article-title>
.
<source>Neuroimage</source>
<volume>28</volume>
,
<fpage>175</fpage>
<lpage>184</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuroimage.2005.05.053</pub-id>
<pub-id pub-id-type="pmid">16023376</pub-id>
</mixed-citation>
</ref>
<ref id="B45">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Meyer</surname>
<given-names>L. B.</given-names>
</name>
</person-group>
(
<year>1956</year>
).
<source>Emotions and Meaning in Music</source>
.
<publisher-loc>Chicago, IL</publisher-loc>
:
<publisher-name>Chicago University Press</publisher-name>
</mixed-citation>
</ref>
<ref id="B46">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Montague</surname>
<given-names>P. R.</given-names>
</name>
<name>
<surname>Dayan</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Sejnowski</surname>
<given-names>T. J.</given-names>
</name>
</person-group>
(
<year>1996</year>
).
<article-title>A framework for mesencephalic dopamine systems based on predictive Hebbian learning</article-title>
.
<source>J. Neurosci</source>
.
<volume>16</volume>
,
<fpage>1936</fpage>
<lpage>1947</lpage>
<pub-id pub-id-type="pmid">8774460</pub-id>
</mixed-citation>
</ref>
<ref id="B47">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Müller</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Höfel</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Brattico</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Jacobsen</surname>
<given-names>T.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Aesthetic judgments of music in experts and laypersons—an ERP study</article-title>
.
<source>Int. J. Psychophysiol</source>
.
<volume>76</volume>
,
<fpage>40</fpage>
<lpage>51</lpage>
<pub-id pub-id-type="doi">10.1016/j.ijpsycho.2010.02.002</pub-id>
<pub-id pub-id-type="pmid">20153786</pub-id>
</mixed-citation>
</ref>
<ref id="B48">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Niv</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Daw</surname>
<given-names>N. D.</given-names>
</name>
<name>
<surname>Joel</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Dayan</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Tonic dopamine: opportunity costs and the control of response vigor</article-title>
.
<source>Psychopharmacology</source>
<volume>191</volume>
,
<fpage>507</fpage>
<lpage>520</lpage>
<pub-id pub-id-type="doi">10.1007/s00213-006-0502-4</pub-id>
<pub-id pub-id-type="pmid">17031711</pub-id>
</mixed-citation>
</ref>
<ref id="B49">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>O'Doherty</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Dayan</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Schultz</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Deichmann</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Friston</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Dolan</surname>
<given-names>R. J.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Dissociable roles of ventral and dorsal striatum in instrumental conditioning</article-title>
.
<source>Science</source>
<volume>304</volume>
,
<fpage>452</fpage>
<lpage>454</lpage>
<pub-id pub-id-type="doi">10.1126/science.1094285</pub-id>
<pub-id pub-id-type="pmid">15087550</pub-id>
</mixed-citation>
</ref>
<ref id="B50">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Oechslin</surname>
<given-names>M. S.</given-names>
</name>
<name>
<surname>Van De Ville</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Lazeyras</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Hauert</surname>
<given-names>C. A.</given-names>
</name>
<name>
<surname>James</surname>
<given-names>C. E.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Degree of musical expertise modulates higher order brain functioning</article-title>
.
<source>Cereb. Cortex</source>
<volume>23</volume>
,
<fpage>2213</fpage>
<lpage>2224</lpage>
<pub-id pub-id-type="doi">10.1093/cercor/bhs206</pub-id>
<pub-id pub-id-type="pmid">22832388</pub-id>
</mixed-citation>
</ref>
<ref id="B51">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Overton</surname>
<given-names>D. A.</given-names>
</name>
</person-group>
(
<year>1966</year>
).
<article-title>State-dependent learning produced by depressant and atropine-like drugs</article-title>
.
<source>Psychopharmacologia</source>
<volume>10</volume>
,
<fpage>6</fpage>
<lpage>31</lpage>
<pub-id pub-id-type="doi">10.1007/BF00401896</pub-id>
<pub-id pub-id-type="pmid">5982984</pub-id>
</mixed-citation>
</ref>
<ref id="B52">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pacchetti</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Mancini</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Aglieri</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Fundaró</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Martignoni</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Nappi</surname>
<given-names>G.</given-names>
</name>
</person-group>
(
<year>2000</year>
).
<article-title>Active music therapy in Parkinson's disease: an integrative method for motor and emotional rehabilitation</article-title>
.
<source>Psychosom. Med</source>
.
<volume>62</volume>
,
<fpage>386</fpage>
<lpage>393</lpage>
<pub-id pub-id-type="pmid">10845352</pub-id>
</mixed-citation>
</ref>
<ref id="B53">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pereira</surname>
<given-names>C. S.</given-names>
</name>
<name>
<surname>Teixeira</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Figueiredo</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Xavier</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Castro</surname>
<given-names>S. L.</given-names>
</name>
<name>
<surname>Brattico</surname>
<given-names>E.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Music and emotions in the brain: familiarity matters</article-title>
.
<source>PLoS ONE</source>
<volume>6</volume>
:
<fpage>e27241</fpage>
<pub-id pub-id-type="doi">10.1371/journal.pone.0027241</pub-id>
<pub-id pub-id-type="pmid">22110619</pub-id>
</mixed-citation>
</ref>
<ref id="B54">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rauscher</surname>
<given-names>F. H.</given-names>
</name>
<name>
<surname>Shaw</surname>
<given-names>G. L.</given-names>
</name>
<name>
<surname>Ky</surname>
<given-names>K. N.</given-names>
</name>
</person-group>
(
<year>1993</year>
).
<article-title>Music and spatial task performance</article-title>
.
<source>Nature</source>
<volume>365</volume>
,
<fpage>611</fpage>
<pub-id pub-id-type="doi">10.1038/365611a0</pub-id>
<pub-id pub-id-type="pmid">8413624</pub-id>
</mixed-citation>
</ref>
<ref id="B55">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rentfrow</surname>
<given-names>P. J.</given-names>
</name>
<name>
<surname>Gosling</surname>
<given-names>S. D.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>The do re mi's of everyday life: the structure and personality correlates of music preferences</article-title>
.
<source>J. Pers. Soc. Psychol</source>
.
<volume>84</volume>
,
<fpage>1236</fpage>
<lpage>1256</lpage>
<pub-id pub-id-type="doi">10.1037/0022-3514.84.6.1236</pub-id>
<pub-id pub-id-type="pmid">12793587</pub-id>
</mixed-citation>
</ref>
<ref id="B56">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Robinson</surname>
<given-names>T. E.</given-names>
</name>
<name>
<surname>Berridge</surname>
<given-names>K. C.</given-names>
</name>
</person-group>
(
<year>1993</year>
).
<article-title>The neural basis of drug craving: an incentive-sensitization theory of addiction</article-title>
.
<source>Brain Res. Rev</source>
.
<volume>18</volume>
,
<fpage>247</fpage>
<lpage>291</lpage>
<pub-id pub-id-type="doi">10.1016/0165-0173(93)90013-P</pub-id>
<pub-id pub-id-type="pmid">8401595</pub-id>
</mixed-citation>
</ref>
<ref id="B57">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rodrigues</surname>
<given-names>A. C.</given-names>
</name>
<name>
<surname>Loureiro</surname>
<given-names>M. A.</given-names>
</name>
<name>
<surname>Caramelli</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Musical training, neuroplasticity and cognition</article-title>
.
<source>Dement. Neuropsychol</source>
.
<volume>4</volume>
,
<fpage>277</fpage>
<lpage>286</lpage>
</mixed-citation>
</ref>
<ref id="B58">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Saarikallio</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Erkkilä</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>The role of music in adolescents' mood regulation</article-title>
.
<source>Psychol. Music</source>
<volume>35</volume>
,
<fpage>88</fpage>
<lpage>109</lpage>
<pub-id pub-id-type="doi">10.1177/0305735607068889</pub-id>
<pub-id pub-id-type="pmid">20671333</pub-id>
</mixed-citation>
</ref>
<ref id="B59">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Salimpoor</surname>
<given-names>V. N.</given-names>
</name>
<name>
<surname>Benovoy</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Larcher</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Dagher</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Zatorre</surname>
<given-names>R. J.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Anatomically distinct dopamine release during anticipation and experience of peak emotion to music</article-title>
.
<source>Nat. Neurosci</source>
.
<volume>14</volume>
,
<fpage>257</fpage>
<lpage>262</lpage>
<pub-id pub-id-type="doi">10.1038/nn.2726</pub-id>
<pub-id pub-id-type="pmid">21217764</pub-id>
</mixed-citation>
</ref>
<ref id="B60">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Salimpoor</surname>
<given-names>V. N.</given-names>
</name>
<name>
<surname>Benovoy</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Longo</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Cooperstock</surname>
<given-names>J. R.</given-names>
</name>
<name>
<surname>Zatorre</surname>
<given-names>R. J.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>The rewarding aspects of music listening are related to degree of emotional arousal</article-title>
.
<source>PLoS ONE</source>
<volume>4</volume>
:
<fpage>e7487</fpage>
<pub-id pub-id-type="doi">10.1371/journal.pone.0007487</pub-id>
<pub-id pub-id-type="pmid">19834599</pub-id>
</mixed-citation>
</ref>
<ref id="B61">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Salimpoor</surname>
<given-names>V. N.</given-names>
</name>
<name>
<surname>van den Bosch</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Kovacevic</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>McIntosh</surname>
<given-names>A. R.</given-names>
</name>
<name>
<surname>Dagher</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Zatorre</surname>
<given-names>R. J.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<article-title>Interactions between nucleus accumbens and auditory cortices predict music reward value</article-title>
.
<source>Science</source>
<volume>340</volume>
,
<fpage>216</fpage>
<lpage>219</lpage>
<pub-id pub-id-type="doi">10.1126/science.1231059</pub-id>
<pub-id pub-id-type="pmid">23580531</pub-id>
</mixed-citation>
</ref>
<ref id="B62">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Schott</surname>
<given-names>B. H.</given-names>
</name>
<name>
<surname>Minuzzi</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Krebs</surname>
<given-names>R. M.</given-names>
</name>
<name>
<surname>Elmenhorst</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Lang</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Winz</surname>
<given-names>O. H.</given-names>
</name>
<etal></etal>
</person-group>
(
<year>2008</year>
).
<article-title>Mesolimbic functional magnetic resonance imaging activations during reward anticipation correlate with reward-related ventral striatal dopamine release</article-title>
.
<source>J. Neurosci</source>
.
<volume>28</volume>
,
<fpage>14311</fpage>
<lpage>14319</lpage>
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.2058-08.2008</pub-id>
<pub-id pub-id-type="pmid">19109512</pub-id>
</mixed-citation>
</ref>
<ref id="B63">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Schultz</surname>
<given-names>W.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>Getting formal with dopamine and reward</article-title>
.
<source>Neuron</source>
<volume>36</volume>
,
<fpage>241</fpage>
<lpage>263</lpage>
<pub-id pub-id-type="doi">10.1016/S0896-6273(02)00967-4</pub-id>
<pub-id pub-id-type="pmid">12383780</pub-id>
</mixed-citation>
</ref>
<ref id="B64">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Seger</surname>
<given-names>C. A.</given-names>
</name>
<name>
<surname>Spiering</surname>
<given-names>B. J.</given-names>
</name>
<name>
<surname>Sares</surname>
<given-names>A. G.</given-names>
</name>
<name>
<surname>Quraini</surname>
<given-names>S. I.</given-names>
</name>
<name>
<surname>Alpeter</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>David</surname>
<given-names>J.</given-names>
</name>
<etal></etal>
</person-group>
(
<year>2013</year>
).
<article-title>Corticostriatal contributions to musical expectancy perception</article-title>
.
<source>J. Cogn. Neurosci</source>
.
<volume>25</volume>
,
<fpage>1062</fpage>
<lpage>1077</lpage>
<pub-id pub-id-type="doi">10.1162/jocn_a_00371</pub-id>
<pub-id pub-id-type="pmid">23410032</pub-id>
</mixed-citation>
</ref>
<ref id="B65">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Shiner</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Seymour</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Wunderlich</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Hill</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Bhatia</surname>
<given-names>K. P.</given-names>
</name>
<name>
<surname>Dayan</surname>
<given-names>P.</given-names>
</name>
<etal></etal>
</person-group>
(
<year>2012</year>
).
<article-title>Dopamine and performance in a reinforcement learning task: evidence from Parkinson's disease</article-title>
.
<source>Brain</source>
<volume>135</volume>
,
<fpage>1871</fpage>
<lpage>1883</lpage>
<pub-id pub-id-type="doi">10.1093/brain/aws083</pub-id>
<pub-id pub-id-type="pmid">22508958</pub-id>
</mixed-citation>
</ref>
<ref id="B66">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sloboda</surname>
<given-names>J. A.</given-names>
</name>
</person-group>
(
<year>1991</year>
).
<article-title>Music structure and emotional response: some empirical findings</article-title>
.
<source>Psychol. Music</source>
<volume>19</volume>
,
<fpage>110</fpage>
<lpage>120</lpage>
<pub-id pub-id-type="doi">10.1177/0305735691192002</pub-id>
</mixed-citation>
</ref>
<ref id="B67">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Sloboda</surname>
<given-names>J. A.</given-names>
</name>
<name>
<surname>Juslin</surname>
<given-names>P. N.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>Psychological perspectives on music and emotion</article-title>
, in
<source>Music and Emotion: Theory and Research</source>
, eds
<person-group person-group-type="editor">
<name>
<surname>Juslin</surname>
<given-names>P. N.</given-names>
</name>
<name>
<surname>Sloboda</surname>
<given-names>J. A.</given-names>
</name>
</person-group>
(
<publisher-loc>Oxford</publisher-loc>
:
<publisher-name>Oxford University Press</publisher-name>
),
<fpage>415</fpage>
<lpage>430</lpage>
</mixed-citation>
</ref>
<ref id="B68">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Sloboda</surname>
<given-names>J. A.</given-names>
</name>
<name>
<surname>O'Neill</surname>
<given-names>S. A.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>Emotions in everyday listening to music</article-title>
, in
<source>Music and Emotion: Theory and Research</source>
, eds
<person-group person-group-type="editor">
<name>
<surname>Juslin</surname>
<given-names>P. N.</given-names>
</name>
<name>
<surname>Sloboda</surname>
<given-names>J. A.</given-names>
</name>
</person-group>
(
<publisher-loc>Oxford</publisher-loc>
:
<publisher-name>Oxford University Press</publisher-name>
),
<fpage>415</fpage>
<lpage>430</lpage>
</mixed-citation>
</ref>
<ref id="B69">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tervaniemi</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Castaneda</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Knoll</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Uther</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Sound processing in amateur musicians and nonmusicians: event-related potential and behavioral indices</article-title>
.
<source>Neuroreport</source>
<volume>17</volume>
,
<fpage>1225</fpage>
<lpage>1228</lpage>
<pub-id pub-id-type="doi">10.1097/01.wnr.0000230510.55596.8b</pub-id>
<pub-id pub-id-type="pmid">16837859</pub-id>
</mixed-citation>
</ref>
<ref id="B70">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tervaniemi</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Just</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Koelsch</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Widmann</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Schröger</surname>
<given-names>E.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Pitch discrimination accuracy in musicians vs. nonmusicians: an event-related potential and behavioral study</article-title>
.
<source>Exp. Brain Res</source>
.
<volume>161</volume>
,
<fpage>1</fpage>
<lpage>10</lpage>
<pub-id pub-id-type="doi">10.1007/s00221-004-2044-5</pub-id>
<pub-id pub-id-type="pmid">15551089</pub-id>
</mixed-citation>
</ref>
<ref id="B71">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Van de Cruys</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Wagemans</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Putting reward in art: a tentative prediction error account of visual art</article-title>
.
<source>Iperception</source>
<volume>2</volume>
,
<fpage>1035</fpage>
<lpage>1062</lpage>
<pub-id pub-id-type="pmid">23145260</pub-id>
</mixed-citation>
</ref>
<ref id="B72">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Vuust</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Kringelbach</surname>
<given-names>M. L.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>The pleasure of music</article-title>
, in
<source>Pleasures of the Brain</source>
, eds
<person-group person-group-type="editor">
<name>
<surname>Kringelbach</surname>
<given-names>M. L.</given-names>
</name>
<name>
<surname>Berridge</surname>
<given-names>K. C.</given-names>
</name>
</person-group>
(
<publisher-loc>Oxford</publisher-loc>
:
<publisher-name>Oxford University Press</publisher-name>
),
<fpage>255</fpage>
<lpage>269</lpage>
</mixed-citation>
</ref>
<ref id="B73">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Worbe</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Palminteri</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Hartmann</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Vidailhet</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Lehéricy</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Pessiglione</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Reinforcement learning and Gilles de la Tourette syndrome: dissociation of clinical phenotypes and pharmacological treatments</article-title>
.
<source>Arch. Gen. Psychiatry</source>
<volume>68</volume>
,
<fpage>1257</fpage>
<lpage>1266</lpage>
<pub-id pub-id-type="doi">10.1001/archgenpsychiatry.2011.137</pub-id>
<pub-id pub-id-type="pmid">22147843</pub-id>
</mixed-citation>
</ref>
<ref id="B74">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zatorre</surname>
<given-names>R. J.</given-names>
</name>
<name>
<surname>Evans</surname>
<given-names>A. C.</given-names>
</name>
<name>
<surname>Meyer</surname>
<given-names>E.</given-names>
</name>
</person-group>
(
<year>1994</year>
).
<article-title>Neural mechanisms underlying melodic perception and memory for pitch</article-title>
.
<source>J. Neurosci</source>
.
<volume>14</volume>
,
<fpage>1908</fpage>
<lpage>1919</lpage>
<pub-id pub-id-type="pmid">8158246</pub-id>
</mixed-citation>
</ref>
</ref-list>
<app-group>
<app id="A1">
<title>Appendix</title>
<table-wrap id="TA1" position="anchor">
<label>Table A1</label>
<caption>
<p>
<bold>Musical stimuli</bold>
.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" rowspan="1" colspan="1">
<bold>Excerpt</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>Soundtrack</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>Track</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>Title</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>Artist</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>Length</bold>
</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">1</td>
<td align="left" rowspan="1" colspan="1">Pride and prejudice</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">4:49 (Total)</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">9</td>
<td align="left" rowspan="1" colspan="1">Liz on top of the world</td>
<td align="left" rowspan="1" colspan="1">Jean-Yves Thibaudet</td>
<td align="left" rowspan="1" colspan="1">1:24</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">13</td>
<td align="left" rowspan="1" colspan="1">Darcy's letter</td>
<td align="left" rowspan="1" colspan="1">Various</td>
<td align="left" rowspan="1" colspan="1">3:26</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">2</td>
<td align="left" rowspan="1" colspan="1">Pride and prejudice</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">7:20 (Total)</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">3</td>
<td align="left" rowspan="1" colspan="1">The living sculptures of pemberley</td>
<td align="left" rowspan="1" colspan="1">Various</td>
<td align="left" rowspan="1" colspan="1">3:04</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">15</td>
<td align="left" rowspan="1" colspan="1">Your hands are cold</td>
<td align="left" rowspan="1" colspan="1">Jean-Yves Thibaudet</td>
<td align="left" rowspan="1" colspan="1">4:21</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">3</td>
<td align="left" rowspan="1" colspan="1">Juha</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">3:08 (Total)</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">16</td>
<td align="left" rowspan="1" colspan="1">Kevät</td>
<td align="left" rowspan="1" colspan="1">Anssi Tikanmäki</td>
<td align="left" rowspan="1" colspan="1">1:01</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">19</td>
<td align="left" rowspan="1" colspan="1">Rakkauden Uhrit</td>
<td align="left" rowspan="1" colspan="1">Anssi Tikanmäki</td>
<td align="left" rowspan="1" colspan="1">2:41</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">4</td>
<td align="left" rowspan="1" colspan="1">Lethal weapon 3</td>
<td align="left" rowspan="1" colspan="1">10</td>
<td align="left" rowspan="1" colspan="1">Lorna—a quiet evening by the fire</td>
<td align="left" rowspan="1" colspan="1">Michael Kamen/Eric Clapton</td>
<td align="left" rowspan="1" colspan="1">3:33 (Total)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">5</td>
<td align="left" rowspan="1" colspan="1">Shakespeare in love</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">7:54 (Total)</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">2</td>
<td align="left" rowspan="1" colspan="1">Viola's Audition</td>
<td align="left" rowspan="1" colspan="1">Nick ingman/Gavyn wright</td>
<td align="left" rowspan="1" colspan="1">3:22</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">3</td>
<td align="left" rowspan="1" colspan="1">A plague on both your houses</td>
<td align="left" rowspan="1" colspan="1">Nick ingman/Gavyn wright</td>
<td align="left" rowspan="1" colspan="1">1:40</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">6</td>
<td align="left" rowspan="1" colspan="1">In Viola's room</td>
<td align="left" rowspan="1" colspan="1">Nick ingman/Gavyn Wright</td>
<td align="left" rowspan="1" colspan="1">2:54</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">6</td>
<td align="left" rowspan="1" colspan="1">Dances with wolves</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">4:15 (Total)</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">2</td>
<td align="left" rowspan="1" colspan="1">The John Dunbar theme</td>
<td align="left" rowspan="1" colspan="1">John Barry</td>
<td align="left" rowspan="1" colspan="1">2:17</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">4</td>
<td align="left" rowspan="1" colspan="1">Ride to fort hays</td>
<td align="left" rowspan="1" colspan="1">John Barry</td>
<td align="left" rowspan="1" colspan="1">2:01</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">7</td>
<td align="left" rowspan="1" colspan="1">Big fish</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">5:39 (Total)</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">8</td>
<td align="left" rowspan="1" colspan="1">Pictures</td>
<td align="left" rowspan="1" colspan="1">Danny Elfman</td>
<td align="left" rowspan="1" colspan="1">0:45</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">11</td>
<td align="left" rowspan="1" colspan="1">Underwater</td>
<td align="left" rowspan="1" colspan="1">Danny Elfman</td>
<td align="left" rowspan="1" colspan="1">1:53</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">18</td>
<td align="left" rowspan="1" colspan="1">In the Tub</td>
<td align="left" rowspan="1" colspan="1">Danny Elfman</td>
<td align="left" rowspan="1" colspan="1">1:18</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">22</td>
<td align="left" rowspan="1" colspan="1">Jenny's Theme</td>
<td align="left" rowspan="1" colspan="1">Danny Elfman</td>
<td align="left" rowspan="1" colspan="1">1:45</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">8</td>
<td align="left" rowspan="1" colspan="1">Shine</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">3:49 (Total)</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">18</td>
<td align="left" rowspan="1" colspan="1">As if there was no tomorrow</td>
<td align="left" rowspan="1" colspan="1">David Helfgott</td>
<td align="left" rowspan="1" colspan="1">1:46</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">28</td>
<td align="left" rowspan="1" colspan="1">Goodnight daddy</td>
<td align="left" rowspan="1" colspan="1">David Helfgott</td>
<td align="left" rowspan="1" colspan="1">2:05</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">9</td>
<td align="left" rowspan="1" colspan="1">Pride and prejudice</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">4:31 (Total)</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">1</td>
<td align="left" rowspan="1" colspan="1">Dawn</td>
<td align="left" rowspan="1" colspan="1">Various</td>
<td align="left" rowspan="1" colspan="1">2:40</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">12</td>
<td align="left" rowspan="1" colspan="1">The secret life of daydreams</td>
<td align="left" rowspan="1" colspan="1">Jean-Yves Thibaudet</td>
<td align="left" rowspan="1" colspan="1">1:56</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">10</td>
<td align="left" rowspan="1" colspan="1">Portrait of a lady</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">7:04 (Total)</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">3</td>
<td align="left" rowspan="1" colspan="1">Flowers of Firenze</td>
<td align="left" rowspan="1" colspan="1">Wojciech Kilar</td>
<td align="left" rowspan="1" colspan="1">4:02</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">4</td>
<td align="left" rowspan="1" colspan="1">Twilight Cellos</td>
<td align="left" rowspan="1" colspan="1">Wojciech Kilar</td>
<td align="left" rowspan="1" colspan="1">3:07</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">11</td>
<td align="left" rowspan="1" colspan="1">Oliver twist</td>
<td align="left" rowspan="1" colspan="1">2</td>
<td align="left" rowspan="1" colspan="1">The road to the workhouse</td>
<td align="left" rowspan="1" colspan="1">Rachel Portman</td>
<td align="left" rowspan="1" colspan="1">3:03 (Total)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">12</td>
<td align="left" rowspan="1" colspan="1">The last samurai</td>
<td align="left" rowspan="1" colspan="1">1</td>
<td align="left" rowspan="1" colspan="1">A way of life</td>
<td align="left" rowspan="1" colspan="1">Hans Zimmer</td>
<td align="left" rowspan="1" colspan="1">8:04 (Total)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">13</td>
<td align="left" rowspan="1" colspan="1">Dances with wolves</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">5:56 (Total)</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">8</td>
<td align="left" rowspan="1" colspan="1">Kicking bird's gift</td>
<td align="left" rowspan="1" colspan="1">John Barry</td>
<td align="left" rowspan="1" colspan="1">2:11</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">12</td>
<td align="left" rowspan="1" colspan="1">The love theme</td>
<td align="left" rowspan="1" colspan="1">John Barry</td>
<td align="left" rowspan="1" colspan="1">3:46</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">14</td>
<td align="left" rowspan="1" colspan="1">Band of brothers</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">6:22 (Total)</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">12</td>
<td align="left" rowspan="1" colspan="1">Headscarf</td>
<td align="left" rowspan="1" colspan="1">Michael Kamen</td>
<td align="left" rowspan="1" colspan="1">4:12</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">15</td>
<td align="left" rowspan="1" colspan="1">Preparing for patrol</td>
<td align="left" rowspan="1" colspan="1">Michael Kamen</td>
<td align="left" rowspan="1" colspan="1">2:13</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="TA2" position="anchor">
<label>Table A2</label>
<caption>
<p>
<bold>Helsinki inventory of music and affective behaviors</bold>
.</p>
</caption>
<table frame="hsides" rules="groups">
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>A. MUSICAL TRAINING</bold>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">1a. Have you learned to play an instrument or been in a choir? (Yes or No)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">If you answered “No,” please continue to the “Listening to music” section.</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">1b. How many years have you taken instrumental or singing lessons?</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">1c. How old were you when you started learning an instrument (including voice)?</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">1d. If you learned to play an instrument (including voice) and then stopped, how old were you when you stopped?</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">2a. Are you or were you a professional musician or music student? (Yes or No)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">If you answered “No,” please continue to question #3.</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">2b. What was your main instrument? Did you play other instruments?</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">2c. How many years have you played/did you play music professionally or as a student?</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">3. Currently, how much time per week do you practice or play one or more instruments or sing?</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">4. Which of the following describes you the best? (Write in one or more musical styles that best describes your musicianship).</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Pop/jazz/heavy/folk/classical/________ musician</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Pop/jazz/heavy/folk/classical/________ musical enthusiast/amateur</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Write your genre(s) here:</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>B. LISTENING TO MUSIC</bold>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">1. How often do you actively listen to music (without doing something else at the same time)?</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Never</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Once per year</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Once per month</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">2–3 times per month</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Once per week</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">2–3 times per week</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">More often (How many hours per week?)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">2. How often do you listen to music passively (e.g., while you are cleaning, etc.)?</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Never</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Once per year</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Once per month</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">2–3 times per month</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Once per week</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">2–3 times per week</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">More often (How many hours per week?)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">3. Please evaluate how important music is in your daily life</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Not at all important 1 ——— 2 ——– 3 ——– 4 ——— 5 ——– 6 ——— 7 Very important</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>C. MUSIC CONSUMPTION (ADAPTED FROM Chamorro-Premuzic et al.,
<xref ref-type="bibr" rid="B15">2012</xref>
)</bold>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Using the scale below, please indicate how frequently you engage in each of the following activities.</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Very rarely 1 ——— 2 ——– 3 ——– 4 ——— 5 ——– 6 ——— 7 Very often</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">I purchase or download music…</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">I attend musical concerts or recitals…</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>D. USES OF MUSIC (FROM Chamorro-Premuzic and Furnham,
<xref ref-type="bibr" rid="B14">2007</xref>
)</bold>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Using the scale below, please indicate the extent to which you agree or disagree with each of the following activities. Please write a number after each activity.</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Strongly disagree 1 ——— 2 ——– 3 ——– 4 ——— 5 ——– 6 ——— 7 Strongly agree</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">1. Listening to music really affects my mood.</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">2. I am not very nostalgic when I listen to old songs I used to listen to.</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">3. Whenever I want to feel happy I listen to a happy song.</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">4. When I listen to sad songs I feel very emotional.</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">5. Almost every memory I have is associated with a particular song.</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">6. I often enjoy analyzing complex musical compositions.</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">7. I seldom like a song unless I admire the technique of the musicians.</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">8. I don't enjoy listening to pop music because it's very primitive.</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">9. Rather than relaxing, when I listen to music I like to concentrate on it.</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">10. Listening to music is an intellectual experience for me.</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">11. I enjoy listening to music while I work.</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">12. Music is very distracting so whenever I study I need to have silence.</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">13. If I don't listen to music while I'm doing something, I often get bored.</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">14. I enjoy listening to music in social events.</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">15. I often feel very lonely if I don't listen to music.</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>E. MUSIC-DIRECTED ATTENTION SCALE (FROM Kantor-Martynuska and Fajkowska, in preparation)</bold>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">These questions regard listening to music at a medium volume. For each sentence, choose the answer that is more relevant to your experience. Respond quickly, according to the first decision that comes to your mind. (Agree or Disagree)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">1. When I eat out, music playing in the background is of no importance to me.</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">2. I turn off my music and go out only after the piece of music I'm listening to has finished.</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">3. When I have a difficult mathematics task to do, music disturbs me.</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">4. Background music diverts my attention from what another person is saying to me.</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">5. I don't mind if I have to stop a piece of music halfway through.</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">6. When I eat, inappropriate music disturbs me.</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">7. When I hear someone else's music playing through his/her earphones, I can detach myself from the music if I want.</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">8. When I have to write an essay, I do it with the music on.</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">9. Even when I am concentrating on something, I like to have the music on.</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">10. In a conversation, I can be distracted by music playing in the background.</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">11. When I study for an exam, music playing in another room distracts me.</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">12. When I hear music, I find it hard not to listen to it attentively.</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">13. I am more effective when I study in silence than with the music on.</td>
</tr>
</tbody>
</table>
</table-wrap>
</app>
</app-group>
</back>
</pmc>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Wicri/Sarre/explor/MusicSarreV3/Data/Pmc/Corpus
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000145 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Corpus/biblio.hfd -nk 000145 | SxmlIndent | more
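
For example, a minimal sketch (assuming the Dilib tools shown above are on your PATH and that EXPLOR_STEP is set as above; the output file name 000145.xml is only illustrative) that saves the indented record to a file and then counts its bibliographic entries with standard Unix tools:

# Export the indented XML of record 000145 to a working file (illustrative name)
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000145 | SxmlIndent > 000145.xml
# Count the <ref> entries in the exported record
grep -c "<ref id=" 000145.xml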

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Wicri/Sarre
   |area=    MusicSarreV3
   |flux=    Pmc
   |étape=   Corpus
   |type=    RBID
   |clé=     PMC:3748532
   |texte=   Pleasurable music affects reinforcement learning according to the listener
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Corpus/RBID.i   -Sk "pubmed:23970875" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Corpus/biblio.hfd   \
       | NlmPubMed2Wicri -a MusicSarreV3 
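
As a variation (a sketch only, reusing the exact pipeline above; pmid.lst is a hypothetical plain-text file listing one PubMed identifier per line), the same commands can be wrapped in a shell loop to generate wiki pages for several records of the MusicSarreV3 area:

# Run the same Dilib pipeline for each PubMed ID listed in pmid.lst (hypothetical input file)
while read pmid; do
    HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Corpus/RBID.i   -Sk "pubmed:$pmid" \
           | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Corpus/biblio.hfd   \
           | NlmPubMed2Wicri -a MusicSarreV3
done < pmid.lst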

Wicri

This area was generated with Dilib version V0.6.33.
Data generation: Sun Jul 15 18:16:09 2018. Site generation: Tue Mar 5 19:21:25 2024