Exploration server on opera

Warning: this site is under development!
Warning: this site was generated automatically from raw corpora.
The information has therefore not been validated.

Effects of Unexpected Chords and of Performer's Expression on Brain Responses and Electrodermal Activity

Internal identifier: 000506 (Ncbi/Merge); previous: 000505; next: 000507


Authors: Stefan Koelsch [United Kingdom]; Simone Kilches; Nikolaus Steinbeis; Stefanie Schelinski

Source:

RBID: PMC:2435625

Abstract

Background

There is a lack of neuroscientific studies investigating music processing with naturalistic stimuli, and brain responses to real music are thus largely unknown.

Methodology/Principal Findings

This study investigates event-related brain potentials (ERPs), skin conductance responses (SCRs) and heart rate (HR) elicited by unexpected chords of piano sonatas as they were originally arranged by composers, and as they were played by professional pianists. From the musical excerpts played by the pianists (with emotional expression), we also created versions without variations in tempo and loudness (without musical expression) to investigate effects of musical expression on ERPs and SCRs. Compared to expected chords, unexpected chords elicited an early right anterior negativity (ERAN, reflecting music-syntactic processing) and an N5 (reflecting processing of meaning information) in the ERPs, as well as clear changes in the SCRs (reflecting that unexpected chords also elicited emotional responses). The ERAN was not influenced by emotional expression, whereas N5 potentials elicited by chords in general (regardless of their chord function) differed between the expressive and the non-expressive condition.

Conclusions/Significance

These results show that the neural mechanisms of music-syntactic processing operate independently of the emotional qualities of a stimulus, justifying the use of stimuli without emotional expression to investigate the cognitive processing of musical structure. Moreover, the data indicate that musical expression affects the neural mechanisms underlying the processing of musical meaning. Our data are the first to reveal influences of musical performance on ERPs and SCRs, and to show physiological responses to unexpected chords in naturalistic music.


Url: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2435625
DOI: 10.1371/journal.pone.0002631
PubMed: 18612459
PubMed Central: 2435625


The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Effects of Unexpected Chords and of Performer's Expression on Brain Responses and Electrodermal Activity</title>
<author>
<name sortKey="Koelsch, Stefan" sort="Koelsch, Stefan" uniqKey="Koelsch S" first="Stefan" last="Koelsch">Stefan Koelsch</name>
<affiliation wicri:level="4">
<nlm:aff id="aff1">
<addr-line>Department of Psychology, University of Sussex, Brighton, United Kingdom</addr-line>
</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>Department of Psychology, University of Sussex, Brighton</wicri:regionArea>
<orgName type="university">Université du Sussex</orgName>
<placeName>
<settlement type="city">Brighton</settlement>
<settlement type="town">Falmer</settlement>
<region type="nation">Angleterre</region>
<region nuts="2" type="region">Sussex de l'Est</region>
</placeName>
</affiliation>
<affiliation>
<nlm:aff id="aff2">
<addr-line>Junior Research Group
<italic>Neurocognition of Music</italic>
, Max Planck Institute for Human Cognitive and Brain Science, Leipzig, Germany</addr-line>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Kilches, Simone" sort="Kilches, Simone" uniqKey="Kilches S" first="Simone" last="Kilches">Simone Kilches</name>
<affiliation>
<nlm:aff id="aff2">
<addr-line>Junior Research Group
<italic>Neurocognition of Music</italic>
, Max Planck Institute for Human Cognitive and Brain Science, Leipzig, Germany</addr-line>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Steinbeis, Nikolaus" sort="Steinbeis, Nikolaus" uniqKey="Steinbeis N" first="Nikolaus" last="Steinbeis">Nikolaus Steinbeis</name>
<affiliation>
<nlm:aff id="aff2">
<addr-line>Junior Research Group
<italic>Neurocognition of Music</italic>
, Max Planck Institute for Human Cognitive and Brain Science, Leipzig, Germany</addr-line>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Schelinski, Stefanie" sort="Schelinski, Stefanie" uniqKey="Schelinski S" first="Stefanie" last="Schelinski">Stefanie Schelinski</name>
<affiliation>
<nlm:aff id="aff2">
<addr-line>Junior Research Group
<italic>Neurocognition of Music</italic>
, Max Planck Institute for Human Cognitive and Brain Science, Leipzig, Germany</addr-line>
</nlm:aff>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">18612459</idno>
<idno type="pmc">2435625</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2435625</idno>
<idno type="RBID">PMC:2435625</idno>
<idno type="doi">10.1371/journal.pone.0002631</idno>
<date when="2008">2008</date>
<idno type="wicri:Area/Pmc/Corpus">000C87</idno>
<idno type="wicri:Area/Pmc/Curation">000C87</idno>
<idno type="wicri:Area/Pmc/Checkpoint">000373</idno>
<idno type="wicri:Area/Ncbi/Merge">000506</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Effects of Unexpected Chords and of Performer's Expression on Brain Responses and Electrodermal Activity</title>
<author>
<name sortKey="Koelsch, Stefan" sort="Koelsch, Stefan" uniqKey="Koelsch S" first="Stefan" last="Koelsch">Stefan Koelsch</name>
<affiliation wicri:level="4">
<nlm:aff id="aff1">
<addr-line>Department of Psychology, University of Sussex, Brighton, United Kingdom</addr-line>
</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>Department of Psychology, University of Sussex, Brighton</wicri:regionArea>
<orgName type="university">Université du Sussex</orgName>
<placeName>
<settlement type="city">Brighton</settlement>
<settlement type="town">Falmer</settlement>
<region type="nation">Angleterre</region>
<region nuts="2" type="region">Sussex de l'Est</region>
</placeName>
</affiliation>
<affiliation>
<nlm:aff id="aff2">
<addr-line>Junior Research Group
<italic>Neurocognition of Music</italic>
, Max Planck Institute for Human Cognitive and Brain Science, Leipzig, Germany</addr-line>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Kilches, Simone" sort="Kilches, Simone" uniqKey="Kilches S" first="Simone" last="Kilches">Simone Kilches</name>
<affiliation>
<nlm:aff id="aff2">
<addr-line>Junior Research Group
<italic>Neurocognition of Music</italic>
, Max Planck Institute for Human Cognitive and Brain Science, Leipzig, Germany</addr-line>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Steinbeis, Nikolaus" sort="Steinbeis, Nikolaus" uniqKey="Steinbeis N" first="Nikolaus" last="Steinbeis">Nikolaus Steinbeis</name>
<affiliation>
<nlm:aff id="aff2">
<addr-line>Junior Research Group
<italic>Neurocognition of Music</italic>
, Max Planck Institute for Human Cognitive and Brain Science, Leipzig, Germany</addr-line>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Schelinski, Stefanie" sort="Schelinski, Stefanie" uniqKey="Schelinski S" first="Stefanie" last="Schelinski">Stefanie Schelinski</name>
<affiliation>
<nlm:aff id="aff2">
<addr-line>Junior Research Group
<italic>Neurocognition of Music</italic>
, Max Planck Institute for Human Cognitive and Brain Science, Leipzig, Germany</addr-line>
</nlm:aff>
</affiliation>
</author>
</analytic>
<series>
<title level="j">PLoS ONE</title>
<idno type="e-ISSN">1932-6203</idno>
<imprint>
<date when="2008">2008</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<sec>
<title>Background</title>
<p>There is a lack of neuroscientific studies investigating music processing with naturalistic stimuli, and brain responses to real music are thus largely unknown.</p>
</sec>
<sec>
<title>Methodology/Principal Findings</title>
<p>This study investigates event-related brain potentials (ERPs), skin conductance responses (SCRs) and heart rate (HR) elicited by unexpected chords of piano sonatas as they were originally arranged by composers, and as they were played by professional pianists. From the musical excerpts played by the pianists (with emotional expression), we also created versions without variations in tempo and loudness (without musical expression) to investigate effects of musical expression on ERPs and SCRs. Compared to expected chords, unexpected chords elicited an early right anterior negativity (ERAN, reflecting music-syntactic processing) and an N5 (reflecting processing of meaning information) in the ERPs, as well as clear changes in the SCRs (reflecting that unexpected chords also elicited emotional responses). The ERAN was not influenced by emotional expression, whereas N5 potentials elicited by chords in general (regardless of their chord function) differed between the expressive and the non-expressive condition.</p>
</sec>
<sec>
<title>Conclusions/Significance</title>
<p>These results show that the neural mechanisms of music-syntactic processing operate independently of the emotional qualities of a stimulus, justifying the use of stimuli without emotional expression to investigate the cognitive processing of musical structure. Moreover, the data indicate that musical expression affects the neural mechanisms underlying the processing of musical meaning. Our data are the first to reveal influences of musical performance on ERPs and SCRs, and to show physiological responses to unexpected chords in naturalistic music.</p>
</sec>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article" xml:lang="EN">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">PLoS ONE</journal-id>
<journal-id journal-id-type="publisher-id">plos</journal-id>
<journal-id journal-id-type="pmc">plosone</journal-id>
<journal-title>PLoS ONE</journal-title>
<issn pub-type="epub">1932-6203</issn>
<publisher>
<publisher-name>Public Library of Science</publisher-name>
<publisher-loc>San Francisco, USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">18612459</article-id>
<article-id pub-id-type="pmc">2435625</article-id>
<article-id pub-id-type="publisher-id">08-PONE-RA-03256R1</article-id>
<article-id pub-id-type="doi">10.1371/journal.pone.0002631</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Research Article</subject>
</subj-group>
<subj-group subj-group-type="Discipline">
<subject>Neuroscience/Cognitive Neuroscience</subject>
<subject>Neuroscience/Sensory Systems</subject>
<subject>Physiology/Cognitive Neuroscience</subject>
<subject>Physiology/Integrative Physiology</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Effects of Unexpected Chords and of Performer's Expression on Brain Responses and Electrodermal Activity</article-title>
<alt-title alt-title-type="running-head">ERPs and SCRs to Music</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Koelsch</surname>
<given-names>Stefan</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<xref ref-type="corresp" rid="cor1">
<sup>*</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Kilches</surname>
<given-names>Simone</given-names>
</name>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Steinbeis</surname>
<given-names>Nikolaus</given-names>
</name>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Schelinski</surname>
<given-names>Stefanie</given-names>
</name>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
</contrib>
</contrib-group>
<aff id="aff1">
<label>1</label>
<addr-line>Department of Psychology, University of Sussex, Brighton, United Kingdom</addr-line>
</aff>
<aff id="aff2">
<label>2</label>
<addr-line>Junior Research Group
<italic>Neurocognition of Music</italic>
, Max Planck Institute for Human Cognitive and Brain Science, Leipzig, Germany</addr-line>
</aff>
<contrib-group>
<contrib contrib-type="editor">
<name>
<surname>He</surname>
<given-names>Sheng</given-names>
</name>
<role>Editor</role>
<xref ref-type="aff" rid="edit1"></xref>
</contrib>
</contrib-group>
<aff id="edit1">University of Minnesota, United States of America</aff>
<author-notes>
<corresp id="cor1">* E-mail:
<email>koelsch@cbs.mpg.de</email>
</corresp>
<fn fn-type="con">
<p>Conceived and designed the experiments: SK SK. Performed the experiments: SK SS. Analyzed the data: SK NS SS. Contributed reagents/materials/analysis tools: SK. Wrote the paper: SK NS. Other: Supervised work: SK.</p>
</fn>
</author-notes>
<pub-date pub-type="collection">
<year>2008</year>
</pub-date>
<pub-date pub-type="epub">
<day>9</day>
<month>7</month>
<year>2008</year>
</pub-date>
<volume>3</volume>
<issue>7</issue>
<elocation-id>e2631</elocation-id>
<history>
<date date-type="received">
<day>9</day>
<month>1</month>
<year>2008</year>
</date>
<date date-type="accepted">
<day>5</day>
<month>6</month>
<year>2008</year>
</date>
</history>
<copyright-statement>Koelsch et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.</copyright-statement>
<copyright-year>2008</copyright-year>
<abstract>
<sec>
<title>Background</title>
<p>There is a lack of neuroscientific studies investigating music processing with naturalistic stimuli, and brain responses to real music are thus largely unknown.</p>
</sec>
<sec>
<title>Methodology/Principal Findings</title>
<p>This study investigates event-related brain potentials (ERPs), skin conductance responses (SCRs) and heart rate (HR) elicited by unexpected chords of piano sonatas as they were originally arranged by composers, and as they were played by professional pianists. From the musical excerpts played by the pianists (with emotional expression), we also created versions without variations in tempo and loudness (without musical expression) to investigate effects of musical expression on ERPs and SCRs. Compared to expected chords, unexpected chords elicited an early right anterior negativity (ERAN, reflecting music-syntactic processing) and an N5 (reflecting processing of meaning information) in the ERPs, as well as clear changes in the SCRs (reflecting that unexpected chords also elicited emotional responses). The ERAN was not influenced by emotional expression, whereas N5 potentials elicited by chords in general (regardless of their chord function) differed between the expressive and the non-expressive condition.</p>
</sec>
<sec>
<title>Conclusions/Significance</title>
<p>These results show that the neural mechanisms of music-syntactic processing operate independently of the emotional qualities of a stimulus, justifying the use of stimuli without emotional expression to investigate the cognitive processing of musical structure. Moreover, the data indicate that musical expression affects the neural mechanisms underlying the processing of musical meaning. Our data are the first to reveal influences of musical performance on ERPs and SCRs, and to show physiological responses to unexpected chords in naturalistic music.</p>
</sec>
</abstract>
<counts>
<page-count count="10"></page-count>
</counts>
</article-meta>
</front>
<body>
<sec id="s1">
<title>Introduction</title>
<p>During the last two decades, numerous studies have investigated neural correlates of music processing, of which surprisingly few actually used authentic musical stimuli (for exceptions, see, e.g.,
<xref ref-type="bibr" rid="pone.0002631-Blood1">[1]</xref>
<xref ref-type="bibr" rid="pone.0002631-Sammler1">[3]</xref>
). For example, the majority of experiments investigating music-syntactic processing used chord sequences played under computerized control without musical expression, and composed in a fashion that is often hardly reminiscent of natural music (with the purpose of controlling for acoustical factors, or of presenting as many stimuli as possible in a relatively short time, e.g.
<xref ref-type="bibr" rid="pone.0002631-Janata1">[4]</xref>
<xref ref-type="bibr" rid="pone.0002631-Koelsch4">[14]</xref>
). With regard to neuroscientific experiments on music-syntactic processing, Koelsch &amp; Mulder
<xref ref-type="bibr" rid="pone.0002631-Koelsch5">[15]</xref>
used chord sequences recorded from classical CDs, but the irregular chords used to investigate harmonic expectancy violations were produced by a pitch-shift of chords, thus not representing natural music-syntactic violations. Similarly, Besson et al.
<xref ref-type="bibr" rid="pone.0002631-Besson1">[16]</xref>
used sung opera melodies, but the irregular notes were introduced by the investigators and were not originally composed that way (see also Besson &amp; Faïta
<xref ref-type="bibr" rid="pone.0002631-Besson2">[17]</xref>
; in that study, melodies were played by a computer, and irregular notes were introduced by the investigators). To our knowledge, the only study that has investigated brain responses to irregular chords as composed by a composer employed chorales from J.S. Bach
<xref ref-type="bibr" rid="pone.0002631-Steinbeis1">[18]</xref>
. However, although that study used the unexpected harmonies originally composed by Bach to investigate music-syntactic processing, the chorales were played under computerized control (without musical expression), thus not sounding like natural music (see also Patel et al.
<xref ref-type="bibr" rid="pone.0002631-Patel1">[19]</xref>
, for a study with self-composed music in popular style). It is therefore an open question whether the hypotheses derived from previous neurophysiological studies on music-syntactic processing also apply to natural music.</p>
<p>In the present study, we used excerpts from classical piano sonatas, played by professional pianists, to investigate music-syntactic processing. The excerpts contained a music-syntactically irregular chord as originally composed by the composer. This allowed us to test whether brain responses observed in previous studies in response to music-syntactic irregularities (particularly the early right anterior negativity [ERAN] and the N5
<xref ref-type="bibr" rid="pone.0002631-Koelsch1">[5]</xref>
,
<xref ref-type="bibr" rid="pone.0002631-Leino1">[20]</xref>
<xref ref-type="bibr" rid="pone.0002631-Miranda1">[23]</xref>
) can also be observed when listening to an authentic, and expressively played, musical stimulus. For purposes of comparison, additional conditions were created in which the originally unexpected chords (as composed by the composers) were rendered harmonically expected, and harmonically very unexpected (see
<xref ref-type="fig" rid="pone-0002631-g001">Figure 1</xref>
).</p>
<fig id="pone-0002631-g001" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0002631.g001</object-id>
<label>Figure 1</label>
<caption>
<title>Examples of experimental stimuli.</title>
<p>First, the original version of a piano sonata was played by a pianist. This original version contained an unexpected chord as arranged by the composer (see middle panel in the lower right). After the recording, the MIDI file with the unexpected (original) chord was modified offline using MIDI software so that the unexpected chord became an expected, or a very unexpected, chord (see top and bottom panels). From each of these three versions, another version without musical expression was created by eliminating variations in tempo and key-stroke velocities (excerpts were modified offline using MIDI software). Thus, there were six versions of each piano sonata: versions with expected, unexpected, and very unexpected chords, and each of these versions played with and without musical expression.</p>
</caption>
<graphic xlink:href="pone.0002631.g001"></graphic>
</fig>
<p>Moreover, we also produced non-expressive counterparts of the expressively played musical stimuli. These non-expressive stimuli did not contain any variation in tempo (and all notes were played with the same key-stroke velocity), enabling us to compare ERP responses to unexpected harmonies between conditions in which the music was played expressively and conditions in which it was presented without any expression. So far, no ERP study has investigated the influence of musical performance on music perception, and it is not known whether the neural correlates underlying the processing of syntactic information are influenced by emotional expression. Previous studies have suggested that both the ERAN and the N5 reflect cognitive, not affective, processes (the ERAN reflecting the processing of music-syntactic information, and the N5 reflecting processes of harmonic integration; e.g.
<xref ref-type="bibr" rid="pone.0002631-Koelsch6">[24]</xref>
). Thus, it was expected that neither the ERAN nor the N5 would be influenced by aspects giving rise to emotion, and that they would therefore not differ between the non-expressive and the expressive condition.</p>
<p>Nevertheless, it is widely assumed that irregular musical events (such as music-syntactically irregular chords) give rise to emotional responses. Harmonically unexpected chords may lead to surprise or a feeling of suspense
<xref ref-type="bibr" rid="pone.0002631-Meyer1">[25]</xref>
,
<xref ref-type="bibr" rid="pone.0002631-Steinbeis1">[18]</xref>
. In his classic text on musical meaning and emotion, Leonard Meyer
<xref ref-type="bibr" rid="pone.0002631-Meyer1">[25]</xref>
theorized that listeners often have (implicit) expectations of what will happen in the music and, depending on whether these expectations are fulfilled or not, experience relaxation or tension and suspense. A previous study from Steinbeis et al.
<xref ref-type="bibr" rid="pone.0002631-Steinbeis1">[18]</xref>
provided a direct test of this theory, investigating the role of music-specific expectations in the generation of emotional responses in the listener. In that study, unexpected chords elicited not only ERAN and N5 potentials in the EEG, but also an increased skin conductance response (SCR). Because the study from Steinbeis et al.
<xref ref-type="bibr" rid="pone.0002631-Steinbeis1">[18]</xref>
is, to our knowledge, the only study empirically testing a theory about how music evokes emotions (see also
<xref ref-type="bibr" rid="pone.0002631-Juslin1">[26]</xref>
), we aimed to replicate the findings of that study. We therefore also recorded SCRs elicited by expected and unexpected chords with the hypothesis that music-syntactically irregular chords (which are perceived as less expected by listeners) elicit a stronger SCR compared to regular chords. In addition to the SCRs, we also recorded the heart rate (HR) to examine whether sympathetic effects elicited by unexpected harmonies can also be reflected in HR changes.</p>
<p>Additionally, our experimental design also allowed us to compare SCRs (and HR) between the expressive and the non-expressive condition. Previous studies have shown that expressive intentions by performers (such as tension and relaxation) are encoded by expressive cues (for example, tempo and loudness) to communicate emotion in a musical performance (for a review, see
<xref ref-type="bibr" rid="pone.0002631-Juslin2">[27]</xref>
). Because harmonically unexpected chords are widely seen as a means to produce tension (see above), it was expected that performers play such chords in a way that produces a larger emotional response than when the chords are played without musical expression. Thus, we hypothesized that the SCRs elicited by unexpected chords (as compared to the SCRs elicited by expected chords) would be larger in the expressive than in the non-expressive condition.</p>
<p>In summary, we investigated ERPs, SCRs, and HR in response to unexpected chords (as composed by classical composers) in a condition in which musical excerpts were played with musical expression by a pianist, and in a condition in which these excerpts were played without musical expression by a computer (without variations in tempo or loudness). We hypothesized that unexpected harmonies would elicit an ERAN and an N5, and that neither ERP would be influenced by musical expression. Moreover, we hypothesized that unexpected chords would elicit stronger SCRs, and increased HR, compared to expected ones. With regard to musical expression, we hypothesized that expressive chords would elicit stronger SCRs than non-expressive chords, and that the SCR effect of unexpected chords (i.e., SCRs to expected chords subtracted from SCRs to unexpected chords) would be larger in the expressive than in the non-expressive condition.</p>
</sec>
<sec sec-type="methods" id="s2">
<title>Methods</title>
<sec id="s2a">
<title>Participants</title>
<p>Twenty individuals (aged 19–29 years, mean 24.7; 10 females) participated in the experiment. Subjects were non-musicians who had not received any formal musical training beyond normal school education. All participants had a laterality quotient >90 according to the Edinburgh Handedness Inventory
<xref ref-type="bibr" rid="pone.0002631-Oldfield1">[28]</xref>
. Written informed consent was obtained; the study was approved by the local ethics committee of the University of Leipzig and was conducted in accordance with the Declaration of Helsinki.</p>
</sec>
<sec id="s2b">
<title>Stimuli</title>
<p>Stimuli were excerpts of 8 to 16 s duration, taken from 25 piano sonatas composed by L. v. Beethoven, J. Haydn, W.A. Mozart and F. Schubert. Excerpts were chosen such that they contained a harmonically (slightly) irregular, thus unexpected, chord at the end of the excerpt (usually the onset of a change of key, see
<xref ref-type="fig" rid="pone-0002631-g001">Figure 1</xref>
for an example). Each excerpt was taken from a recording of a longer passage of the respective piano sonata. Passages were played by 4 professional pianists (2 of them female), and recorded using MIDI (musical instrument digital interface) and Cubase SX (Steinberg Media Technologies GmbH, Hamburg, Germany) software.</p>
<p>From the MIDI files of these excerpts (each containing at least one harmonically irregular chord), 25 further MIDI files were generated solely by modifying the tones of the irregular chord in such a way that this chord became the harmonically most regular, and thus the most expected, chord (always the tonic chord; see the example in
<xref ref-type="fig" rid="pone-0002631-g001">Figure 1</xref>
). This procedure was also performed using Cubase SX. Similarly, 25 further MIDI files were generated by turning the harmonically irregular chord into a very irregular chord (always a Neapolitan sixth chord; see also
<xref ref-type="fig" rid="pone-0002631-g001">Figure 1</xref>
). Thus, there were three versions of each of the 25 excerpts: (1) the original version with the unexpected chord (as arranged by the composer), (2) the version in which this chord was expected, and (3) the version in which this chord was very unexpected, resulting in a total of 75 excerpts. Note that all versions of one excerpt were played with identical emotional expression, and that the only difference between these three versions of each excerpt was the different chord function (expected, unexpected, very unexpected) of the critical chord.</p>
<p>From each of these 75 MIDI files, another MIDI file without emotional expression was created by eliminating all agogics (i.e., variations in tempo), and by adjusting the key-stroke velocity of all notes to the same value, thus eliminating all dynamics (velocity was set to the mean velocity of the corresponding expressive version). Thus, there were 150 different MIDI files in total: 25 excerpts × 3 different chord functions (expected, unexpected, very unexpected) × 2 different emotional expressions (expressive, non-expressive).</p>
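This de-expression step lends itself to scripting. The authors did it in Cubase SX; purely as an illustration, a minimal sketch of the velocity flattening with the open-source mido library might look as follows (file names are placeholders). Removing the agogics is omitted here: in a recorded performance, tempo variation lives in the note onset times and would additionally require quantizing onsets to the score grid.

```python
# Illustration only: the authors performed this step in Cubase SX.
import mido

def flatten_dynamics(in_path: str, out_path: str) -> None:
    mid = mido.MidiFile(in_path)

    # Mean key-stroke velocity of the expressive performance
    # (a note_on with velocity 0 acts as a note-off and is left untouched).
    velocities = [msg.velocity
                  for track in mid.tracks
                  for msg in track
                  if msg.type == 'note_on' and msg.velocity > 0]
    mean_velocity = round(sum(velocities) / len(velocities))

    # Eliminate dynamics: strike every note at the mean velocity.
    for track in mid.tracks:
        for i, msg in enumerate(track):
            if msg.type == 'note_on' and msg.velocity > 0:
                track[i] = msg.copy(velocity=mean_velocity)

    mid.save(out_path)

flatten_dynamics('excerpt_expressive.mid', 'excerpt_nonexpressive.mid')
```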
<p>Audio files of all MIDI files were generated as WAV files using Cubase SX and The Grand (Steinberg). Moreover, in addition to these 150 experimental stimuli, three additional pieces were taken and edited in the same way as described above (resulting in eighteen different MIDI files), with the exception that one chord of the excerpt (occurring at the beginning, in the middle, or at the end of the piece) was played by an instrument other than piano (such as marimba, harpsichord, or violin). These timbre deviants were used for the task of participants during the EEG and SCR recordings (see next section for details).</p>
<p>To evaluate the impact of the stimulus material on individually perceived emotions, a behavioural experiment was conducted with an independent group of subjects. Twenty-five non-musicians (age range 19–30 years, mean age 24.3 years, 12 females) were presented with the 150 experimental stimuli (no stimuli with timbre deviants were presented; the behavioural testing session lasted about 45 min). After each excerpt, participants rated how (un)pleasant, aroused, and surprised they felt during the last musical excerpt. Emotional valence (pleasantness) and arousal were assessed using 9-point scales with Self-Assessment Manikins
<xref ref-type="bibr" rid="pone.0002631-Bradley1">[29]</xref>
, with 1 corresponding to very unpleasant (or very relaxing, respectively) and 9 corresponding to very pleasant (or very arousing, respectively). Surprise was assessed using a 9-point Likert scale, with 1 corresponding to not being surprised at all, and 9 to being very surprised.</p>
<p>Excerpts with expected chords only were rated as most pleasant, least arousing, and least surprising, whereas excerpts with a very unexpected chord were rated as least pleasant, most arousing, and most surprising (ratings for excerpts with an unexpected [original] chord lay between ratings for expected and very unexpected chords with regard to valence, arousal, and surprise; see
<xref ref-type="table" rid="pone-0002631-t001">Table 1</xref>
and
<xref ref-type="fig" rid="pone-0002631-g002">Figure 2</xref>
). An ANOVA on the valence ratings with factors chord (expected, unexpected, very unexpected) and expression (expressive, non-expressive) indicated an effect of chord (
<italic>F</italic>
(2,48) = 12.85,
<italic>p</italic>
<.0001), an effect of expression (
<italic>F</italic>
(1,24) = 22.58,
<italic>p</italic>
<.0001), and no two-way interaction (
<italic>p</italic>
 = .57). Likewise, the analogous ANOVA for the arousal ratings indicated an effect of chord (
<italic>F</italic>
(2,48) = 7.36,
<italic>p</italic>
<.005), an effect of expression (
<italic>F</italic>
(1,24) = 62.31,
<italic>p</italic>
<.0001), and no two-way interaction (
<italic>p</italic>
 = .48). Finally, the analogous ANOVA for the surprise ratings again indicated an effect of chord (
<italic>F</italic>
(2,48) = 13.61,
<italic>p</italic>
<.0001), an effect of expression (
<italic>F</italic>
(1,24) = 83.02,
<italic>p</italic>
<.0001), and no two-way interaction (
<italic>p</italic>
 = .23;
<italic>p</italic>
-values were Greenhouse-Geisser corrected in all three ANOVAs). Paired
<italic>t</italic>
-tests conducted separately for valence, arousal, and surprise ratings indicated that all six experimental conditions (see
<xref ref-type="table" rid="pone-0002631-t001">Table 1B</xref>
) differed significantly from each other (
<italic>p</italic>
≤.05 in all tests), except valence and arousal ratings for unexpected and very unexpected in the expressive condition, and surprise ratings for unexpected and expected in the non-expressive condition.</p>
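For readers who want to reproduce this kind of analysis, here is a hypothetical sketch of the chord × expression repeated-measures ANOVA with Greenhouse-Geisser correction, using the pingouin library (the file name and column names are assumptions, not the authors' materials):

```python
import pandas as pd
import pingouin as pg

# Long format: one row per subject x chord x expression cell mean.
df = pd.read_csv('valence_ratings.csv')  # columns: subject, chord, expression, rating

aov = pg.rm_anova(data=df, dv='rating', within=['chord', 'expression'],
                  subject='subject', correction=True, detailed=True)
print(aov)  # includes the sphericity-corrected (p-GG-corr) p-values

# Follow-up paired t-tests between conditions, as reported in the text:
wide = df.pivot_table(index='subject', columns=['expression', 'chord'],
                      values='rating')
print(pg.ttest(wide[('expressive', 'unexpected')],
               wide[('expressive', 'expected')], paired=True))
```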
<fig id="pone-0002631-g002" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0002631.g002</object-id>
<label>Figure 2</label>
<caption>
<title>Average ratings of valence, arousal, and surprise, pooled for expressive and non-expressive excerpts (error bars indicate SEM, 1 corresponded to most unpleasant, least arousing, and least surprising, and 9 to most pleasant, most arousing, and most surprising).</title>
<p>Ratings differed between the three chord types with regards to valence, arousal, and surprise (see text for details).</p>
</caption>
<graphic xlink:href="pone.0002631.g002"></graphic>
</fig>
<table-wrap id="pone-0002631-t001" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0002631.t001</object-id>
<label>Table 1</label>
<caption>
<title>Summary of valence-, arousal-, and surprise-ratings (1 corresponded to most unpleasant, least arousing, and least surprising, and 9 to most pleasant, most arousing, and most surprising).</title>
</caption>
<graphic id="pone-0002631-t001-1" xlink:href="pone.0002631.t001"></graphic>
<table frame="hsides" rules="groups" alternate-form-of="pone-0002631-t001-1">
<colgroup span="1">
<col align="left" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
</colgroup>
<thead>
<tr>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Valence</td>
<td align="left" rowspan="1" colspan="1">Arousal</td>
<td align="left" rowspan="1" colspan="1">Surprise</td>
</tr>
</thead>
<tbody>
<tr>
<td colspan="4" align="left" rowspan="1">
<bold>A</bold>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Expected</td>
<td align="left" rowspan="1" colspan="1">6.08±.21</td>
<td align="left" rowspan="1" colspan="1">4.08±.21</td>
<td align="left" rowspan="1" colspan="1">3.27±.28</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Unexpected (original)</td>
<td align="left" rowspan="1" colspan="1">5.86±.21</td>
<td align="left" rowspan="1" colspan="1">4.26±.21</td>
<td align="left" rowspan="1" colspan="1">3.48±.28</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Very unexpected</td>
<td align="left" rowspan="1" colspan="1">5.70±.20</td>
<td align="left" rowspan="1" colspan="1">4.34±.23</td>
<td align="left" rowspan="1" colspan="1">3.71±.28</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Expressive</td>
<td align="left" rowspan="1" colspan="1">5.67±.21</td>
<td align="left" rowspan="1" colspan="1">4.48±.22</td>
<td align="left" rowspan="1" colspan="1">3.91±.28</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Non-expressive</td>
<td align="left" rowspan="1" colspan="1">6.08±.21</td>
<td align="left" rowspan="1" colspan="1">3.98±.21</td>
<td align="left" rowspan="1" colspan="1">3.06±.28</td>
</tr>
<tr>
<td colspan="4" align="left" rowspan="1">
<bold>B</bold>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Expressive: expected</td>
<td align="left" rowspan="1" colspan="1">5.86±.21</td>
<td align="left" rowspan="1" colspan="1">4.34±.23</td>
<td align="left" rowspan="1" colspan="1">3.64±.28</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Expressive: unexpected</td>
<td align="left" rowspan="1" colspan="1">5.68±.20</td>
<td align="left" rowspan="1" colspan="1">4.55±.22</td>
<td align="left" rowspan="1" colspan="1">3.93±.28</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Expressive: very unexpected</td>
<td align="left" rowspan="1" colspan="1">5.47±.20</td>
<td align="left" rowspan="1" colspan="1">4.57±.24</td>
<td align="left" rowspan="1" colspan="1">4.15±.28</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Non-expressive: expected</td>
<td align="left" rowspan="1" colspan="1">6.29±.22</td>
<td align="left" rowspan="1" colspan="1">3.84±.21</td>
<td align="left" rowspan="1" colspan="1">2.89±.28</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Non-expressive: unexpected</td>
<td align="left" rowspan="1" colspan="1">6.04±.23</td>
<td align="left" rowspan="1" colspan="1">3.98±.21</td>
<td align="left" rowspan="1" colspan="1">3.03±.29</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Non-expressive: very unexpected</td>
<td align="left" rowspan="1" colspan="1">5.92±.21</td>
<td align="left" rowspan="1" colspan="1">4.12±.23</td>
<td align="left" rowspan="1" colspan="1">3.27±.28</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn id="nt101">
<p>
<bold>A</bold>
shows ratings (mean and SEM) averaged across all excerpts with expected chords only, with an unexpected (original) chord, and a very unexpected chord, as well as ratings averaged across all expressive and all non-expressive excerpts.
<bold>B</bold>
shows ratings (mean and SEM) separately for each of the six experimental conditions.</p>
</fn>
</table-wrap-foot>
</table-wrap>
</sec>
<sec id="s2c">
<title>Procedure</title>
<p>Participants were informed about the chords played with a deviant instrument, were asked to detect such chords, and indicated their detection by pressing a response button. As examples, two sequences with a deviant instrument were presented before the start of the experiment. The deviant instruments were employed only to check whether participants attended to the musical stimulus (this method has already been used in previous studies; e.g.,
<xref ref-type="bibr" rid="pone.0002631-Koelsch1">[5]</xref>
,
<xref ref-type="bibr" rid="pone.0002631-Koelsch3">[13]</xref>
,
<xref ref-type="bibr" rid="pone.0002631-Leino1">[20]</xref>
,
<xref ref-type="bibr" rid="pone.0002631-Loui1">[21]</xref>
,
<xref ref-type="bibr" rid="pone.0002631-Miranda1">[23]</xref>
). Participants were not informed about the experimental conditions of interest; that is, they were informed neither about the different chord functions nor about the manipulations of emotional expression. During the experimental session, participants were instructed to look at a fixation cross. Each excerpt was followed by a silence interval of 1 s. Each stimulus was presented twice during the experiment to increase the signal-to-noise ratio, and the ordering of stimuli was pseudo-randomized. The duration of an experimental session was approximately 70 min.</p>
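The paper does not spell out its pseudo-randomization constraints; as an illustration only, the following sketch assumes one plausible constraint, namely that the identical stimulus never occurs on two consecutive trials:

```python
# Illustrative pseudo-randomization: each of the 150 stimuli occurs twice,
# and the order is reshuffled until no stimulus repeats immediately
# (the constraint itself is an assumption, not taken from the paper).
import random

def pseudo_randomize(stimuli, repeats=2, seed=None):
    rng = random.Random(seed)
    order = [s for s in stimuli for _ in range(repeats)]
    while True:  # rejection sampling: reshuffle until the constraint holds
        rng.shuffle(order)
        if all(a != b for a, b in zip(order, order[1:])):
            return order

stimuli = [{'excerpt': e, 'chord': c, 'expression': x}
           for e in range(25)
           for c in ('expected', 'unexpected', 'very unexpected')
           for x in ('expressive', 'non-expressive')]
trial_order = pseudo_randomize(stimuli, seed=1)
```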
</sec>
<sec id="s2d">
<title>Data Recording and Analysis</title>
<p>The EEG was recorded using Ag/AgCl electrodes from 32 locations of the extended 10–20 system (FP1, FP2, AFz, AF3, AF4, AF7, AF8, Fz, F3, F4, F7, F8, FC3, FC4, FT7, FT8, Cz, C3, C4, T7, T8, CP5, CP6, Pz, P3, P4, P7, P8, O1, O2, nose-tip, and right mastoid), with an electrode placed on the left mastoid as reference. The sampling rate was 500 Hz. After the measurement, EEG data were re-referenced to the algebraic mean of the left and right mastoid electrodes (to obtain a symmetric reference), and filtered using a 0.25–25-Hz band-pass filter (1001 points, finite impulse response) to reduce artifacts. Horizontal and vertical electrooculograms (EOGs) were recorded bipolarly.</p>
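A minimal sketch of this offline preprocessing in MNE-Python (the recording file format and the mastoid channel names 'M1'/'M2' are assumptions; the paper does not state them):

```python
import mne

raw = mne.io.read_raw_brainvision('subject01.vhdr', preload=True)

# Re-reference to the algebraic mean of the left and right mastoids
# (the recording reference was the left mastoid).
raw.set_eeg_reference(ref_channels=['M1', 'M2'])

# 0.25-25 Hz band-pass FIR filter, as in the paper (MNE chooses its own
# filter length; the authors used a 1001-point FIR filter).
raw.filter(l_freq=0.25, h_freq=25.0, fir_design='firwin')
```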
<p>For measurement of the skin conductance response (SCR) two electrodes were placed on the medial phalanx of the index and middle finger of the left hand. For the calculation of the inter-heartbeat-interval (IBI), an electrocardiogram (ECG) was measured by placing two electrodes at the inner sides of the wrists of the left and the right arm.</p>
<p>For rejection of artifacts in the EEG data, each sampling point was centred in a gliding window and rejected if the standard deviation within the window exceeded a threshold value: artifacts caused by drifts or body movements were eliminated by rejecting sampling points whenever the standard deviation of a 200-ms or 800-ms gliding window exceeded 25 µV at any EEG electrode. Eye artifacts were rejected whenever the standard deviation of a 200-ms gliding window exceeded 25 µV at the vertical or the horizontal EOG (rejections were verified by the authors). ERPs were calculated using a 200-ms prestimulus baseline.</p>
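The gliding-window criterion translates directly into NumPy. A sketch, assuming a (channels × samples) float array in microvolts sampled at 500 Hz:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def bad_samples(data, sfreq=500.0, win_ms=200.0, thresh_uv=25.0):
    """True for samples whose centred gliding window exceeds the threshold
    (standard deviation over the window, on any channel)."""
    win = int(round(win_ms / 1000.0 * sfreq))
    # (n_channels, n_samples - win + 1, win) windowed view, no copy made
    std = sliding_window_view(data, win, axis=1).std(axis=-1)
    bad = np.zeros(data.shape[1], dtype=bool)
    centre = win // 2  # assign each window's verdict to its centre sample
    bad[centre:centre + std.shape[1]] = (std > thresh_uv).any(axis=0)
    return bad

# Combining the paper's criteria (200-ms and 800-ms windows, 25 uV):
# mask = bad_samples(eeg, win_ms=200) | bad_samples(eeg, win_ms=800)
```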
<p>The electrodermal activity (EDA) data were visually inspected and checked for artifacts caused by movement or failures of the recording device. Data were rejected whenever there was an unusually steep onset of the EDA.</p>
</sec>
<sec id="s2e">
<title>Data-analysis</title>
<p>For statistical analysis, mean amplitude values were computed for four regions of interest (ROIs): left anterior (F7, F3, FT7, FC3), right anterior (F8, F4, FT8, FC4), left posterior (C3, CP5, P7, P3) and right posterior (C4, CP6, P4, P8).</p>
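The ROI definitions map naturally onto a small lookup table; a sketch of the mean-amplitude computation (the array layout is assumed):

```python
# Assumes an averaged waveform `erp` of shape (n_channels, n_times) with
# matching channel-name list and time vector (seconds).
import numpy as np

ROIS = {
    'left anterior':   ['F7', 'F3', 'FT7', 'FC3'],
    'right anterior':  ['F8', 'F4', 'FT8', 'FC4'],
    'left posterior':  ['C3', 'CP5', 'P7', 'P3'],
    'right posterior': ['C4', 'CP6', 'P4', 'P8'],
}

def roi_mean(erp, ch_names, times, roi, t_min, t_max):
    """Mean amplitude of one ROI within a time window."""
    ch_idx = [ch_names.index(ch) for ch in ROIS[roi]]
    t_idx = np.where((times >= t_min) & (times <= t_max))[0]
    return erp[np.ix_(ch_idx, t_idx)].mean()

# e.g. the ERAN window (140-220 ms) in the right anterior ROI:
# roi_mean(erp_diff, ch_names, times, 'right anterior', 0.140, 0.220)
```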
<p>To test whether ERPs to expected (regular) and unexpected (irregular) chords differ from each other, and whether such differences are lateralized or differ between anterior and posterior scalp regions, amplitude values of ERPs were analyzed statistically by repeated measures ANOVAs. ANOVAs were conducted with factors chord (expected, unexpected, very unexpected), hemisphere (left, right ROIs), and anterior–posterior distribution (anterior, posterior ROIs). Main effects of chord, as well as interactions involving factor chord were adjusted using the Greenhouse-Geisser correction. All statistical analyses of ERPs were computed on the data referenced to the algebraic mean of M1 and M2. The time window for statistical analysis of the ERAN was 140–220 ms, for the N5 500–580 ms. Because the ERAN is defined as the difference between regular (expected) and irregular (unexpected) chords (e.g.
<xref ref-type="bibr" rid="pone.0002631-Koelsch1">[5]</xref>
,
<xref ref-type="bibr" rid="pone.0002631-Leino1">[20]</xref>
,
<xref ref-type="bibr" rid="pone.0002631-Loui1">[21]</xref>
,
<xref ref-type="bibr" rid="pone.0002631-Koelsch3">[13]</xref>
,
<xref ref-type="bibr" rid="pone.0002631-Koelsch7">[30]</xref>
), amplitude values and latencies of the ERAN were calculated from the difference ERPs (expected subtracted from unexpected, and expected subtracted from very unexpected chords, respectively). Amplitudes of ERAN and N5 effects (expected subtracted from unexpected, and expected subtracted from very unexpected chords) were tested one-sided according to our hypotheses. To facilitate legibility of ERPs, ERPs were low-pass filtered after statistical evaluation (10 Hz, 41 points, finite impulse response).</p>
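A sketch of the difference-wave logic and the one-sided test (the per-subject amplitudes below are synthetic placeholders for illustration, not the study's data):

```python
# ERAN quantified as a difference amplitude (expected subtracted from
# unexpected), tested one-sided per the hypothesis that it is negative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
amp_expected = rng.normal(0.0, 1.0, 20)                  # placeholder data
amp_unexpected = amp_expected - 0.8 + rng.normal(0.0, 0.5, 20)

eran = amp_unexpected - amp_expected                     # difference amplitudes
t, p = stats.ttest_rel(amp_unexpected, amp_expected, alternative='less')
print(f'ERAN: {eran.mean():.2f} uV, t = {t:.2f}, one-sided p = {p:.4f}')
```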
</sec>
</sec>
<sec id="s3">
<title>Results</title>
<sec id="s3a">
<title>Behavioural data (timbre detection task)</title>
<p>Participants detected 96.2 percent of the timbre deviants in the expressive, and 95.9 percent in the non-expressive condition (
<italic>p</italic>
 = .98, paired
<italic>t</italic>
-test), indicating that participants attended to the timbre of the musical stimulus, and that they did not have difficulties in reliably detecting the timbre deviants (neither in the expressive, nor in the non-expressive condition).</p>
</sec>
<sec id="s3b">
<title>Electrodermal activity</title>
<p>
<xref ref-type="fig" rid="pone-0002631-g003">Figure 3A</xref>
shows the skin conductance responses (SCRs) to expected, unexpected (original), and very unexpected chords, averaged across all subjects (and across both expressive and non-expressive conditions). Compared to expected chords, both unexpected and very unexpected chords elicited a tonic SCR with an onset of around 500 ms, the SCR being largest for very unexpected chords.
<xref ref-type="fig" rid="pone-0002631-g003">Figure 3B</xref>
shows the SCRs separately for all chords played with and without expression. The expressive chords elicited a more phasic SCR which was stronger than the SCR to non-expressive chords, the SCR amplitude being maximal at around 2.5 seconds.</p>
<fig id="pone-0002631-g003" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0002631.g003</object-id>
<label>Figure 3</label>
<caption>
<title>Skin conductance responses (SCRs).</title>
<p>A: Grand-average of SCRs elicited by expected, unexpected (original), and very unexpected chords (averaged across expressive and non-expressive conditions). Compared to expected chords, unexpected and very unexpected chords elicited clear SCRs. Notably, the SCR elicited by very unexpected chords was larger than the SCR to unexpected (original) chords, showing that the magnitude of SCRs is related to the degree of harmonic expectancy violation. B: Grand-average of SCRs elicited by expressive and non-expressive chords (averaged across expected, unexpected, and very unexpected conditions). Compared to non-expressive chords, chords played with musical expression elicited a clear SCR.</p>
</caption>
<graphic xlink:href="pone.0002631.g003"></graphic>
</fig>
<p>A global ANOVA with factors chord (expected, unexpected, very unexpected) and expression (expressive, non-expressive) for a time window ranging from 1.5 to 3.5 sec indicated an effect of expression (
<italic>F</italic>
(1,19) = 5.17,
<italic>p</italic>
<0.05, reflecting that expressive chords elicited a stronger SCR than non-expressive chords), a marginal effect of chord (
<italic>F</italic>
(2,38) = 2.53,
<italic>p</italic>
 = 0.09, reflecting that SCRs to very unexpected, unexpected, and expected chords differed from each other), but no two-way interaction (
<italic>p</italic>
 = .79). The effect of chord was significant (
<italic>F</italic>
(1,19) = 4.66,
<italic>p</italic>
<.05) when an analogous ANOVA was conducted for a longer time window (1.5 to 8 s, justified because the effect of chord was more tonic than the effect of expression, see
<xref ref-type="fig" rid="pone-0002631-g003">Figure 3A</xref>
vs. 3B). Two-tailed paired
<italic>t</italic>
-tests showed that this effect of chord was due to a significant difference in SCRs between very unexpected and expected chords (
<italic>p</italic>
<0.05), and between very unexpected and unexpected chords (
<italic>p</italic>
<.05). Although clearly visible in the waveforms, the difference in SCRs between unexpected and expected chords was not statistically significant (
<italic>p</italic>
 = .22).</p>
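One plausible way to quantify such SCR effects is the mean conductance in a post-onset window relative to a pre-onset baseline (the paper mentions baseline-corrected SCRs and the 1.5–3.5 s and 1.5–8 s windows; variable names and sampling details here are assumptions):

```python
import numpy as np

def scr_amplitude(eda, sfreq, onset_s, baseline_s=1.0, window=(1.5, 3.5)):
    """Mean EDA in `window` (seconds after chord onset) minus the mean of a
    pre-onset baseline. eda: 1-D skin-conductance trace."""
    onset = int(round(onset_s * sfreq))
    baseline = eda[onset - int(baseline_s * sfreq):onset].mean()
    segment = eda[onset + int(window[0] * sfreq):onset + int(window[1] * sfreq)]
    return segment.mean() - baseline
```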
<p>Although the ANOVAs did not indicate an interaction between factors chord and expression, we also inspected the single-subject data sets for differences in SCR effects between the expressive and the non-expressive conditions (to rule out that the large variance typical of SCR data rendered the results of the ANOVA spurious). In 16 (out of 20) participants, the difference in SCRs between very unexpected and expected chords was larger for the expressive than for the non-expressive chords, but only 9 participants showed this effect of expression for the difference between unexpected (original) and expected chords. A Chi-Square test on the SCR effects of very unexpected chords (SCRs to expected chords subtracted from SCRs to very unexpected chords in the time window from 1.5 to 3.5 s) indicated that significantly more subjects displayed a larger SCR when chords were played with expression than when they were played without expression (
<italic>χ</italic><sup>2</sup>
(1) = 7.2,
<italic>p</italic>
<0.01).</p>
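The reported statistic can be checked directly from the counts in the text: 16 of 20 subjects showed the larger expressive SCR, tested against the 10/10 split expected under the null hypothesis.

```python
from scipy.stats import chisquare

stat, p = chisquare([16, 4], f_exp=[10, 10])
print(stat, p)  # chi-square(1) = 7.2, p ~ .007 (< .01, as reported)
```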
</sec>
<sec id="s3c">
<title>Heart rate</title>
<p>There were no significant differences in the inter-heartbeat interval (IBI) following the presentation of the three types of chords (expected, unexpected, very unexpected) in any of the three time windows (0–2 sec:
<italic>p</italic>
>.7; 2–4 sec:
<italic>p</italic>
>.3; 4–6 sec:
<italic>p</italic>
>.3), and the IBIs were identical when calculated for entire expressive and non-expressive excerpts (0.92 sec in each condition).</p>
</sec>
<sec id="s3d">
<title>Electroencephalogram</title>
<sec id="s3d1">
<title>ERAN</title>
<p>Compared to the expected chords, both unexpected (original) and very unexpected chords elicited an ERAN (
<xref ref-type="fig" rid="pone-0002631-g004">Figure 4</xref>
). The peak latency of the ERAN elicited by unexpected chords (expected subtracted from unexpected chords) was 158 ms, and 180 ms for the ERAN elicited by very unexpected chords (expected subtracted from very unexpected chords). As expected, the ERAN elicited by very unexpected chords was nominally larger (−0.84 µV) than the ERAN elicited by unexpected chords (−0.74 µV), but this difference was not statistically significant (amplitude values were calculated for frontal ROIs in the time window from 140 to 220 ms as difference potentials, expected subtracted from [very] unexpected chords). Moreover, the ERAN was identical for the expressive and the non-expressive condition (−0.79 µV in each condition).</p>
<fig id="pone-0002631-g004" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0002631.g004</object-id>
<label>Figure 4</label>
<caption>
<title>Grand-average of brain electric responses to expected, unexpected (original), and very unexpected chords (averaged across expressive and non-expressive conditions).</title>
<p>Compared to expected chords, both unexpected and very unexpected chords elicited an ERAN and an N5. The insets in the two bottom panels show isopotential maps of the ERAN and the N5 effect (expected subtracted from [very] unexpected chords).</p>
</caption>
<graphic xlink:href="pone.0002631.g004"></graphic>
</fig>
<p>A global ANOVA with factors chord (expected, unexpected, very unexpected), expression (expressive, non-expressive), anterior-posterior distribution, and hemisphere for a time window from 140 to 220 ms indicated an interaction between factors chord, anterior-posterior and hemisphere (
<italic>F</italic>
(2,38) = 3.6,
<italic>p</italic>
<.05, reflecting that the ERAN amplitude was largest at right anterior leads). A follow-up ANOVA with factors chord and expression for the right anterior ROI indicated an effect of chord (
<italic>F</italic>
(2,38) = 4.01,
<italic>p</italic>
<.05; reflecting that ERPs elicited by expected, unexpected, and very unexpected chords differed from each other), but no effect of expression (
<italic>p</italic>
 = .26), and no two-way interaction (
<italic>p</italic>
 = .73). Paired
<italic>t</italic>
-tests for the anterior ROI indicated that both unexpected (compared to expected) and very unexpected (compared to expected) chords elicited an ERAN (both
<italic>p</italic>
<.05). A paired
<italic>t</italic>
-test for the anterior ROI comparing directly the ERP amplitudes of unexpected and very unexpected chords did not indicate a difference (
<italic>p</italic>
 = 0.36).</p>
</sec>
<sec id="s3d2">
<title>N5</title>
<p>In the ERPs of both unexpected and very unexpected chords, the ERAN was followed by a late negativity (the N5,
<xref ref-type="fig" rid="pone-0002631-g004">Figure 4</xref>
). Compared to expected chords, the N5 was nominally larger for very unexpected than for unexpected (original) chords, but this difference was not statistically significant. The amplitudes of the N5 effects (unexpected or very unexpected chords compared to expected chords) did not clearly differ between the expressive and the non-expressive condition. Interestingly, when comparing all expressive chords to all non-expressive chords, the N5 was larger for expressive chords (
<xref ref-type="fig" rid="pone-0002631-g005">Figure 5</xref>
).</p>
<fig id="pone-0002631-g005" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0002631.g005</object-id>
<label>Figure 5</label>
<caption>
<title>Grand-average of brain electric responses to expressive and non-expressive chords (averaged across expected, unexpected, and very unexpected conditions).</title>
<p>Expressive chords elicited a negative effect in the N100-range (being maximal at central electrodes), and an N5 that was larger than the N5 elicited by non-expressive chords. The bottom insets show isopotential maps of the N1 and N5 effect (non-expressive subtracted from expressive chords).</p>
</caption>
<graphic xlink:href="pone.0002631.g005"></graphic>
</fig>
<p>A global ANOVA with factors chord (expected, unexpected, very unexpected), expression, anterior-posterior distribution, and hemisphere for the time window from 500 to 580 ms indicated an interaction between chord and anterior-posterior (
<italic>F</italic>
(1,19) = 13.77,
<italic>p</italic>
<0.0001, reflecting that the N5 was larger over anterior than over posterior regions). The analogous ANOVA for anterior ROIs indicated significant effects of chord (
<italic>F</italic>
(2,38) = 2.96,
<italic>p</italic>
<0.05, tested one-sided according to our hypothesis) and of expression (
<italic>F</italic>
(1,19) = 7.27,
<italic>p</italic>
<0.02), but no interaction between factors chord and expression (
<italic>p</italic>
 = .17). Paired
<italic>t</italic>
-tests for the anterior ROIs indicated that both unexpected (compared to expected) and very unexpected (compared to expected) chords elicited significant effects (both
<italic>p</italic>
<.05). A paired
<italic>t</italic>
-test for the anterior ROI comparing directly the ERP amplitudes of unexpected and very unexpected chords did not indicate a difference (
<italic>p</italic>
 = 0.76).</p>
</sec>
<sec id="s3d3">
<title>P3a</title>
<p>Following the ERAN, the ERPs of both unexpected and very unexpected chords compared to expected chords were more positive around 300 ms, particularly over left anterior leads (see
<xref ref-type="fig" rid="pone-0002631-g004">Figure 4</xref>
). To test whether this difference reflects the elicitation of a P3a (see
<xref ref-type="sec" rid="s4">
<italic>Discussion</italic>
</xref>
for functional significance of the P3a), an ANOVA was conducted with factors chord (expected, unexpected, very unexpected), expression, and hemisphere for anterior ROIs and a time window from 250–350 ms. This ANOVA did not indicate an effect of chord (
<italic>p</italic>
 = .72), nor any two- or three-way interaction, indicating that neither unexpected nor very unexpected chords elicited significant P3a effects.</p>
</sec>
<sec id="s3d4">
<title>Expressive vs. non-expressive chords</title>
<p>
<xref ref-type="fig" rid="pone-0002631-g005">Figure 5</xref>
shows ERPs elicited by all chords in the expressive and the non-expressive condition. In addition to the larger N5 elicited by expressive compared to non-expressive chords (see above), the expressive chords also elicited an increased negativity over central electrodes in the time window from 80 to 120 ms (this effect is presumably an increased N100, because the critical chords were usually played more loudly in the expressive condition; see also
<xref ref-type="sec" rid="s4">
<italic>Discussion</italic>
</xref>
).</p>
<p>An ANOVA for the time-window from 80 to 120 ms with factors expression, anterior-posterior distribution, and hemisphere did not indicate an effect of expression (
<italic>p</italic>
 = .16). However, an ANOVA computed for central electrodes only (T7, T8, C3, C4) with factor expression (time window 80 to 120 ms) indicated a significant effect (
<italic>F</italic>
(1,19) = 4.70,
<italic>p</italic>
<.05).</p>
</sec>
</sec>
</sec>
<sec id="s4">
<title>Discussion</title>
<sec id="s4a">
<title>Electrodermal activity and heart rate</title>
<p>Compared to expected chords, very unexpected chords elicited a significant skin conductance response (SCR). A smaller SCR effect was clearly observable for unexpected (original) chords, although this effect was not statistically significant. These SCR data replicate findings from a previous study
<xref ref-type="bibr" rid="pone.0002631-Steinbeis1">[18]</xref>
, suggesting that unexpected harmonies elicit an emotional response, and that the strength of this response increases with increasing unexpectedness of a harmony. The notion that the SCRs to (very) unexpected chords reflect effects of emotional processing is also supported by the emotion ratings: These ratings showed that excerpts with expected chords differed from those with an unexpected, and even more so from those with a very unexpected chord in terms of emotional valence, arousal, and surprise. Our findings hence lend further support to the theory of Meyer
<xref ref-type="bibr" rid="pone.0002631-Meyer1">[25]</xref>
that violations of harmonic expectancy elicit emotional responses in listeners (such as tension-relaxation or surprise; see also
<xref ref-type="bibr" rid="pone.0002631-Juslin1">[26]</xref>
).</p>
<p>It is unlikely that the SCRs elicited by the (very) unexpected chords were simply due to attentional mechanisms or orienting reflexes which could have been triggered by these chords, because the ERP analysis showed that neither unexpected nor very unexpected chords elicited a significant P3a. Attentional mechanisms and orienting reflexes are usually reflected in P3a potentials (e.g.,
<xref ref-type="bibr" rid="pone.0002631-Polich1">[31]</xref>
), even when subjects do not have to attend to a stimulus, or a stimulus dimension. If the significant SCR to very unexpected chords (as well as the SCR to unexpected chords) could be explained by attention-capturing mechanisms (or orienting reflexes) triggered by such chords, then a significant P3a should have been observable in the ERPs, which was not the case.</p>
<p>The SCRs to all chords played with expression were larger than the SCRs to all non-expressive chords. Note that, originally, the expressive chords were all slightly unexpected chords as arranged by the composer (and these original versions with unexpected chords were the ones played by the pianists). To increase the emotional response to such unexpected harmonies, performers use means of emotional expression, such as playing the notes with increased or decreased key-stroke velocity (e.g., an accent, or an unexpectedly soft timbre). Thus, even when rendered expected or very unexpected, the critical chords in the expressive condition all differed in key-stroke velocity from the preceding chords (most of them being played with an accent, i.e., with increased loudness), whereas all notes in the non-expressive condition were played with the same key-stroke velocity as all other chords. It is probable that this increased loudness of the expressive chords led to the increased SCR (as well as to the increased N100 amplitude to expressive chords). With regard to the SCR, an alternative explanation is that all chords in the expressive condition elicited stronger electrodermal activity simply because they were more emotionally expressive and elicited the corresponding emotion-related response in the listener (consistent with the behavioural data showing that expressive excerpts were perceived as more arousing than non-expressive excerpts); this issue remains to be clarified.</p>
<p>It is implausible that the SCRs to expressive chords were larger simply because of increased resources required to discern the timbre deviants, or because of increased expectancies for the timbre deviants during the expressive condition (which could have led to increased tonic levels of electrodermal activity): First, general differences in processing demands would have been reflected in the skin conductance level across entire excerpts (and would thus not be visible in the SCRs to chords, which were calculated relative to a baseline preceding the chords). Second, the behavioural data of the timbre detection task indicate that this task was comparably easy in the expressive and the non-expressive condition. Finally, the heart rate calculated for entire excerpts (reflecting the vagal tone during listening to the excerpts) was identical for expressive and non-expressive excerpts, rendering it unlikely that the task of detecting the timbre deviants was more engaging, or more attention-demanding, in the expressive condition.</p>
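<p>A minimal sketch of the baseline-correction logic described above, assuming an electrodermal recording sampled at 100 Hz and illustrative window lengths (all values hypothetical): the SCR amplitude is the peak conductance in a post-stimulus window minus the mean of a pre-stimulus baseline, so tonic level differences between conditions cancel out.</p>
import numpy as np

def scr_amplitude(eda, onset, fs=100, baseline_s=1.0, response_s=(1.0, 4.0)):
    """Peak skin conductance in a post-stimulus window minus the mean
    of the pre-stimulus baseline (eda in microsiemens, onset in s)."""
    i0 = int(onset * fs)
    baseline = eda[i0 - int(baseline_s * fs):i0].mean()
    window = eda[i0 + int(response_s[0] * fs):i0 + int(response_s[1] * fs)]
    return window.max() - baseline

# Hypothetical 10 s of electrodermal data with a stimulus at t = 5 s
# and a synthetic response peaking around 1.5 s later.
fs = 100
t = np.arange(0, 10, 1 / fs)
eda = 2.0 + 0.3 * np.exp(-((t - 6.5) ** 2) / 0.5)
print(f"SCR amplitude: {scr_amplitude(eda, onset=5.0, fs=fs):.3f} uS")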
<p>Interestingly, the SCRs elicited by very unexpected chords (compared to expected chords) tended to be more pronounced when these chords were played with expression than when played without expression. That is, the emotional response elicited by a (very) unexpected harmony appears to be enhanced when the harmony is perceived in a musical context played with emotional expression (using variations in loudness and tempo). Note that purely physical differences between the expressive and non-expressive conditions cannot account for this effect, because we compared the difference in SCRs between expected and (very) unexpected harmonies separately for the expressive and for the non-expressive music.</p>
<p>No significant changes in the inter-heartbeat interval (IBI) for the three different types of chords were observed (as in a recent study by Steinbeis et al.
<xref ref-type="bibr" rid="pone.0002631-Steinbeis1">[18]</xref>
), and no differences in IBIs were measured between expressive and non-expressive chords. We surmise that the lack of IBI changes is due to the short duration of the events of interest (i.e., of the chords), and perhaps due to the relatively small differences in emotional valence between chords (although these differences were statistically significant in the behavioural data). Thus, SCRs (which differed between expected, unexpected, and very unexpected chords) appear to be more suitable to investigate sympathetic effects of emotional responses to harmonic irregularities in music.</p>
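<p>For concreteness, a small sketch of how IBIs relate to heart rate, with hypothetical R-peak times: each IBI is the time between successive R-peaks, and mean heart rate in beats per minute is 60 divided by the mean IBI in seconds.</p>
import numpy as np

# Hypothetical R-peak times (seconds) from an ECG recording.
r_peaks = np.array([0.00, 0.82, 1.65, 2.49, 3.30, 4.14])
ibis = np.diff(r_peaks)  # inter-heartbeat intervals, seconds per beat
print("IBIs (s):", np.round(ibis, 3))
print(f"mean heart rate: {60.0 / ibis.mean():.1f} bpm")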
</sec>
<sec id="s4b">
<title>Electroencephalogram</title>
<sec id="s4b1">
<title>ERAN</title>
<p>Compared to expected chords, unexpected (original) chords elicited an ERAN. This is the first evidence showing that unexpected chords, as arranged by a composer, and as played by pianists, elicit an ERAN. Previous studies have either used rather artificial musical stimuli (see
<xref ref-type="sec" rid="s1">Introduction</xref>
) to have maximum control over the acoustic properties of the stimuli, or used original excerpts that were played by a computer
<xref ref-type="bibr" rid="pone.0002631-Steinbeis1">[18]</xref>
, or used excerpts played by a musician, but with unexpected musical events that were arranged by the experimenters (and not by the composer
<xref ref-type="bibr" rid="pone.0002631-Koelsch5">[15]</xref>
,
<xref ref-type="bibr" rid="pone.0002631-Besson1">[16]</xref>
). The present data thus show brain responses to authentic musical stimuli, i.e., to music as it was actually written by the composer and played by a pianist. The ERAN elicited by very unexpected chords was nominally larger than the ERAN elicited by unexpected chords (as hypothesized, although this difference was not statistically significant). This tentatively replicates results of previous studies
<xref ref-type="bibr" rid="pone.0002631-Koelsch1">[5]</xref>
,
<xref ref-type="bibr" rid="pone.0002631-Steinbeis1">[18]</xref>
showing that the amplitude of the ERAN increases with increasing harmonic irregularity, and thus unexpectedness, of a chord.</p>
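<p>As a rough sketch of how such an ERAN-like effect is quantified (random placeholder arrays with an assumed sampling rate and trial count, not the study's data): single-trial epochs are averaged per condition, the expected-chord ERP is subtracted from the unexpected-chord ERP, and the mean of the difference wave is taken in the component's time window.</p>
import numpy as np

fs = 500                               # assumed sampling rate (Hz)
t = np.arange(-0.2, 0.6, 1 / fs)       # epoch from -200 to 600 ms
rng = np.random.default_rng(1)
epochs_expected = rng.normal(size=(80, t.size))    # trials x samples
epochs_unexpected = rng.normal(size=(80, t.size))  # placeholder EEG

erp_expected = epochs_expected.mean(axis=0)
erp_unexpected = epochs_unexpected.mean(axis=0)
difference = erp_unexpected - erp_expected         # ERAN-like difference wave

win = (t >= 0.16) & (t <= 0.18)                    # 160-180 ms window
print(f"mean difference amplitude in window: {difference[win].mean():.3f}")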
<p>Note that it is unlikely that the ERAN is simply an attention effect on the N100 component, because the ERAN latency was around 160–180 ms, which is well beyond the N100 latency. Also note that, due to its slightly larger magnitude, the ERAN elicited by the very unexpected chords extended into the P2-range, which is in agreement with a number of previous studies
<xref ref-type="bibr" rid="pone.0002631-Koelsch1">[5]</xref>
,
<xref ref-type="bibr" rid="pone.0002631-Leino1">[20]</xref>
,
<xref ref-type="bibr" rid="pone.0002631-Loui1">[21]</xref>
,
<xref ref-type="bibr" rid="pone.0002631-Koelsch3">[13]</xref>
,
<xref ref-type="bibr" rid="pone.0002631-Koelsch7">[30]</xref>
.</p>
<p>Importantly, the ERAN did not differ between the expressive and the non-expressive condition: it was the same whether elicited in a musical context played with or without emotional expression. This indicates that music-syntactic processing (as reflected in the ERAN) does not interact with the emotional expression of a musical excerpt. That is, the generation of the ERAN appears to be independent of the increased emotional response elicited by an unexpected harmony played with emotional expression (as indicated by the SCRs and the behavioural data). This suggests that the neural mechanisms of music-syntactic processing operate independently of the emotional factors communicated by musical performance. Note that the outcome of music-syntactic processing (which leads to the perception of unexpectedness of a harmonically irregular chord) nevertheless has clear effects on emotional processes (as shown by the SCRs to [very] unexpected chords, which were stronger than those to expected chords, see above). The finding that the ERAN did not differ between the expressive and non-expressive condition justifies the use of non-expressive musical stimuli in other studies on music-syntactic processing, in order to have maximum acoustic control over the stimulus.</p>
</sec>
<sec id="s4b2">
<title>N5</title>
<p>In both the unexpected and the very unexpected condition, the ERAN was followed by a late negativity, the N5
<xref ref-type="bibr" rid="pone.0002631-Koelsch1">[5]</xref>
,
<xref ref-type="bibr" rid="pone.0002631-Leino1">[20]</xref>
<xref ref-type="bibr" rid="pone.0002631-Miranda1">[23]</xref>
. The N5 is taken to reflect harmonic integration, which is at least partly related to the processing of musical meaning
<xref ref-type="bibr" rid="pone.0002631-Koelsch6">[24]</xref>
,
<xref ref-type="bibr" rid="pone.0002631-Koelsch1">[5]</xref>
. For example, Steinbeis and Koelsch
<xref ref-type="bibr" rid="pone.0002631-Steinbeis2">[32]</xref>
showed an interaction between the N5 and the N400 (elicited by semantic incongruities in language), suggesting that the integration of expected and unexpected events into a larger, meaningful musical context partly consumes resources that are also engaged in the processing of linguistic semantics.</p>
<p>The amplitude of the N5 effect (expected subtracted from [very] unexpected chords) did not clearly differ between the expressive and the non-expressive condition. However, the N5 potentials clearly differed between all expressive and all non-expressive chords (
<xref ref-type="fig" rid="pone-0002631-g005">Figure 5</xref>
). In this regard, it is important to note that the N5 is not only elicited by harmonically irregular (unexpected) chords, but also by harmonically regular (expected) chords [5, 22; see also
<xref ref-type="fig" rid="pone-0002631-g004">Figure 4</xref>
]: Although irregular chords usually require a greater degree of harmonic integration (leading to a larger N5), regular chords also require integration into the harmonic context. Thus, both irregular (unexpected) and regular (expected) chords may elicit N5 potentials (the N5 elicited by unexpected chords usually being larger than the N5 evoked by expected ones, see
<xref ref-type="fig" rid="pone-0002631-g004">Figure 4</xref>
). The finding that the N5 (averaged across expected, unexpected, and very unexpected chords) was larger for chords played with emotional expression compared to chords played without such expression indicates that the N5 can be modulated by emotional expression. The reason for this modulation is presumably that chords played with expression contain more meaning information (because they contain information about the emotions and the intentions of the performer,
<xref ref-type="bibr" rid="pone.0002631-Juslin2">[27]</xref>
), resulting in larger N5 potentials for expressive than for non-expressive chords. Notably, no effect of expression was found in the ERAN time window. Thus, whereas the ERAN is not influenced by emotional expression of a musical excerpt, the neural mechanisms underlying the generation of the N5 can be influenced by emotional expression.</p>
</sec>
</sec>
<sec id="s4c">
<title>Conclusions</title>
<p>Our data show a number of physiological responses elicited by music played as naturally as it would be encountered in real-life listening situations. Unexpected harmonies (as actually arranged by composers) elicited both ERAN and N5 potentials. A number of previous studies used musical sequences that were played without expression under computerized control, raising the question of whether ERAN and N5 are simply experimental artefacts. The data from the expressive condition show that the neural mechanisms underlying the generation of both ERAN and N5 are also involved in the processing of real naturalistic music. The SCRs elicited by very unexpected chords (compared to expected chords) tended to be influenced by the emotional expression of the musical performances, suggesting that emotional effects of the processing of unexpected chords are slightly stronger when elicited by expressive music. By contrast, the ERAN was not modulated by emotional expression. This suggests that the neural mechanisms of music-syntactic processing operate independently of the emotional factors communicated by musical performance, and that the ERAN thus reflects a cognitive process which is independent of the emotional qualities of a stimulus. This justifies the use of stimuli without emotional expression to investigate the cognitive processing of musical structure. Notably, the outcome of music-syntactic processing (as reflected in the ERAN) leads to the perception of unexpectedness of a harmonically irregular chord, and has clear effects on emotional processes, as reflected by the SCRs to unexpected chords. The N5 elicited by chords in general (regardless of their chord function) was modulated by emotional expression, presumably because chords played with expression contain additional meaning information about the emotions and the intentions of the performer. Thus, our data also indicate that musical expression affects the neural mechanisms underlying harmonic integration and the processing of musical meaning.</p>
<p>In sum, the present results suggest that (a) the neural mechanisms underlying the generation of both ERAN and N5 are involved in the processing of real naturalistic music, (b) music-syntactic processing as reflected in the ERAN is a cognitive process which is independent of the emotional qualities of a stimulus, (c) emotional effects of the processing of unexpected chords are slightly stronger when elicited by expressive music, and (d) musical expression affects the neural mechanisms underlying harmonic integration and the processing of musical meaning.</p>
</sec>
</sec>
</body>
<back>
<ref-list>
<title>References</title>
<ref id="pone.0002631-Blood1">
<label>1</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Blood</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Zatorre</surname>
<given-names>RJ</given-names>
</name>
</person-group>
<year>2001</year>
<article-title>Intensely pleasurable responses to music correlate with activity in brain regions implicated in reward and emotion.</article-title>
<source>PNAS</source>
<volume>98</volume>
<fpage>11818</fpage>
<lpage>11823</lpage>
<pub-id pub-id-type="pmid">11573015</pub-id>
</citation>
</ref>
<ref id="pone.0002631-Brown1">
<label>2</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Brown</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Martinez</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Parsons</surname>
<given-names>LM</given-names>
</name>
</person-group>
<year>2004</year>
<article-title>Passive music listening spontaneously engages limbic and paralimbic systems.</article-title>
<source>Neuroreport</source>
<volume>15</volume>
<fpage>2033</fpage>
<lpage>2037</lpage>
<pub-id pub-id-type="pmid">15486477</pub-id>
</citation>
</ref>
<ref id="pone.0002631-Sammler1">
<label>3</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sammler</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Grigutsch</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Fritz</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Koelsch</surname>
<given-names>S</given-names>
</name>
</person-group>
<year>2007</year>
<article-title>Music and emotion: Electrophysiological correlates of the processing of pleasant and unpleasant music.</article-title>
<source>Psychophysiology</source>
<volume>44</volume>
<fpage>293</fpage>
<lpage>304</lpage>
<pub-id pub-id-type="pmid">17343712</pub-id>
</citation>
</ref>
<ref id="pone.0002631-Janata1">
<label>4</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Janata</surname>
<given-names>P</given-names>
</name>
</person-group>
<year>1995</year>
<article-title>ERP measures assay the degree of expectancy violation of harmonic contexts in music.</article-title>
<source>J Cogn Neurosci</source>
<volume>7</volume>
<fpage>153</fpage>
<lpage>164</lpage>
</citation>
</ref>
<ref id="pone.0002631-Koelsch1">
<label>5</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Koelsch</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Gunter</surname>
<given-names>TC</given-names>
</name>
<name>
<surname>Friederici</surname>
<given-names>AD</given-names>
</name>
<name>
<surname>Schröger</surname>
<given-names>E</given-names>
</name>
</person-group>
<year>2000</year>
<article-title>Brain Indices of Music Processing: ‘Non-musicians’ are musical.</article-title>
<source>J Cogn Neurosci</source>
<volume>12</volume>
<fpage>520</fpage>
<lpage>541</lpage>
</citation>
</ref>
<ref id="pone.0002631-Regnault1">
<label>6</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Regnault</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Bigand</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Besson</surname>
<given-names>M</given-names>
</name>
</person-group>
<year>2001</year>
<article-title>Different brain mechanisms mediate sensitivity to sensory consonance and harmonic context: Evidence from auditory event-related brain potentials.</article-title>
<source>J Cogn Neurosci</source>
<volume>13(2)</volume>
<fpage>241</fpage>
<lpage>255</lpage>
<pub-id pub-id-type="pmid">11244549</pub-id>
</citation>
</ref>
<ref id="pone.0002631-Tillmann1">
<label>7</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tillmann</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Bharucha</surname>
<given-names>JJ</given-names>
</name>
</person-group>
<year>2002</year>
<article-title>Effect of harmonic relatedness on the detection of temporal asynchronies.</article-title>
<source>Percept Psychophys</source>
<volume>64(4)</volume>
<fpage>640</fpage>
<lpage>649</lpage>
<pub-id pub-id-type="pmid">12132764</pub-id>
</citation>
</ref>
<ref id="pone.0002631-Bigand1">
<label>8</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bigand</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Poulin</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Tillmann</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Madurell</surname>
<given-names>F</given-names>
</name>
<name>
<surname>D'Adamo</surname>
<given-names>DA</given-names>
</name>
</person-group>
<year>2003</year>
<article-title>Sensory versus cognitive components in harmonic priming.</article-title>
<source>J Exp Psych: Human Perc Perf</source>
<volume>29</volume>
<fpage>159</fpage>
<lpage>171</lpage>
</citation>
</ref>
<ref id="pone.0002631-Heinke1">
<label>9</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Heinke</surname>
<given-names>W</given-names>
</name>
<name>
<surname>Kenntner</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Gunter</surname>
<given-names>TC</given-names>
</name>
<name>
<surname>Sammler</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Olthoff</surname>
<given-names>D</given-names>
</name>
<etal></etal>
</person-group>
<year>2004</year>
<article-title>Differential Effects of Increasing Propofol Sedation on Frontal and Temporal Cortices: An ERP study.</article-title>
<source>Anesthesiology</source>
<volume>100</volume>
<fpage>617</fpage>
<lpage>625</lpage>
<pub-id pub-id-type="pmid">15108977</pub-id>
</citation>
</ref>
<ref id="pone.0002631-Koelsch2">
<label>10</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Koelsch</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Gunter</surname>
<given-names>TC</given-names>
</name>
<name>
<surname>Wittfoth</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Sammler</surname>
<given-names>D</given-names>
</name>
</person-group>
<year>2005</year>
<article-title>Interaction between Syntax Processing in Language and in Music: An ERP Study.</article-title>
<source>J Cogn Neurosci</source>
<volume>17</volume>
<fpage>1565</fpage>
<lpage>1579</lpage>
</citation>
</ref>
<ref id="pone.0002631-Tillmann2">
<label>11</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tillmann</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Koelsch</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Escoffier</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Bigand</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Lalitte</surname>
<given-names>P</given-names>
</name>
<etal></etal>
</person-group>
<year>2006</year>
<article-title>Cognitive priming in sung and instrumental music: activation of inferior frontal cortex.</article-title>
<source>Neuroimage</source>
<volume>31(4)</volume>
<fpage>1771</fpage>
<lpage>1782</lpage>
<pub-id pub-id-type="pmid">16624581</pub-id>
</citation>
</ref>
<ref id="pone.0002631-PoulinCharronnat1">
<label>12</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Poulin-Charronnat</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Bigand</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Koelsch</surname>
<given-names>S</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>Processing of musical syntax tonic versus subdominant: an event-related potential study.</article-title>
<source>J Cogn Neurosci</source>
<volume>18(9)</volume>
<fpage>1545</fpage>
<lpage>1554</lpage>
<pub-id pub-id-type="pmid">16989554</pub-id>
</citation>
</ref>
<ref id="pone.0002631-Koelsch3">
<label>13</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Koelsch</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Jentschke</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Sammler</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Mietchen</surname>
<given-names>D</given-names>
</name>
</person-group>
<year>2007</year>
<article-title>Untangling syntactic and sensory processing: An ERP study of music perception.</article-title>
<source>Psychophysiology</source>
<volume>44</volume>
<fpage>476</fpage>
<lpage>490</lpage>
<pub-id pub-id-type="pmid">17433099</pub-id>
</citation>
</ref>
<ref id="pone.0002631-Koelsch4">
<label>14</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Koelsch</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Jentschke</surname>
<given-names>S</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>Short-term effects of processing musical syntax: An ERP study.</article-title>
<source>Brain Research</source>
<comment>in press</comment>
</citation>
</ref>
<ref id="pone.0002631-Koelsch5">
<label>15</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Koelsch</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Mulder</surname>
<given-names>J</given-names>
</name>
</person-group>
<year>2002</year>
<article-title>Electric brain responses to inappropriate harmonies during listening to expressive music.</article-title>
<source>Clinical Neurophysiology</source>
<volume>113</volume>
<fpage>862</fpage>
<lpage>869</lpage>
<pub-id pub-id-type="pmid">12048045</pub-id>
</citation>
</ref>
<ref id="pone.0002631-Besson1">
<label>16</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Besson</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Faïta</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Peretz</surname>
<given-names>I</given-names>
</name>
<name>
<surname>Bonnel</surname>
<given-names>AM</given-names>
</name>
<name>
<surname>Requin</surname>
<given-names>J</given-names>
</name>
</person-group>
<year>1998</year>
<article-title>Singing in the brain: Independence of Lyrics and Tunes.</article-title>
<source>Psychological Science</source>
<volume>9(6)</volume>
<fpage>494</fpage>
<lpage>498</lpage>
</citation>
</ref>
<ref id="pone.0002631-Besson2">
<label>17</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Besson</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Faïta</surname>
<given-names>F</given-names>
</name>
</person-group>
<year>1995</year>
<article-title>An event-related potential (ERP) study of musical expectancy: Comparison of musicians with nonmusicians.</article-title>
<source>J Exp Psych: Human Perc Perf</source>
<volume>21(6)</volume>
<fpage>1278</fpage>
<lpage>1296</lpage>
</citation>
</ref>
<ref id="pone.0002631-Steinbeis1">
<label>18</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Steinbeis</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Koelsch</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Sloboda</surname>
<given-names>JA</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>The role of harmonic expectancy violations in musical emotions: Evidence from subjective, physiological, and neural responses.</article-title>
<source>J Cogn Neurosci</source>
<volume>18</volume>
<fpage>1380</fpage>
<lpage>1393</lpage>
</citation>
</ref>
<ref id="pone.0002631-Patel1">
<label>19</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Patel</surname>
<given-names>AD</given-names>
</name>
<name>
<surname>Gibson</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Ratner</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Besson</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Holcomb</surname>
<given-names>P</given-names>
</name>
</person-group>
<year>1998</year>
<article-title>Processing syntactic relations in language and music: An event-related potential study.</article-title>
<source>J Cogn Neurosci</source>
<volume>10(6)</volume>
<fpage>717</fpage>
<lpage>733</lpage>
</citation>
</ref>
<ref id="pone.0002631-Leino1">
<label>20</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Leino</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Brattico</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Tervaniemi</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Vuust</surname>
<given-names>P</given-names>
</name>
</person-group>
<year>2007</year>
<article-title>Representation of harmony rules in the human brain: further evidence from event-related potentials.</article-title>
<source>Brain Res</source>
<volume>1142</volume>
<fpage>169</fpage>
<lpage>177</lpage>
<pub-id pub-id-type="pmid">17300763</pub-id>
</citation>
</ref>
<ref id="pone.0002631-Loui1">
<label>21</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Loui</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Grent-'t Jong</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Torpey</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Woldorff</surname>
<given-names>M</given-names>
</name>
</person-group>
<year>2005</year>
<article-title>Effects of attention on the neural processing of harmonic syntax in Western music.</article-title>
<source>Cogn Brain Res</source>
<volume>25</volume>
<fpage>678</fpage>
<lpage>687</lpage>
</citation>
</ref>
<ref id="pone.0002631-Schn1">
<label>22</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Schön</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Besson</surname>
<given-names>M</given-names>
</name>
</person-group>
<year>2005</year>
<article-title>Visually induced auditory expectancy in music reading: a behavioural and electrophysiological study.</article-title>
<source>J Cogn Neurosci</source>
<volume>17(4)</volume>
<fpage>694</fpage>
<lpage>705</lpage>
</citation>
</ref>
<ref id="pone.0002631-Miranda1">
<label>23</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Miranda</surname>
<given-names>RA</given-names>
</name>
<name>
<surname>Ullman</surname>
<given-names>MT</given-names>
</name>
</person-group>
<year>2007</year>
<article-title>Double dissociation between rules and memory in music: An event-related potential study.</article-title>
<source>NeuroImage</source>
<volume>38</volume>
<fpage>331</fpage>
<lpage>345</lpage>
<pub-id pub-id-type="pmid">17855126</pub-id>
</citation>
</ref>
<ref id="pone.0002631-Koelsch6">
<label>24</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Koelsch</surname>
<given-names>S</given-names>
</name>
</person-group>
<year>2005</year>
<article-title>Neural substrates of processing syntax and semantics in music.</article-title>
<source>Curr Op Neurobio</source>
<volume>15</volume>
<fpage>1</fpage>
<lpage>6</lpage>
</citation>
</ref>
<ref id="pone.0002631-Meyer1">
<label>25</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Meyer</surname>
<given-names>LB</given-names>
</name>
</person-group>
<year>1956</year>
<source>Emotion and Meaning in Music</source>
<publisher-loc>Chicago</publisher-loc>
<publisher-name>University of Chicago Press</publisher-name>
</citation>
</ref>
<ref id="pone.0002631-Juslin1">
<label>26</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Juslin</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Västfjäll</surname>
<given-names>D</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>Emotional responses to music: The need to consider underlying mechanisms.</article-title>
<source>Behavioral and Brain Sciences</source>
<comment>in press</comment>
</citation>
</ref>
<ref id="pone.0002631-Juslin2">
<label>27</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Juslin</surname>
<given-names>P</given-names>
</name>
</person-group>
<year>2003</year>
<article-title>Communicating emotion in music performance: A review and theoretical framework.</article-title>
<person-group person-group-type="editor">
<name>
<surname>Juslin</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Sloboda</surname>
<given-names>JA</given-names>
</name>
</person-group>
<source>Music and emotion</source>
<publisher-loc>New York</publisher-loc>
<publisher-name>Oxford University Press</publisher-name>
<fpage>309</fpage>
<lpage>337</lpage>
</citation>
</ref>
<ref id="pone.0002631-Oldfield1">
<label>28</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Oldfield</surname>
<given-names>RC</given-names>
</name>
</person-group>
<year>1971</year>
<article-title>The assessment and analysis of handedness: the Edinburgh inventory.</article-title>
<source>Neuropsychologia</source>
<volume>9(1)</volume>
<fpage>97</fpage>
<lpage>113</lpage>
<pub-id pub-id-type="pmid">5146491</pub-id>
</citation>
</ref>
<ref id="pone.0002631-Bradley1">
<label>29</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bradley</surname>
<given-names>MM</given-names>
</name>
<name>
<surname>Lang</surname>
<given-names>PJ</given-names>
</name>
</person-group>
<year>1994</year>
<article-title>Measuring emotion: The Self-Assessment Manikin and the semantic differential.</article-title>
<source>J Beh Ther Exp Psychiatry</source>
<volume>25</volume>
<fpage>49</fpage>
<lpage>59</lpage>
</citation>
</ref>
<ref id="pone.0002631-Koelsch7">
<label>30</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Koelsch</surname>
<given-names>S</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>Music-syntactic Processing and Auditory Memory – Similarities and Differences between ERAN and MMN.</article-title>
<source>Psychophysiology</source>
<comment>in press</comment>
</citation>
</ref>
<ref id="pone.0002631-Polich1">
<label>31</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Polich</surname>
<given-names>J</given-names>
</name>
</person-group>
<year>2007</year>
<article-title>Updating P300: An integrative theory of P3a and P3b.</article-title>
<source>Clinical Neurophysiology</source>
<volume>118(10)</volume>
<fpage>2128</fpage>
<lpage>2148</lpage>
<pub-id pub-id-type="pmid">17573239</pub-id>
</citation>
</ref>
<ref id="pone.0002631-Steinbeis2">
<label>32</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Steinbeis</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Koelsch</surname>
<given-names>S</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>Shared neural resources between music and language indicate semantic processing of musical tension-resolution patterns.</article-title>
<source>Cerebral Cortex</source>
<volume>18(5)</volume>
<fpage>1169</fpage>
<lpage>78</lpage>
<pub-id pub-id-type="pmid">17720685</pub-id>
</citation>
</ref>
</ref-list>
<fn-group>
<fn fn-type="conflict">
<p>
<bold>Competing Interests: </bold>
The authors have declared that no competing interests exist.</p>
</fn>
<fn fn-type="financial-disclosure">
<p>
<bold>Funding: </bold>
The study was supported in part by the German Research Foundation (Deutsche Forschungsgemeinschaft) through grant KO 2266/2-1. The Deutsche Forschungsgemeinschaft played no role in the design and conduct of the study, in the collection, analysis, and interpretation of the data, and in the preparation, review, or approval of the manuscript.</p>
</fn>
</fn-group>
</back>
</pmc>
<affiliations>
<list>
<country>
<li>Royaume-Uni</li>
</country>
<region>
<li>Angleterre</li>
<li>Sussex de l'Est</li>
</region>
<settlement>
<li>Brighton</li>
<li>Falmer</li>
</settlement>
<orgName>
<li>Université du Sussex</li>
</orgName>
</list>
<tree>
<noCountry>
<name sortKey="Kilches, Simone" sort="Kilches, Simone" uniqKey="Kilches S" first="Simone" last="Kilches">Simone Kilches</name>
<name sortKey="Schelinski, Stefanie" sort="Schelinski, Stefanie" uniqKey="Schelinski S" first="Stefanie" last="Schelinski">Stefanie Schelinski</name>
<name sortKey="Steinbeis, Nikolaus" sort="Steinbeis, Nikolaus" uniqKey="Steinbeis N" first="Nikolaus" last="Steinbeis">Nikolaus Steinbeis</name>
</noCountry>
<country name="Royaume-Uni">
<region name="Angleterre">
<name sortKey="Koelsch, Stefan" sort="Koelsch, Stefan" uniqKey="Koelsch S" first="Stefan" last="Koelsch">Stefan Koelsch</name>
</region>
</country>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Wicri/Musique/explor/OperaV1/Data/Ncbi/Merge
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000506 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd -nk 000506 | SxmlIndent | more

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Wicri/Musique
   |area=    OperaV1
   |flux=    Ncbi
   |étape=   Merge
   |type=    RBID
   |clé=     PMC:2435625
   |texte=   Effects of Unexpected Chords and of Performer's Expression on Brain Responses and Electrodermal Activity
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/RBID.i   -Sk "pubmed:18612459" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd   \
       | NlmPubMed2Wicri -a OperaV1 

Wicri

This area was generated with Dilib version V0.6.21.
Data generation: Thu Apr 14 14:59:05 2016. Site generation: Thu Jan 4 23:09:23 2024