Xenakis exploration server

Warning: this site is under development!
Warning: this site is generated by computational means from raw corpora.
The information has therefore not been validated.

What Constitutes a Phrase in Sound-Based Music? A Mixed-Methods Investigation of Perception and Acoustics

Internal identifier: 000011 (Pmc/Corpus); previous: 000010; next: 000012


Authors: Kirk N. Olsen; Roger T. Dean; Yvonne Leung

Source:

RBID: PMC:5172564

Abstract

Phrasing facilitates the organization of auditory information and is central to speech and music. Not surprisingly, aspects of changing intensity, rhythm, and pitch are key determinants of musical phrases and their boundaries in instrumental note-based music. Different kinds of speech (such as tone- vs. stress-languages) share these features in different proportions and form an instructive comparison. However, little is known about whether or how musical phrasing is perceived in sound-based music, where the basic musical unit from which a piece is created is commonly non-instrumental continuous sounds, rather than instrumental discontinuous notes. This issue forms the target of the present paper. Twenty participants (17 untrained in music) were presented with six stimuli derived from sound-based music, note-based music, and environmental sound. Their task was to indicate each occurrence of a perceived phrase and qualitatively describe key characteristics of the stimulus associated with each phrase response. It was hypothesized that sound-based music does elicit phrase perception, and that this is primarily associated with temporal changes in intensity and timbre, rather than rhythm and pitch. Results supported this hypothesis. Qualitative analysis of participant descriptions showed that for sound-based music, the majority of perceived phrases were associated with intensity or timbral change. For the note-based piano piece, rhythm was the main theme associated with perceived musical phrasing. We modeled the occurrence in time of perceived musical phrases with recurrent event ‘hazard’ analyses using time-series data representing acoustic predictors associated with intensity, spectral flatness, and rhythmic density. Acoustic intensity and timbre (represented here by spectral flatness) were strong predictors of perceived musical phrasing in sound-based music, and rhythm was only predictive for the piano piece. A further analysis including five additional spectral measures linked to timbre strengthened the models. Overall, results show that even when little of the pitch and rhythm information important for phrasing in note-based music is available, phrasing is still perceived, primarily in response to changes of intensity and timbre. Implications for electroacoustic music composition and music recommender systems are discussed.
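A minimal sketch of the kind of recurrent-event, time-varying "hazard" model described in the abstract, using the Python `lifelines` package; the data frame, column names, and values below are illustrative assumptions, not the authors' data or code.

```python
# Hypothetical sketch: perceived phrase onsets as recurrent events with
# time-varying acoustic covariates (intensity, spectral flatness, rhythmic density).
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# Long-format data: one row per participant per analysis window, with start/stop
# times, acoustic predictors for that window, and whether a phrase was reported.
df = pd.DataFrame({
    "id":        [1, 1, 1, 2, 2, 2],                     # participant identifier
    "start":     [0.0, 2.0, 4.0, 0.0, 2.0, 4.0],         # window start (s)
    "stop":      [2.0, 4.0, 6.0, 2.0, 4.0, 6.0],         # window end (s)
    "intensity": [62.1, 65.4, 58.9, 61.8, 66.0, 59.2],   # e.g. dB
    "flatness":  [0.12, 0.30, 0.22, 0.11, 0.28, 0.25],   # spectral flatness (0-1)
    "rhythm":    [1.5, 1.5, 0.5, 1.6, 1.4, 0.4],         # event density per second
    "phrase":    [0, 1, 0, 0, 1, 0],                     # phrase response in window?
})

ctv = CoxTimeVaryingFitter()
ctv.fit(df, id_col="id", event_col="phrase", start_col="start", stop_col="stop")
ctv.print_summary()  # hazard ratios show how each predictor shifts phrase likelihood
```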


Url:
DOI: 10.1371/journal.pone.0167643
PubMed: 27997625
PubMed Central: 5172564

Links to Exploration step

PMC:5172564

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">What Constitutes a Phrase in Sound-Based Music? A Mixed-Methods Investigation of Perception and Acoustics</title>
<author>
<name sortKey="Olsen, Kirk N" sort="Olsen, Kirk N" uniqKey="Olsen K" first="Kirk N." last="Olsen">Kirk N. Olsen</name>
<affiliation>
<nlm:aff id="aff001">
<addr-line>The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, New South Wales, Australia</addr-line>
</nlm:aff>
</affiliation>
<affiliation>
<nlm:aff id="aff002">
<addr-line>Department of Psychology, Macquarie University, Sydney, New South Wales, Australia</addr-line>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Dean, Roger T" sort="Dean, Roger T" uniqKey="Dean R" first="Roger T." last="Dean">Roger T. Dean</name>
<affiliation>
<nlm:aff id="aff001">
<addr-line>The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, New South Wales, Australia</addr-line>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Leung, Yvonne" sort="Leung, Yvonne" uniqKey="Leung Y" first="Yvonne" last="Leung">Yvonne Leung</name>
<affiliation>
<nlm:aff id="aff001">
<addr-line>The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, New South Wales, Australia</addr-line>
</nlm:aff>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">27997625</idno>
<idno type="pmc">5172564</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5172564</idno>
<idno type="RBID">PMC:5172564</idno>
<idno type="doi">10.1371/journal.pone.0167643</idno>
<date when="2016">2016</date>
<idno type="wicri:Area/Pmc/Corpus">000011</idno>
<idno type="wicri:explorRef" wicri:stream="Pmc" wicri:step="Corpus" wicri:corpus="PMC">000011</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">What Constitutes a Phrase in Sound-Based Music? A Mixed-Methods Investigation of Perception and Acoustics</title>
<author>
<name sortKey="Olsen, Kirk N" sort="Olsen, Kirk N" uniqKey="Olsen K" first="Kirk N." last="Olsen">Kirk N. Olsen</name>
<affiliation>
<nlm:aff id="aff001">
<addr-line>The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, New South Wales, Australia</addr-line>
</nlm:aff>
</affiliation>
<affiliation>
<nlm:aff id="aff002">
<addr-line>Department of Psychology, Macquarie University, Sydney, New South Wales, Australia</addr-line>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Dean, Roger T" sort="Dean, Roger T" uniqKey="Dean R" first="Roger T." last="Dean">Roger T. Dean</name>
<affiliation>
<nlm:aff id="aff001">
<addr-line>The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, New South Wales, Australia</addr-line>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Leung, Yvonne" sort="Leung, Yvonne" uniqKey="Leung Y" first="Yvonne" last="Leung">Yvonne Leung</name>
<affiliation>
<nlm:aff id="aff001">
<addr-line>The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, New South Wales, Australia</addr-line>
</nlm:aff>
</affiliation>
</author>
</analytic>
<series>
<title level="j">PLoS ONE</title>
<idno type="eISSN">1932-6203</idno>
<imprint>
<date when="2016">2016</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>Phrasing facilitates the organization of auditory information and is central to speech and music. Not surprisingly, aspects of changing intensity, rhythm, and pitch are key determinants of musical phrases and their boundaries in instrumental note-based music. Different kinds of speech (such as tone- vs. stress-languages) share these features in different proportions and form an instructive comparison. However, little is known about whether or how musical phrasing is perceived in sound-based music, where the basic musical unit from which a piece is created is commonly non-instrumental continuous
<italic>sounds</italic>
, rather than instrumental discontinuous
<italic>notes</italic>
. This issue forms the target of the present paper. Twenty participants (17 untrained in music) were presented with six stimuli derived from sound-based music, note-based music, and environmental sound. Their task was to indicate each occurrence of a perceived phrase and qualitatively describe key characteristics of the stimulus associated with each phrase response. It was hypothesized that sound-based music does elicit phrase perception, and that this is primarily associated with temporal changes in intensity and timbre, rather than rhythm and pitch. Results supported this hypothesis. Qualitative analysis of participant descriptions showed that for sound-based music, the majority of perceived phrases were associated with intensity or timbral change. For the note-based piano piece, rhythm was the main theme associated with perceived musical phrasing. We modeled the occurrence in time of perceived musical phrases with recurrent event ‘hazard’ analyses using time-series data representing acoustic predictors associated with intensity, spectral flatness, and rhythmic density. Acoustic intensity and timbre (represented here by spectral flatness) were strong predictors of perceived musical phrasing in sound-based music, and rhythm was only predictive for the piano piece. A further analysis including five additional spectral measures linked to timbre strengthened the models. Overall, results show that even when little of the pitch and rhythm information important for phrasing in note-based music is available, phrasing is still perceived, primarily in response to changes of intensity and timbre. Implications for electroacoustic music composition and music recommender systems are discussed.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Knosche, Tr" uniqKey="Knosche T">TR Knösche</name>
</author>
<author>
<name sortKey="Neuhaus, C" uniqKey="Neuhaus C">C Neuhaus</name>
</author>
<author>
<name sortKey="Haueisen, J" uniqKey="Haueisen J">J Haueisen</name>
</author>
<author>
<name sortKey="Alter, K" uniqKey="Alter K">K Alter</name>
</author>
<author>
<name sortKey="Maess, B" uniqKey="Maess B">B Maess</name>
</author>
<author>
<name sortKey="Witte, Ow" uniqKey="Witte O">OW Witte</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bregman, As" uniqKey="Bregman A">AS Bregman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cutler, A" uniqKey="Cutler A">A Cutler</name>
</author>
<author>
<name sortKey="Altmann, Gtm" uniqKey="Altmann G">GTM Altmann</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Streeter, La" uniqKey="Streeter L">LA Streeter</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jusczyk, Pw" uniqKey="Jusczyk P">PW Jusczyk</name>
</author>
<author>
<name sortKey="Hirsh Pasek, K" uniqKey="Hirsh Pasek K">K Hirsh-Pasek</name>
</author>
<author>
<name sortKey="Kemler Nelson, Dg" uniqKey="Kemler Nelson D">DG Kemler Nelson</name>
</author>
<author>
<name sortKey="Kennedy, Lj" uniqKey="Kennedy L">LJ Kennedy</name>
</author>
<author>
<name sortKey="Woodward, A" uniqKey="Woodward A">A Woodward</name>
</author>
<author>
<name sortKey="Piwoz, J" uniqKey="Piwoz J">J Piwoz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Palmer, C" uniqKey="Palmer C">C Palmer</name>
</author>
<author>
<name sortKey="Krumhansl, Cl" uniqKey="Krumhansl C">CL Krumhansl</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Palmer, C" uniqKey="Palmer C">C Palmer</name>
</author>
<author>
<name sortKey="Krumhansl, Cl" uniqKey="Krumhansl C">CL Krumhansl</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Deutsch, D" uniqKey="Deutsch D">D Deutsch</name>
</author>
<author>
<name sortKey="Feroe, J" uniqKey="Feroe J">J Feroe</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stoffer, Th" uniqKey="Stoffer T">TH Stoffer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Krumhansl, Cl" uniqKey="Krumhansl C">CL Krumhansl</name>
</author>
<author>
<name sortKey="Jusczyk, Pw" uniqKey="Jusczyk P">PW Jusczyk</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dean, Rt" uniqKey="Dean R">RT Dean</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Landy, L" uniqKey="Landy L">L Landy</name>
</author>
<author>
<name sortKey="Dean, Rt" uniqKey="Dean R">RT Dean</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dean, Rt" uniqKey="Dean R">RT Dean</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dean, Rt" uniqKey="Dean R">RT Dean</name>
</author>
<author>
<name sortKey="Bailes, F" uniqKey="Bailes F">F Bailes</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Clarke, Ef" uniqKey="Clarke E">EF Clarke</name>
</author>
<author>
<name sortKey="Krumhansl, Cl" uniqKey="Krumhansl C">CL Krumhansl</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bailes, F" uniqKey="Bailes F">F Bailes</name>
</author>
<author>
<name sortKey="Dean, Rt" uniqKey="Dean R">RT Dean</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Balkwill, L L" uniqKey="Balkwill L">L-L Balkwill</name>
</author>
<author>
<name sortKey="Thompson, Wf" uniqKey="Thompson W">WF Thompson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Balkwill, L L" uniqKey="Balkwill L">L-L Balkwill</name>
</author>
<author>
<name sortKey="Thompson, Wf" uniqKey="Thompson W">WF Thompson</name>
</author>
<author>
<name sortKey="Matsunaga, R" uniqKey="Matsunaga R">R Matsunaga</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dean, Rt" uniqKey="Dean R">RT Dean</name>
</author>
<author>
<name sortKey="Bailes, F" uniqKey="Bailes F">F Bailes</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dean, Rt" uniqKey="Dean R">RT Dean</name>
</author>
<author>
<name sortKey="Bailes, F" uniqKey="Bailes F">F Bailes</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Olsen, Kn" uniqKey="Olsen K">KN Olsen</name>
</author>
<author>
<name sortKey="Dean, Rt" uniqKey="Dean R">RT Dean</name>
</author>
<author>
<name sortKey="Stevens, Cj" uniqKey="Stevens C">CJ Stevens</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Olsen, Kn" uniqKey="Olsen K">KN Olsen</name>
</author>
<author>
<name sortKey="Dean, Rt" uniqKey="Dean R">RT Dean</name>
</author>
<author>
<name sortKey="Stevens, Cj" uniqKey="Stevens C">CJ Stevens</name>
</author>
<author>
<name sortKey="Bailes, F" uniqKey="Bailes F">F Bailes</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bregman, As" uniqKey="Bregman A">AS Bregman</name>
</author>
<author>
<name sortKey="Dannenbring, Gl" uniqKey="Dannenbring G">GL Dannenbring</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Iverson, P" uniqKey="Iverson P">P Iverson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sandell, Gj" uniqKey="Sandell G">GJ Sandell</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bailes, F" uniqKey="Bailes F">F Bailes</name>
</author>
<author>
<name sortKey="Dean, Rt" uniqKey="Dean R">RT Dean</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bailes, F" uniqKey="Bailes F">F Bailes</name>
</author>
<author>
<name sortKey="Dean, Rt" uniqKey="Dean R">RT Dean</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bailes, F" uniqKey="Bailes F">F Bailes</name>
</author>
<author>
<name sortKey="Dean, Rt" uniqKey="Dean R">RT Dean</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Olsen, Kn" uniqKey="Olsen K">KN Olsen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Olsen, Kn" uniqKey="Olsen K">KN Olsen</name>
</author>
<author>
<name sortKey="Stevens, Cj" uniqKey="Stevens C">CJ Stevens</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jesteadt, W" uniqKey="Jesteadt W">W Jesteadt</name>
</author>
<author>
<name sortKey="Bacon, Sp" uniqKey="Bacon S">SP Bacon</name>
</author>
<author>
<name sortKey="Lehman, Jr" uniqKey="Lehman J">JR Lehman</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Klien, V" uniqKey="Klien V">V Klien</name>
</author>
<author>
<name sortKey="Grill, T" uniqKey="Grill T">T Grill</name>
</author>
<author>
<name sortKey="Flexer, A" uniqKey="Flexer A">A Flexer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Siedenburg, K" uniqKey="Siedenburg K">K Siedenburg</name>
</author>
<author>
<name sortKey="Fujinaga, I" uniqKey="Fujinaga I">I Fujinaga</name>
</author>
<author>
<name sortKey="Mcadams, S" uniqKey="Mcadams S">S McAdams</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cutler, A" uniqKey="Cutler A">A Cutler</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Patel, Ad" uniqKey="Patel A">AD Patel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kuhl, Pk" uniqKey="Kuhl P">PK Kuhl</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zhao, Tc" uniqKey="Zhao T">TC Zhao</name>
</author>
<author>
<name sortKey="Kuhl, Pk" uniqKey="Kuhl P">PK Kuhl</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pe A, M" uniqKey="Pe A M">M Peña</name>
</author>
<author>
<name sortKey="Bonatti, Ll" uniqKey="Bonatti L">LL Bonatti</name>
</author>
<author>
<name sortKey="Nespor, M" uniqKey="Nespor M">M Nespor</name>
</author>
<author>
<name sortKey="Mehler, J" uniqKey="Mehler J">J Mehler</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Johnson, Ek" uniqKey="Johnson E">EK Johnson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Seidl, A" uniqKey="Seidl A">A Seidl</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="De Cheveigne, A" uniqKey="De Cheveigne A">A de Cheveigné</name>
</author>
<author>
<name sortKey="Plack, Cj" uniqKey="Plack C">CJ Plack</name>
</author>
<author>
<name sortKey="Fay, Rr" uniqKey="Fay R">RR Fay</name>
</author>
<author>
<name sortKey="Oxenham, Aj" uniqKey="Oxenham A">AJ Oxenham</name>
</author>
<author>
<name sortKey="Popper, An" uniqKey="Popper A">AN Popper</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Saffran, Jr" uniqKey="Saffran J">JR Saffran</name>
</author>
<author>
<name sortKey="Aslin, Rn" uniqKey="Aslin R">RN Aslin</name>
</author>
<author>
<name sortKey="Newport, El" uniqKey="Newport E">EL Newport</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Frost, Rl" uniqKey="Frost R">RL Frost</name>
</author>
<author>
<name sortKey="Monaghan, P" uniqKey="Monaghan P">P Monaghan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mcnealy, K" uniqKey="Mcnealy K">K McNealy</name>
</author>
<author>
<name sortKey="Mazziotta, Jc" uniqKey="Mazziotta J">JC Mazziotta</name>
</author>
<author>
<name sortKey="Dapretto, M" uniqKey="Dapretto M">M Dapretto</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Palmer, Sd" uniqKey="Palmer S">SD Palmer</name>
</author>
<author>
<name sortKey="Mattys, Sl" uniqKey="Mattys S">SL Mattys</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hawthorne, K" uniqKey="Hawthorne K">K Hawthorne</name>
</author>
<author>
<name sortKey="Rudat, L" uniqKey="Rudat L">L Rudat</name>
</author>
<author>
<name sortKey="Gerken, L" uniqKey="Gerken L">L Gerken</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="M Nnel, C" uniqKey="M Nnel C">C Männel</name>
</author>
<author>
<name sortKey="Friederici, Ad" uniqKey="Friederici A">AD Friederici</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Seidl, A" uniqKey="Seidl A">A Seidl</name>
</author>
<author>
<name sortKey="Johnson, Ek" uniqKey="Johnson E">EK Johnson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mcadams, S" uniqKey="Mcadams S">S McAdams</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mcadams, S" uniqKey="Mcadams S">S McAdams</name>
</author>
<author>
<name sortKey="Winsberg, S" uniqKey="Winsberg S">S Winsberg</name>
</author>
<author>
<name sortKey="Donnadieu, S" uniqKey="Donnadieu S">S Donnadieu</name>
</author>
<author>
<name sortKey="De Soete, G" uniqKey="De Soete G">G De Soete</name>
</author>
<author>
<name sortKey="Krimphoff, J" uniqKey="Krimphoff J">J Krimphoff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Peeters, G" uniqKey="Peeters G">G Peeters</name>
</author>
<author>
<name sortKey="Giordano, Bl" uniqKey="Giordano B">BL Giordano</name>
</author>
<author>
<name sortKey="Susini, P" uniqKey="Susini P">P Susini</name>
</author>
<author>
<name sortKey="Misdariis, N" uniqKey="Misdariis N">N Misdariis</name>
</author>
<author>
<name sortKey="Mcadams, S" uniqKey="Mcadams S">S McAdams</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dean, Rt" uniqKey="Dean R">RT Dean</name>
</author>
<author>
<name sortKey="Bailes, F" uniqKey="Bailes F">F Bailes</name>
</author>
<author>
<name sortKey="Schubert, E" uniqKey="Schubert E">E Schubert</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jahnke, Jc" uniqKey="Jahnke J">JC Jahnke</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dean, Rt" uniqKey="Dean R">RT Dean</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bao, Y" uniqKey="Bao Y">Y Bao</name>
</author>
<author>
<name sortKey="Dai, H" uniqKey="Dai H">H Dai</name>
</author>
<author>
<name sortKey="Wang, T" uniqKey="Wang T">T Wang</name>
</author>
<author>
<name sortKey="Chuang, S K" uniqKey="Chuang S">S-K Chuang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cox, Dr" uniqKey="Cox D">DR Cox</name>
</author>
<author>
<name sortKey="Oakes, D" uniqKey="Oakes D">D Oakes</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bosch, Lt" uniqKey="Bosch L">Lt Bosch</name>
</author>
<author>
<name sortKey="Oostdijk, N" uniqKey="Oostdijk N">N Oostdijk</name>
</author>
<author>
<name sortKey="Boves, L" uniqKey="Boves L">L Boves</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gingras, B" uniqKey="Gingras B">B Gingras</name>
</author>
<author>
<name sortKey="Pearce, Mt" uniqKey="Pearce M">MT Pearce</name>
</author>
<author>
<name sortKey="Goodchild, M" uniqKey="Goodchild M">M Goodchild</name>
</author>
<author>
<name sortKey="Dean, Rt" uniqKey="Dean R">RT Dean</name>
</author>
<author>
<name sortKey="Wiggins, G" uniqKey="Wiggins G">G Wiggins</name>
</author>
<author>
<name sortKey="Mcadams, S" uniqKey="Mcadams S">S McAdams</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gaver, Ww" uniqKey="Gaver W">WW Gaver</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gaver, Ww" uniqKey="Gaver W">WW Gaver</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Neuhoff, Jg" uniqKey="Neuhoff J">JG Neuhoff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Piantadosi, St" uniqKey="Piantadosi S">ST Piantadosi</name>
</author>
<author>
<name sortKey="Tily, H" uniqKey="Tily H">H Tily</name>
</author>
<author>
<name sortKey="Gibson, E" uniqKey="Gibson E">E Gibson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dean, Rt" uniqKey="Dean R">RT Dean</name>
</author>
<author>
<name sortKey="Bailes, F" uniqKey="Bailes F">F Bailes</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Misdariis, N" uniqKey="Misdariis N">N Misdariis</name>
</author>
<author>
<name sortKey="Minard, A" uniqKey="Minard A">A Minard</name>
</author>
<author>
<name sortKey="Susini, P" uniqKey="Susini P">P Susini</name>
</author>
<author>
<name sortKey="Lemaitre, G" uniqKey="Lemaitre G">G Lemaitre</name>
</author>
<author>
<name sortKey="Mcadams, S" uniqKey="Mcadams S">S McAdams</name>
</author>
<author>
<name sortKey="Parizet, E" uniqKey="Parizet E">E Parizet</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Landy, L" uniqKey="Landy L">L Landy</name>
</author>
<author>
<name sortKey="Dean, Rt" uniqKey="Dean R">RT Dean</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Galantucci, B" uniqKey="Galantucci B">B Galantucci</name>
</author>
<author>
<name sortKey="Fowler, Ca" uniqKey="Fowler C">CA Fowler</name>
</author>
<author>
<name sortKey="Turvey, Mt" uniqKey="Turvey M">MT Turvey</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Massaro, Dw" uniqKey="Massaro D">DW Massaro</name>
</author>
<author>
<name sortKey="Chen, Th" uniqKey="Chen T">TH Chen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="God Y, Ri" uniqKey="God Y R">RI Godøy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="God Y, Ri" uniqKey="God Y R">RI Godøy</name>
</author>
<author>
<name sortKey="Jensenius, Ar" uniqKey="Jensenius A">AR Jensenius</name>
</author>
<author>
<name sortKey="Nymoen, K" uniqKey="Nymoen K">K Nymoen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Grossberg, S" uniqKey="Grossberg S">S Grossberg</name>
</author>
<author>
<name sortKey="Myers, Cw" uniqKey="Myers C">CW Myers</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bailes, F" uniqKey="Bailes F">F Bailes</name>
</author>
<author>
<name sortKey="Dean, Rt" uniqKey="Dean R">RT Dean</name>
</author>
<author>
<name sortKey="Pearce, Mt" uniqKey="Pearce M">MT Pearce</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="God Y, Ri" uniqKey="God Y R">RI Godøy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dean, Rt" uniqKey="Dean R">RT Dean</name>
</author>
<author>
<name sortKey="Bailes, F" uniqKey="Bailes F">F Bailes</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hutchison, Jl" uniqKey="Hutchison J">JL Hutchison</name>
</author>
<author>
<name sortKey="Hubbard, Tl" uniqKey="Hubbard T">TL Hubbard</name>
</author>
<author>
<name sortKey="Hubbard, Na" uniqKey="Hubbard N">NA Hubbard</name>
</author>
<author>
<name sortKey="Brigante, R" uniqKey="Brigante R">R Brigante</name>
</author>
<author>
<name sortKey="Rypma, B" uniqKey="Rypma B">B Rypma</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cunillera, T" uniqKey="Cunillera T">T Cunillera</name>
</author>
<author>
<name sortKey="Laine, M" uniqKey="Laine M">M Laine</name>
</author>
<author>
<name sortKey="Antoni, R F" uniqKey="Antoni R">R-F Antoni</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Caclin, A" uniqKey="Caclin A">A Caclin</name>
</author>
<author>
<name sortKey="Mcadams, S" uniqKey="Mcadams S">S McAdams</name>
</author>
<author>
<name sortKey="Smith, Bk" uniqKey="Smith B">BK Smith</name>
</author>
<author>
<name sortKey="Winsberg, S" uniqKey="Winsberg S">S Winsberg</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Olsen, Kn" uniqKey="Olsen K">KN Olsen</name>
</author>
<author>
<name sortKey="Stevens, Cj" uniqKey="Stevens C">CJ Stevens</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">PLoS One</journal-id>
<journal-id journal-id-type="iso-abbrev">PLoS ONE</journal-id>
<journal-id journal-id-type="publisher-id">plos</journal-id>
<journal-id journal-id-type="pmc">plosone</journal-id>
<journal-title-group>
<journal-title>PLoS ONE</journal-title>
</journal-title-group>
<issn pub-type="epub">1932-6203</issn>
<publisher>
<publisher-name>Public Library of Science</publisher-name>
<publisher-loc>San Francisco, CA USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">27997625</article-id>
<article-id pub-id-type="pmc">5172564</article-id>
<article-id pub-id-type="doi">10.1371/journal.pone.0167643</article-id>
<article-id pub-id-type="publisher-id">PONE-D-16-27259</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Research Article</subject>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Biology and Life Sciences</subject>
<subj-group>
<subject>Neuroscience</subject>
<subj-group>
<subject>Cognitive Science</subject>
<subj-group>
<subject>Cognitive Psychology</subject>
<subj-group>
<subject>Music Cognition</subject>
<subj-group>
<subject>Music Perception</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Biology and Life Sciences</subject>
<subj-group>
<subject>Psychology</subject>
<subj-group>
<subject>Cognitive Psychology</subject>
<subj-group>
<subject>Music Cognition</subject>
<subj-group>
<subject>Music Perception</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Social Sciences</subject>
<subj-group>
<subject>Psychology</subject>
<subj-group>
<subject>Cognitive Psychology</subject>
<subj-group>
<subject>Music Cognition</subject>
<subj-group>
<subject>Music Perception</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Biology and Life Sciences</subject>
<subj-group>
<subject>Neuroscience</subject>
<subj-group>
<subject>Sensory Perception</subject>
<subj-group>
<subject>Music Perception</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Biology and Life Sciences</subject>
<subj-group>
<subject>Psychology</subject>
<subj-group>
<subject>Sensory Perception</subject>
<subj-group>
<subject>Music Perception</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Social Sciences</subject>
<subj-group>
<subject>Psychology</subject>
<subj-group>
<subject>Sensory Perception</subject>
<subj-group>
<subject>Music Perception</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Physical Sciences</subject>
<subj-group>
<subject>Physics</subject>
<subj-group>
<subject>Acoustics</subject>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Biology and Life Sciences</subject>
<subj-group>
<subject>Neuroscience</subject>
<subj-group>
<subject>Sensory Perception</subject>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Biology and Life Sciences</subject>
<subj-group>
<subject>Psychology</subject>
<subj-group>
<subject>Sensory Perception</subject>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Social Sciences</subject>
<subj-group>
<subject>Psychology</subject>
<subj-group>
<subject>Sensory Perception</subject>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Social Sciences</subject>
<subj-group>
<subject>Linguistics</subject>
<subj-group>
<subject>Speech</subject>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Biology and Life Sciences</subject>
<subj-group>
<subject>Neuroscience</subject>
<subj-group>
<subject>Sensory Perception</subject>
<subj-group>
<subject>Hearing</subject>
<subj-group>
<subject>Pitch Perception</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Biology and Life Sciences</subject>
<subj-group>
<subject>Psychology</subject>
<subj-group>
<subject>Sensory Perception</subject>
<subj-group>
<subject>Hearing</subject>
<subj-group>
<subject>Pitch Perception</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Social Sciences</subject>
<subj-group>
<subject>Psychology</subject>
<subj-group>
<subject>Sensory Perception</subject>
<subj-group>
<subject>Hearing</subject>
<subj-group>
<subject>Pitch Perception</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Biology and Life Sciences</subject>
<subj-group>
<subject>Neuroscience</subject>
<subj-group>
<subject>Cognitive Science</subject>
<subj-group>
<subject>Cognitive Psychology</subject>
<subj-group>
<subject>Language</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Biology and Life Sciences</subject>
<subj-group>
<subject>Psychology</subject>
<subj-group>
<subject>Cognitive Psychology</subject>
<subj-group>
<subject>Language</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Social Sciences</subject>
<subj-group>
<subject>Psychology</subject>
<subj-group>
<subject>Cognitive Psychology</subject>
<subj-group>
<subject>Language</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Social Sciences</subject>
<subj-group>
<subject>Linguistics</subject>
<subj-group>
<subject>Language Acquisition</subject>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Biology and Life Sciences</subject>
<subj-group>
<subject>Organisms</subject>
<subj-group>
<subject>Animals</subject>
<subj-group>
<subject>Vertebrates</subject>
<subj-group>
<subject>Amniotes</subject>
<subj-group>
<subject>Birds</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>What Constitutes a Phrase in Sound-Based Music? A Mixed-Methods Investigation of Perception and Acoustics</article-title>
<alt-title alt-title-type="running-head">Phrasing in Sound-Based Music</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Olsen</surname>
<given-names>Kirk N.</given-names>
</name>
<xref ref-type="aff" rid="aff001">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="aff002">
<sup>2</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Dean</surname>
<given-names>Roger T.</given-names>
</name>
<xref ref-type="aff" rid="aff001">
<sup>1</sup>
</xref>
<xref ref-type="corresp" rid="cor001">*</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Leung</surname>
<given-names>Yvonne</given-names>
</name>
<xref ref-type="aff" rid="aff001">
<sup>1</sup>
</xref>
</contrib>
</contrib-group>
<aff id="aff001">
<label>1</label>
<addr-line>The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, New South Wales, Australia</addr-line>
</aff>
<aff id="aff002">
<label>2</label>
<addr-line>Department of Psychology, Macquarie University, Sydney, New South Wales, Australia</addr-line>
</aff>
<contrib-group>
<contrib contrib-type="editor">
<name>
<surname>Jaencke</surname>
<given-names>Lutz</given-names>
</name>
<role>Editor</role>
<xref ref-type="aff" rid="edit1"></xref>
</contrib>
</contrib-group>
<aff id="edit1">
<addr-line>University of Zurich, SWITZERLAND</addr-line>
</aff>
<author-notes>
<fn fn-type="COI-statement" id="coi001">
<p>
<bold>Competing Interests: </bold>
The authors have declared that no competing interests exist.</p>
</fn>
<fn fn-type="con">
<p>
<list list-type="simple">
<list-item>
<p>
<bold>Conceptualization:</bold>
KO RD.</p>
</list-item>
<list-item>
<p>
<bold>Data curation:</bold>
KO.</p>
</list-item>
<list-item>
<p>
<bold>Formal analysis:</bold>
KO RD YL.</p>
</list-item>
<list-item>
<p>
<bold>Funding acquisition:</bold>
RD.</p>
</list-item>
<list-item>
<p>
<bold>Investigation:</bold>
KO.</p>
</list-item>
<list-item>
<p>
<bold>Methodology:</bold>
KO RD.</p>
</list-item>
<list-item>
<p>
<bold>Project administration:</bold>
KO RD.</p>
</list-item>
<list-item>
<p>
<bold>Resources:</bold>
KO RD.</p>
</list-item>
<list-item>
<p>
<bold>Software:</bold>
RD YL.</p>
</list-item>
<list-item>
<p>
<bold>Supervision:</bold>
RD.</p>
</list-item>
<list-item>
<p>
<bold>Validation:</bold>
KO RD.</p>
</list-item>
<list-item>
<p>
<bold>Visualization:</bold>
KO RD.</p>
</list-item>
<list-item>
<p>
<bold>Writing – original draft:</bold>
KO RD YL.</p>
</list-item>
<list-item>
<p>
<bold>Writing – review & editing:</bold>
KO RD YL.</p>
</list-item>
</list>
</p>
</fn>
<corresp id="cor001">* E-mail:
<email>roger.dean@westernsydney.edu.au</email>
</corresp>
</author-notes>
<pub-date pub-type="epub">
<day>20</day>
<month>12</month>
<year>2016</year>
</pub-date>
<pub-date pub-type="collection">
<year>2016</year>
</pub-date>
<volume>11</volume>
<issue>12</issue>
<elocation-id>e0167643</elocation-id>
<history>
<date date-type="received">
<day>8</day>
<month>7</month>
<year>2016</year>
</date>
<date date-type="accepted">
<day>17</day>
<month>11</month>
<year>2016</year>
</date>
</history>
<permissions>
<copyright-statement>© 2016 Olsen et al</copyright-statement>
<copyright-year>2016</copyright-year>
<copyright-holder>Olsen et al</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<license-p>This is an open access article distributed under the terms of the
<ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution License</ext-link>
, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.</license-p>
</license>
</permissions>
<self-uri content-type="pdf" xlink:href="pone.0167643.pdf"></self-uri>
<abstract>
<p>Phrasing facilitates the organization of auditory information and is central to speech and music. Not surprisingly, aspects of changing intensity, rhythm, and pitch are key determinants of musical phrases and their boundaries in instrumental note-based music. Different kinds of speech (such as tone- vs. stress-languages) share these features in different proportions and form an instructive comparison. However, little is known about whether or how musical phrasing is perceived in sound-based music, where the basic musical unit from which a piece is created is commonly non-instrumental continuous
<italic>sounds</italic>
, rather than instrumental discontinuous
<italic>notes</italic>
. This issue forms the target of the present paper. Twenty participants (17 untrained in music) were presented with six stimuli derived from sound-based music, note-based music, and environmental sound. Their task was to indicate each occurrence of a perceived phrase and qualitatively describe key characteristics of the stimulus associated with each phrase response. It was hypothesized that sound-based music does elicit phrase perception, and that this is primarily associated with temporal changes in intensity and timbre, rather than rhythm and pitch. Results supported this hypothesis. Qualitative analysis of participant descriptions showed that for sound-based music, the majority of perceived phrases were associated with intensity or timbral change. For the note-based piano piece, rhythm was the main theme associated with perceived musical phrasing. We modeled the occurrence in time of perceived musical phrases with recurrent event ‘hazard’ analyses using time-series data representing acoustic predictors associated with intensity, spectral flatness, and rhythmic density. Acoustic intensity and timbre (represented here by spectral flatness) were strong predictors of perceived musical phrasing in sound-based music, and rhythm was only predictive for the piano piece. A further analysis including five additional spectral measures linked to timbre strengthened the models. Overall, results show that even when little of the pitch and rhythm information important for phrasing in note-based music is available, phrasing is still perceived, primarily in response to changes of intensity and timbre. Implications for electroacoustic music composition and music recommender systems are discussed.</p>
</abstract>
<funding-group>
<award-group id="award001">
<funding-source>
<institution-wrap>
<institution-id institution-id-type="funder-id">http://dx.doi.org/10.13039/501100000923</institution-id>
<institution>Australian Research Council</institution>
</institution-wrap>
</funding-source>
<award-id>LP150100487</award-id>
<principal-award-recipient>
<name>
<surname>Dean</surname>
<given-names>Roger T.</given-names>
</name>
</principal-award-recipient>
</award-group>
<funding-statement>This research was funded by an Australian Research Council (ARC) Linkage Project Grant (LP150100487) awarded to RD. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.</funding-statement>
</funding-group>
<counts>
<fig-count count="1"></fig-count>
<table-count count="7"></table-count>
<page-count count="29"></page-count>
</counts>
<custom-meta-group>
<custom-meta id="data-availability">
<meta-name>Data Availability</meta-name>
<meta-value>All relevant data are within the paper and its Supporting Information files.</meta-value>
</custom-meta>
</custom-meta-group>
</article-meta>
<notes>
<title>Data Availability</title>
<p>All relevant data are within the paper and its Supporting Information files.</p>
</notes>
</front>
<body>
<sec id="sec001">
<title>1. Introduction</title>
<p>Phrasing is important for structuring auditory streams and facilitates the organization of auditory information [
<xref rid="pone.0167643.ref001" ref-type="bibr">1</xref>
,
<xref rid="pone.0167643.ref002" ref-type="bibr">2</xref>
]. Speech and music are two domains where phrasing is commonplace, though the descriptors used are very different. In speech, a phrase constitutes a low level of a complex hierarchy. A phrase usually comprises a few words that taken together in larger clauses can constitute meaning, most commonly in the form of a sentence. To determine boundaries between segments of any kind in speech (notably between words, clauses, and sentences), infant and adult listeners use acoustic cues such as changes in intensity/amplitude, rhythmic (durational) pattern, and pitch contour [
<xref rid="pone.0167643.ref003" ref-type="bibr">3</xref>
<xref rid="pone.0167643.ref005" ref-type="bibr">5</xref>
]. In note-based instrumental or vocal music, pitch-related aspects of the music are primary determinants of how listeners perceive and segment a musical phrase; for example, contrasts of pitch range and changes in melodic contour and tonal stress [
<xref rid="pone.0167643.ref001" ref-type="bibr">1</xref>
,
<xref rid="pone.0167643.ref006" ref-type="bibr">6</xref>
<xref rid="pone.0167643.ref009" ref-type="bibr">9</xref>
]. As in speech, the importance of pitch-related aspects of music for segmentation is observed at very early stages of development: infants six months of age prefer to listen to segments of music delineated by a drop in pitch height, a segment-final increase in pitch duration in a melody, and a predominance of octave simultaneities [
<xref rid="pone.0167643.ref010" ref-type="bibr">10</xref>
]. The binding of instrumental pitched (and unpitched percussive) notes to rhythmic structure is also important in note-based music. That is, given a clear note energy envelope of attack, sustain, and decay, individual notes are separable and delineate rhythms associated with musical phrases.</p>
<p>However, contrast the case of 'sound-based' music, a category description developed by professional music analysts, composers, and improvisers. Sound-based music has been important for at least the last 60 years [
<xref rid="pone.0167643.ref011" ref-type="bibr">11</xref>
]. In characterizing sound-based music, Landy [
<xref rid="pone.0167643.ref012" ref-type="bibr">12</xref>
] recognizes a continuum between it and note-based music. At one extreme, for example, a rhythmic instrumental pop-song or Beethoven's Waldstein Sonata for piano constitutes note-based music. These pieces are characterized by discrete events with onset attacks, decays, offsets, clear rhythmic patterns, and pitch-based melodies and harmonies that are generally realized by human performers. At or near the other extreme fall noise music, much electroacoustic music, and sound art (all illustrated in the works included in the present study). There may be few if any discrete events in sound-based music; that is, events separated by clear acoustic offsets and onsets with nothing intervening. Instead, sound continua are commonplace and often intended as acousmatic music (i.e., music for realization by loudspeakers rather than performers) and sometimes largely or entirely computer generated [
<xref rid="pone.0167643.ref013" ref-type="bibr">13</xref>
]. Consequently, there may be few transparent rhythmic repetitions and pitch may be essentially absent, though both can be readily asserted (as commonly exploited in electronic dance music). From a musician's compositional and analytical point of view (but not necessarily the point of view of untrained listeners), the focus of such music moves from controlling changes in pitch, harmony, rhythm, and intensity, to changes in timbre and intensity. There are numerous intermediate forms. For example, some glitch music creates obvious recurrent rhythms from digital artefacts superimposed on sound continua. Similarly, the genre of ‘drum and bass’ often comprises a sound continuum in conjunction with rhythmic percussive and bass frequency patterns, the latter sometimes not articulated with the sharp attacks and decays that occur in most instrumental music. Speech (and less so singing) is also an intermediate form, and its digital transforms are commonly used in electroacoustic sound-based composition [
<xref rid="pone.0167643.ref014" ref-type="bibr">14</xref>
]. The sound sources most fundamental to biology, those of the environment, also normally fall into this intermediate sound-based continuum, and we include an example of such sound in our study.</p>
<p>Given this theoretical contrast between note- and sound-based music, we investigate here whether phrasing in sound-based music can be perceived by untrained listeners, and if so, what might be the structural elements important for such perception of musical phrasing when common pitch and rhythmic aspects of note-based music are removed or reduced, at least from the perspective of the music creators. What we know about note-based music’s formal description and the perception of its musical phrasing may not translate to sound-based music. In the presence of limited pitch and rhythmic information (unlike note-based music), we propose that additional acoustic attributes such as changes in timbre may join with changes of intensity to explain listeners’ perception of phrases in sound-based music. We do not anticipate that timbre and intensity are uninvolved in phrasing in note-based music; they do impact note-based phrasing in both tonal and atonal music [
<xref rid="pone.0167643.ref006" ref-type="bibr">6</xref>
,
<xref rid="pone.0167643.ref007" ref-type="bibr">7</xref>
,
<xref rid="pone.0167643.ref015" ref-type="bibr">15</xref>
]. Rather, we hypothesize that timbre and intensity dominate over pitch- and rhythm-related attributes in sound-based music in particular.</p>
<p>Although most listeners are unfamiliar with sound-based music [
<xref rid="pone.0167643.ref016" ref-type="bibr">16</xref>
], there is strong evidence that listeners unfamiliar with a particular musical style are still able to understand and extract important structural aspects [
<xref rid="pone.0167643.ref017" ref-type="bibr">17</xref>
]. This is especially evident in the context of cross-cultural expression of emotion in music. Listeners unfamiliar with certain culture-specific styles of music are able to successfully extract emotional meaning through acoustic cues common across cultures, even though specific culturally determined cues and conventions are not initially available to them [
<xref rid="pone.0167643.ref017" ref-type="bibr">17</xref>
,
<xref rid="pone.0167643.ref018" ref-type="bibr">18</xref>
]. We suggest that a similar mechanism applies here: although listeners unfamiliar with sound-based music will not necessarily immediately understand or extract the genre-specific conventions of such music, they will nevertheless use available acoustic cues common to most genres (and to environmental or speech sounds) to perceive and segment the unfamiliar music; specifically, timbre and intensity. A now extensive body of work investigating the acoustic factors that predict affective responses to substantial extracts (2–3 minutes in duration) of several pieces of sound-based music supports this view (e.g., [
<xref rid="pone.0167643.ref016" ref-type="bibr">16</xref>
,
<xref rid="pone.0167643.ref019" ref-type="bibr">19</xref>
<xref rid="pone.0167643.ref022" ref-type="bibr">22</xref>
]). In this context, therefore, we will now address what little is already known about perceptual segmentation of sound-based music. We will then review relevant research regarding perceptual segmentation of speech. As mentioned, speech segmentation is considered here in the context of sound-based music because speech is a congener of sound-based music, is widely experienced as environmental sound, and directly informs the design of our study.</p>
<sec id="sec002">
<title>1.1. Perceptual segmentation of sound-based music</title>
<p>The perceptual formation of multiple streams within an ongoing sound structure is often termed auditory scene analysis [
<xref rid="pone.0167643.ref002" ref-type="bibr">2</xref>
]. Specifically, it is known that the formation of continua between tones (such as tones in two different pitch registers) makes it more difficult for listeners to segregate streams of auditory information [
<xref rid="pone.0167643.ref023" ref-type="bibr">23</xref>
]. Segregation is commonly influenced by both static and dynamic aspects of acoustic spectra (e.g., with instrumental timbres, native and modified) [
<xref rid="pone.0167643.ref024" ref-type="bibr">24</xref>
,
<xref rid="pone.0167643.ref025" ref-type="bibr">25</xref>
]. Such segregation data suggest that our sound-based continua may be segmented less frequently than note-based music, and that both static and dynamic spectral aspects may again contribute. Therefore, we aim to detect segmentation amongst successive components of a sound-based piece of music, whether there are multiple sonic streams or not.</p>
<p>There have been only a few experiments investigating the dynamic elements important for event segmentation of sound-based music. For example, Bailes and Dean [
<xref rid="pone.0167643.ref026" ref-type="bibr">26</xref>
<xref rid="pone.0167643.ref028" ref-type="bibr">28</xref>
] used directly apposed and briefly cross-faded pairs of short segments of sound (5 or 11 seconds in duration). They found that listeners were able to perceive segments comprising noise, sine-waves, ‘busy’ sounding electronic segments, chimes, water, and drum loops when the boundaries of perceived segments were characterized by abrupt changes in acoustic intensity (experienced perceptually as loudness; for a review, see [
<xref rid="pone.0167643.ref029" ref-type="bibr">29</xref>
]) and spectral flatness (a measure closely related to perceived timbre [
<xref rid="pone.0167643.ref028" ref-type="bibr">28</xref>
]). There was an asymmetry in boundary perception, in that a pair of sounds were well segmented when the intensity increased or when a frequency band was added, but not vice versa; a result consistent with the effects of auditory masking [
<xref rid="pone.0167643.ref030" ref-type="bibr">30</xref>
,
<xref rid="pone.0167643.ref031" ref-type="bibr">31</xref>
].</p>
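The two acoustic series highlighted in that study, intensity and spectral flatness, can be extracted with standard tools; the following is a hedged illustration using the `librosa` library, where the file name, frame settings, and the 6 dB jump threshold are placeholder assumptions rather than values from the cited experiments.

```python
# Illustrative extraction of intensity (RMS in dB) and spectral flatness time series.
import numpy as np
import librosa

y, sr = librosa.load("excerpt.wav", sr=None, mono=True)   # placeholder file name

frame, hop = 2048, 1024
rms = librosa.feature.rms(y=y, frame_length=frame, hop_length=hop)[0]
intensity_db = 20 * np.log10(rms + 1e-10)                 # rough intensity contour in dB
flatness = librosa.feature.spectral_flatness(y=y, n_fft=frame, hop_length=hop)[0]

times = librosa.frames_to_time(np.arange(len(rms)), sr=sr, hop_length=hop)
# Abrupt rises in either series are candidate boundaries of the kind reported above.
candidates = times[np.where(np.diff(intensity_db) > 6.0)[0] + 1]
```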
<p>In [
<xref rid="pone.0167643.ref014" ref-type="bibr">14</xref>
], a range of excerpts including some sound-based pieces, each of the order of two minutes, was studied (with mostly untrained listeners) for implicit perceptual segmentation by taking a continuous measure of perceived change and using statistical 'change-point' analysis to determine segments. This analysis primarily delineates when a segment (in this case of a time-series of perceived change) differs from another, within defined statistical limits, and not simply a point of change as its name somewhat misleadingly implies. The results showed that segments defined from a musicological point of view, and in some cases on the basis of changing human agency, coincided quite closely with those detected in the continuous perceptions for sound-based, note-based, and poetic artificial-language speech stimuli [
<xref rid="pone.0167643.ref014" ref-type="bibr">14</xref>
]. Acoustic features that might predict this segmentation were not investigated.</p>
<p>In another interesting approach to the question of perceptual segmentation of sound-based music, a 38-second section of an electroacoustic piece,
<italic>Ciguri</italic>
by Chilean composer Felipe Otondo, was presented to 22 participants, 81% of whom were music students or lecturers (i.e., trained musicians) [
<xref rid="pone.0167643.ref032" ref-type="bibr">32</xref>
]. This piece falls into an intermediate grouping between the sound- and note-based music definitions above, mainly because it has an insistent and virtually isochronic rapid percussion attack, together with one or more streams of sustained electroacoustic sound with somewhat clear pitch structure. Participants pressed a button when they perceived a change in the music. Listeners varied from perceiving 0 to 16 segments, with a mean of 3.9 (i.e., a mean duration of about 10 seconds). Kernel density estimation was used to produce a single sequence of data representing the frequency with which the whole group of participants detected a change. This sequence was interpreted as representing a consensus of six segments. The study segmented the piece computationally for comparison with this. Fast Fourier transforms (FFT) of the audio signal (sampled at 20Hz) were used to compute a self-similarity matrix from which a 'novelty' measure across the series was obtained. The resulting sequential distance measures were used to detect peaks that were subsequently taken to be points of segmentation. These methods have been discussed in detail in relation to annotation of acousmatic music [
<xref rid="pone.0167643.ref033" ref-type="bibr">33</xref>
]. Garay [
<xref rid="pone.0167643.ref032" ref-type="bibr">32</xref>
] found considerable similarity between the perceived and computed measures, and semantic analysis suggested important roles of perceived loudness, pitch, and timbre for perceptions.</p>
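As a rough sketch of the self-similarity and novelty approach described above (a Foote-style checkerboard-kernel novelty curve with peak picking), the following code is an assumed reconstruction rather than the pipeline used in the cited study; the frame rate, kernel size, and file name are illustrative.

```python
# Assumed, simplified reconstruction: FFT features -> self-similarity matrix ->
# novelty curve -> peak picking for candidate segmentation points.
import numpy as np
import librosa
from scipy.signal import find_peaks

y, sr = librosa.load("excerpt.wav", sr=None, mono=True)   # placeholder file name
hop = max(1, sr // 20)                                     # ~20 Hz frame rate, as in the text
feats = np.abs(librosa.stft(y, n_fft=4096, hop_length=hop))

# Cosine self-similarity matrix between spectral frames.
norm = feats / (np.linalg.norm(feats, axis=0, keepdims=True) + 1e-10)
ssm = norm.T @ norm

# Novelty: slide a checkerboard kernel along the main diagonal of the SSM.
half = 16                                                  # kernel half-size in frames
edge = np.concatenate([-np.ones(half), np.ones(half)])
kernel = np.outer(edge, edge)                              # +1 within-segment, -1 across
padded = np.pad(ssm, half, mode="constant")
novelty = np.array([np.sum(padded[i:i + 2 * half, i:i + 2 * half] * kernel)
                    for i in range(ssm.shape[0])])

peaks, _ = find_peaks(novelty, distance=half)              # candidate boundary frames
boundary_times = librosa.frames_to_time(peaks, sr=sr, hop_length=hop)
```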
<p>In a follow-up experiment using a stimulus comprising a single-stream sequence of concatenated everyday sounds, but with only 33% of participants being music students or lecturers, participants' perceptions were, interestingly, closer to the computational segmentation in [
<xref rid="pone.0167643.ref014" ref-type="bibr">14</xref>
] than to the concatenation points, even though little attempt was made to smooth the concatenation of the sounds. The computational measure, while using the whole spectrum of the sound by FFT, was not designed in terms of specific spectral components nor to identify acoustic predictors of perceptions.</p>
<p>Although computational annotation of acousmatic music has been successful in defining segments detected by analysts, or sometimes prefigured by composers [
<xref rid="pone.0167643.ref033" ref-type="bibr">33</xref>
], it has to be noted that there is a substantial lack of data on such music (i.e., perceptual responses by a significant number of individuals, whether expert or not). Consistent with this, a recent paper reviews the disparities in outlook and purpose (even towards timbre) between music psychology and other fields, and suggests approaches to bring them together productively [
<xref rid="pone.0167643.ref034" ref-type="bibr">34</xref>
]. The present study begins to address such disparities by investigating predictors amongst acoustic factors, particularly spectral features, which might model and potentially explain listeners' perceptions. We also focus on the majority of musical audiences: people without specific musical training.</p>
</sec>
<sec id="sec003">
<title>1.2. Perceptual segmentation of speech and its relevance to sound-based music</title>
<p>Knowledge of language learning in the speech domain can inform our investigation of perceptual segmentation of sound-based music and musical phrasing. A musical phrase, as used in musicology and delineated in our instructions to participants (see below), is a grouping of multiple distinguishable events, be they notes or sounds, discrete or continuous. The closest analogous units of speech, in which we are almost all expert, are clauses and sentences rather than the usually much shorter linguistic phrases. The vast literature on perception of segments in speech across languages largely focuses on learning and recognition of words, rather than larger units, and on cross-linguistic aspects of language learning across the lifespan (for an in-depth review, see [
<xref rid="pone.0167643.ref035" ref-type="bibr">35</xref>
]). Languages are commonly classified into stress- and tone-languages [
<xref rid="pone.0167643.ref035" ref-type="bibr">35</xref>
,
<xref rid="pone.0167643.ref036" ref-type="bibr">36</xref>
]. In stress languages, units such as words are distinguished from each other to a considerable extent by the patterns of acoustic intensity they contain. In tone languages, a much greater role is played by changes in fundamental frequency (the lowest frequency energy peak in the speech entity spectrum, commonly described as 'pitch') and sometimes consistent relative changes in the first formant and higher partials.</p>
<p>A listener confronted with unfamiliar music, for example from a culture other than their own or from sound-based music (assuming, as is typical, that their prior exposure was primarily to note-based music), is in a position of inexperience somewhat akin to that of a baby learning its native language or, more closely, an adult learning a second language; especially a learner of a tone-based language whose primary language is stress-based, or vice versa. Such a listener may more closely resemble an adult learning a second language because they may have gained fluency in cognizing key features, but at the expense of cognizing contrasting features that are more important to the new kind of music. This process in speech learning is sometimes termed perceptual 'desensitization' [
<xref rid="pone.0167643.ref037" ref-type="bibr">37</xref>
,
<xref rid="pone.0167643.ref038" ref-type="bibr">38</xref>
].</p>
<p>So what does our knowledge of the learning of language and speech suggest about the learning and segmentation of sound-based music and musical phrasing? First, studies of artificial languages (constructed from phonemic units similar to those of a genuine language, but concatenated without forming genuine words) or artificial phoneme contrasts reveal that learning aspects of the segmentation of languages can be very fast, sometimes occurring even within two minutes [
<xref rid="pone.0167643.ref039" ref-type="bibr">39</xref>
]. Second, such studies suggest that a few features are very important for the learning of word-boundaries. A detailed review of infant language development [
<xref rid="pone.0167643.ref040" ref-type="bibr">40</xref>
] points out that spoken words run into each other, blurring word boundaries; a blurring, we argue, that may be similar to the continuum between note- and sound-based music. Once our native language is learnt, we begin to sustain the illusion that boundaries are clear [
<xref rid="pone.0167643.ref035" ref-type="bibr">35</xref>
]. Learning the segmentation in the first place (often described as the 'bootstrapping problem') is largely dependent on developing relative weightings for the relevance of pause duration, pitch, and pre-boundary lengthening [
<xref rid="pone.0167643.ref040" ref-type="bibr">40</xref>
,
<xref rid="pone.0167643.ref041" ref-type="bibr">41</xref>
]. Pause does not refer to silence, but rather the relative length of time between increments of acoustic intensity. Pre-boundary lengthening refers to the fact that the length of the smallest phonemic units comprising a word is often greatest for the last. Note that in acoustic terms applicable to music, these factors could be considered, respectively, primarily as a pause in event activity, a timbral flux (the 'pitch' component), and a temporal elongation of a particular timbral element. In each case, these are delineated in conjunction with changes of acoustic intensity.</p>
<p>Successful computational speech segmentation based on intensity temporal profiles within narrow frequency bands across the spectrum [
<xref rid="pone.0167643.ref042" ref-type="bibr">42</xref>
] supports the suggestion that it is multiple frequency components of speech that are important (i.e., timbre) and not just 'pitch' per se, because pitch is a single dimensional representation of a sound, even when determined by (multi-dimensional) spectral patterns [
<xref rid="pone.0167643.ref043" ref-type="bibr">43</xref>
]. Statistical learning of adjacent and non-adjacent feature-dependency is important [
<xref rid="pone.0167643.ref044" ref-type="bibr">44</xref>
,
<xref rid="pone.0167643.ref045" ref-type="bibr">45</xref>
], but infants probably cannot solely rely on this to ‘bootstrap’ [
<xref rid="pone.0167643.ref040" ref-type="bibr">40</xref>
,
<xref rid="pone.0167643.ref046" ref-type="bibr">46</xref>
,
<xref rid="pone.0167643.ref047" ref-type="bibr">47</xref>
].</p>
<p>How do these aspects of learning word boundaries contribute to learning linguistic phrases, clauses, and sentence segmentation and their hierarchies? Johnson [
<xref rid="pone.0167643.ref040" ref-type="bibr">40</xref>
] emphasizes that an infant learns segmentation with respect to all linguistic units in parallel, with progressive acquisition of each ability. However, at the level of larger syntactic units such as clauses, the three features just emphasized (pause duration, pitch, and pre-boundary lengthening) now contribute to speech 'prosody' together with more substantial variations in acoustic intensity. Such prosody often systematically delineates the larger syntactic units and their hierarchies [
<xref rid="pone.0167643.ref048" ref-type="bibr">48</xref>
]. Again, prosodic cues are weighted [
<xref rid="pone.0167643.ref041" ref-type="bibr">41</xref>
] and this can vary between individuals and languages [
<xref rid="pone.0167643.ref049" ref-type="bibr">49</xref>
]. The 'Edge Hypothesis' has been developed to explain universal cues to word boundaries delivered by these features, but with reference not only to words, but also speech phrases, clauses, and sentences [
<xref rid="pone.0167643.ref050" ref-type="bibr">50</xref>
].</p>
<p>Even this brief summary of the speech literature indicates that besides pitch, timbre, and intensity, an investigation of phrase structure in music needs to emphasize the consideration of silences, or at least 'pauses' in activity characterized by decreased acoustic intensity and less frequent small events. In the case of sound-based music, we cannot readily estimate the frequency of small events (given they are presently undefined), and so pauses are represented in the intensity flow and particularly in the extent of variation relative to its mean value. We include this parameter in our analyses. For a more in-depth review of speech-music parallels, see [
<xref rid="pone.0167643.ref036" ref-type="bibr">36</xref>
].</p>
</sec>
<sec id="sec004">
<title>1.3. Acoustic features of timbre in sound-based music</title>
<p>To investigate the role of intensity and timbre in listeners’ perception of musical phrases in sound-based music, we first measured acoustic intensity as a global proxy for perceived loudness, and spectral flatness as a global proxy for perceived timbre. However, numerous additional spectral parameters have previously been linked to the elusive notion of ‘timbre’ [
<xref rid="pone.0167643.ref051" ref-type="bibr">51</xref>
,
<xref rid="pone.0167643.ref052" ref-type="bibr">52</xref>
]. Therefore, a more detailed timbral analysis was also conducted here using a range of additional parameters based on the recommendations in [
<xref rid="pone.0167643.ref053" ref-type="bibr">53</xref>
], where development and application of a Matlab Timbre Toolbox for investigating perception of dissimilarity between pairs of short instrumental sounds is described. The stimuli studied in [
<xref rid="pone.0167643.ref053" ref-type="bibr">53</xref>
] were thus in dramatic contrast to those of the present sound-based music, so the recommendations of the paper are treated cautiously here. Nevertheless, we followed the general suggestion to use a measure of both central tendency and temporal dispersion of each descriptor, all of which are here time varying. As a result, the parameters related to timbre that were used in our additional analysis of perceived musical phrasing in sound-based music were spectral centroid, spectral flux, spectral spread, inharmonicity and roughness. These were the features which showed inter-correlations of < .5 in previous analyses of instrumental sounds [
<xref rid="pone.0167643.ref053" ref-type="bibr">53</xref>
].</p>
</sec>
<sec id="sec005">
<title>1.4. Aim, design, and hypotheses</title>
<p>The aim of the present study was to investigate whether musically untrained listeners could perceive phrases in the varied music presented, and if so, to determine the structural elements important for perception of musical phrasing in sound-based music in particular; music where pitch and rhythmic aspects as enunciated in instrumental note-based music are removed or reduced and transformed. The work in [
<xref rid="pone.0167643.ref026" ref-type="bibr">26</xref>
<xref rid="pone.0167643.ref028" ref-type="bibr">28</xref>
] showed that specific acoustic parameters related to changes in loudness and timbre can elicit perceived segmentation, even when no familiar note-based cues were present. However, the segments in their stimuli were temporally predefined on each occasion to either 5 s or 11 s. In the present study, we do not predetermine phrase durations, but rather, allow the listener to choose when a phrase has ended. It is nevertheless desirable to define the minimum temporal window for phrase perception in our study. To this end, recent work on the acoustic influences of continuously perceived affect has shown that such influences operate over periods of approximately five seconds and longer [
<xref rid="pone.0167643.ref016" ref-type="bibr">16</xref>
,
<xref rid="pone.0167643.ref019" ref-type="bibr">19</xref>
<xref rid="pone.0167643.ref022" ref-type="bibr">22</xref>
,
<xref rid="pone.0167643.ref054" ref-type="bibr">54</xref>
]. Taken together with the work of Bailes and Dean [
<xref rid="pone.0167643.ref026" ref-type="bibr">26</xref>
<xref rid="pone.0167643.ref028" ref-type="bibr">28</xref>
], this suggests a temporal window of five seconds as an appropriate minimum duration to address perceived musical phrasing in sound-based music. Furthermore, for perceived phrases longer than five seconds, acoustic attributes associated with the final five seconds of a perceived musical phrase may be disproportionately influential; a kind of ‘recency’ effect that commonly explains a memory recall advantage for the most recent item in a series of items [
<xref rid="pone.0167643.ref055" ref-type="bibr">55</xref>
]. Therefore, we investigate the importance of acoustic intensity and spectral parameters for listeners’ perception of musical phrases by analyzing: (1) the acoustic content across each entire perceived phrase (a whole-phrase ‘global’ approach); and (2) the acoustic content comprising the final five seconds of a perceived phrase (a ‘terminal portion’ approach).</p>
<p>We used a mixed methods approach with a range of auditory stimuli comprising sound-based music, note-based instrumental music, and environmental sound (see
<xref ref-type="sec" rid="sec006">Methods</xref>
section for more detail). Participants made qualitative descriptions of each perceived musical phrase throughout each stimulus, and timings reflecting the onset and offset boundaries of each perceived musical phrase were measured. Thematic analysis of qualitative data led to three categories describing the most salient aspect of all perceived phrases: ‘Intensity’, ‘Timbre’, and particularly in the case of the instrumental note-based music we studied, ‘Rhythm’. These qualitative descriptions for each perceived phrase are presented in the tables in the Supporting Information file (
<xref ref-type="supplementary-material" rid="pone.0167643.s002">S1 Appendix</xref>
). Analysis of acoustic features associated with these three categories was carried out to extract time-series data for intensity, spectral flatness (timbre), and rhythmic density (for the instrumental note-based stimulus). These acoustic time-series data were then used to model listeners’ perception of musical phrases both globally and also on a phrase category-by-category basis. In a subsequent analysis, several other spectral parameters (spectral centroid, spectral flux, spectral spread, inharmonicity, and roughness) were assessed as possible additional predictors of phrase perception. Specifically, it was hypothesized that:</p>
<list list-type="order">
<list-item>
<p>Sound-based music and environmental sound do elicit phrase perceptions, and these are primarily associated with temporal changes in acoustic parameters of intensity and timbre.</p>
</list-item>
<list-item>
<p>Temporal change in rhythmic pattern and rhythmic density is a major factor in modeling listeners’ perception of phrases in instrumental note-based music.</p>
</list-item>
<list-item>
<p>Pauses in the continuity of music are important in segmentation of sound-based and note-based music, as they are in speech.</p>
</list-item>
</list>
</sec>
</sec>
<sec id="sec006">
<title>2. Method</title>
<sec id="sec007">
<title>2.1. Participants</title>
<p>Twenty adult psychology students were recruited from the University of Western Sydney (17 females and 3 males;
<italic>M</italic>
= 21.84 years,
<italic>SD</italic>
= 5.42, range = 18–41 years). Three participants had received individual musical training (
<italic>M</italic>
= 5.67 years,
<italic>SD</italic>
= 5.69, range = 1–14 years). All reported normal hearing. This research was conducted according to the principles expressed in the Declaration of Helsinki and approved by Western Sydney University’s Human Research Ethics Committee (approval #H9869). All participants provided written informed consent.</p>
</sec>
<sec id="sec008">
<title>2.2. Stimuli and equipment</title>
<p>Stimuli comprised six pre-recorded excerpts that ranged amongst sound-based music, instrumental note-based music, and environmental sound. The rationale for selection was to contrast sonic material that comprised no obvious instrumentation, a hybrid combination of sound sources, and note-based instrumental music. A brief analytical/compositional description of each stimulus is provided below:</p>
<list list-type="order">
<list-item>
<p>BBC SoundFX CD20 Weather(1) (1989)
<italic>‘Tree Creaking in Strong Wind’</italic>
(2’20”): This excerpt is from a field recording of wind moving through trees (it is not a human 'composition'). The overall characteristics of the excerpt resemble noise, but in a naturalistic context.</p>
</list-item>
<list-item>
<p>Martin Ng and Roger Dean (2000)
<italic>‘LowHz’</italic>
(2’13”): A noise-based piece excerpted from an immersive real-time computer improvisation using audio software MAX/MSP (Cycling 74, San Francisco). A complete version is on compact disc in Dean [
<xref rid="pone.0167643.ref056" ref-type="bibr">56</xref>
].</p>
</list-item>
<list-item>
<p>Brian Eno (1992)
<italic>‘Francisco’</italic>
(2’20”). This piece is primarily ambient in its composition with little obvious human agency or identifiable sound source.</p>
</list-item>
<list-item>
<p>Trevor Wishart (1977)
<italic>‘Red Bird</italic>
,
<italic>a political prisoner’s dream’</italic>
(2’17”): This was excerpted from a recording on UbuWeb of this 45 min piece for tape. It has a strong narrative of hybrid combinations of human and animate sounds.</p>
</list-item>
<list-item>
<p>Ludwig van Beethoven (1804) ‘
<italic>Sonata No</italic>
.
<italic>21 in C major</italic>
,
<italic>Op</italic>
.
<italic>53 Waldstein’</italic>
(the opening 2’26”). This involves a single note-based musical instrument (piano) with obvious human agency and urgent repetitive rhythmic drive.</p>
</list-item>
<list-item>
<p>Iannis Xenakis (1955)
<italic>‘Metastaseis’</italic>
(1’59”). An orchestral piece with multiple instruments and apparent human agency. Although this excerpt involves instruments, it is characterized by multi-instrument sound clusters and large glissandi that make it sound closer to noise-based music than prototypical orchestral note-based music. The excerpt mostly lacks clear repetitive rhythms.</p>
</list-item>
</list>
<p>Practice trials used two additional stimuli: excerpts of the first movement from Mozart’s
<italic>‘Symphony No</italic>
.
<italic>40’</italic>
(1’18”) and of Xenakis’s electro-acoustic piece
<italic>‘Orient-Occident’</italic>
(1’29”). Each stimulus excerpt in the experiment was presented as an .aiff stereo 16-bit audio file with a 44.1kHz sampling rate. Pieces 1–6 were always presented in the sequence listed above to achieve a gradation from sound-based items to instrumental music and then from piano to orchestra, as well as to establish a positive gradient of the extent of human agency likely perceived in the performances. This was designed to minimize a continuing focus on roles of obvious human agency or physical source origins in the sound-based test stimuli. Given the relatively long duration of the stimuli and the considerable familiarization that would likely occur across the time-course of the experiment, a conventional counterbalanced order would not have homogenized the responses. The experiment was conducted in a sound-attenuated booth and stimuli were presented binaurally through Sennheiser HD25 headphones. An Apple MacBook Pro laptop computer (System 10.6.2) using a custom-written program in MAX/MSP (Cycling 74, San Francisco) software displayed on-screen instructions and response buttons, presented stimuli, and continuously recorded data.</p>
</sec>
<sec id="sec009">
<title>2.3. Procedure</title>
<p>Participants first read an experiment information sheet, gave written informed consent, completed a brief demographic questionnaire, and then received standardised instructions regarding the experiment. Participation was divided into two tasks per stimulus: (1) a continuous phrase-detection task; and (2) an intermittent phrase-detection task including qualitative feedback. The continuous phrase-detection task required participants to press the ‘space-bar’ on the computer keyboard each time they perceived the end of a phrase. As the sample mainly comprised non-musicians, we did not wish to assume that participants would understand the concept of a musical 'phrase'. Therefore, we instructed participants to respond to ‘events’ rather than ‘musical phrases’, but used examples from speech to define what was meant by an ‘event’ in the experiment. Specifically, the instruction was: “…when listening to each excerpt of music, indicate the times throughout the music where you perceive the end of an event”. We elaborated further on how to conceptualize an event: “You can think of an event to be a little bit like a sentence or clause in day-to-day speech, but in this experiment it is in the context of non-speech auditory material. For example, events have a beginning, they occur over different durations of around five seconds or even longer, will comprise pieces of auditory information that may vary in different ways throughout the event, and of course, will have an ending.” These examples were intended to clarify that potentially perceived events were not to be considered in extremely short time-frames, such as individually sounded notes in note-based music, but rather, relatively longer time-frames that in musicological terms coincide more closely with musical ‘phrases’ than musical notes.</p>
<p>When each response was made in the continuous phrase-detection task, the stimulus continued to play until completion, hence the use of the label ‘continuous’ phrase-detection. Each response was time-stamped and constituted the end of one perceived phrase. Thus, the beginning of the next phrase was time-marked as 1 ms after each button-press response. A 1kHz pure tone was presented within each stimulus at a set time between 10 and 20 s after stimulus onset (randomly determined for each stimulus for all participants before commencing the experiment). Participants were required to make their first button-press response when they heard the pure tone. For analytical purposes, this first response marked the beginning of the first phrase, the end of which was determined by the participant’s next response. The duration between stimulus onset and the pure tone was designed to allow enough time for participants to become aware of the style and content of each stimulus before making their responses. This ‘orientation’ period was important because the stimulus set comprised varied unfamiliar musical genres and complexities.</p>
<p>Once participants completed the continuous phrase-detection task for one stimulus, the same stimulus was then repeated in the following trial, but with an intermittent phrase-detection task. In this second task, the participant made a button-press response indicating they had perceived the end of a phrase, upon which the music stopped, a text-box appeared on the computer monitor, and the participant was instructed to: “please use the keyboard to describe in a sentence or two: what was happening in the music that made you perceive the end of an event? In other words, try your best to write out the reasons why you made your response at that particular point in the music.” Once the participant had made their qualitative response, they clicked a ‘continue’ button and the music continued playing from the point at which it had stopped. The resulting data thus included a series of time-stamped responses for each stimulus combined with qualitative data describing the characteristics of each response.</p>
<p>We expected there to be familiarisation during the first task (which might enhance sensitivity to phrase detection), but we also expected the more extensive demands of the second task to restrict the number of phrases a participant would register, whether because of a wish to make the task as easy as possible or because of the need for self-justification of each decision. Thus we had no predictions as to the relative number of responses in first versus second tasks, but we did expect responses in the second task to mostly reflect those that occurred during the first. Because ‘task’ was not intended as an independent variable in our experiments, we merely present simple descriptive statistics relating performance between both tasks (see
<xref ref-type="sec" rid="sec011">Results</xref>
below).</p>
<p>Once the continuous and intermittent event-detection tasks were completed for one stimulus, the next stimulus was presented and the two tasks were completed again, and so on until all six stimuli had been presented. Therefore, each stimulus was presented twice in the experiment for a total of 12 trials (not including practice), with two tasks completed for each stimulus. For statistical modeling presented below, only responses from the second task are presented because they contain the onset/offset time-stamps for each perceived phrase in addition to qualitative data describing each response. Overall, the experiment took approximately one hour to complete.</p>
</sec>
<sec id="sec010">
<title>2.4. Acoustic analyses and statistical approach</title>
<p>Three acoustic measures relevant for the three main features of intensity, timbre, and rhythm (in the case of the Beethoven
<italic>Waldstein Piano Sonata</italic>
) were obtained by means of acoustic analysis. First, the intensity (dB SPL; sound pressure level measured in decibels) and spectral flatness (Wiener entropy) profiles of each stimulus were obtained using Praat (version 5.3.23) at a 2Hz sampling rate. These measures have been detailed previously [
<xref rid="pone.0167643.ref016" ref-type="bibr">16</xref>
,
<xref rid="pone.0167643.ref019" ref-type="bibr">19</xref>
]. For rhythm in the
<italic>Waldstein Piano Sonata</italic>
, the number of acoustic note onsets was calculated and summed within 500 ms windows to give an indication of the change in rhythmic density (or sparsity) that occurs (this was achieved with the aid of Sonic Visualiser, but careful adjustments and additions to the events identified by the several note onset detection algorithms were required). Five additional spectral parameters related to timbre, as explained above, were also measured for use in more elaborate statistical models of perceived phrasing. These were calculated using MAX/MSP (Cycling 74, San Francisco) with the zsa package (spectral centroid, flux, spread), the CNMAT (University of California, Berkeley) patch for roughness, and Alex Harker's package for inharmonicity.</p>
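<p>To make the spectral flatness measure concrete, the following minimal sketch in R computes the flatness of a single audio frame as the ratio of the geometric to the arithmetic mean of its power spectrum. This follows the standard definition rather than Praat’s exact implementation, and the window and frame length are illustrative assumptions only.</p>
<preformat>
# Minimal sketch (illustrative parameters; not Praat's implementation):
# spectral flatness of one audio frame from its power spectrum.
spectral_flatness <- function(frame) {
  n <- length(frame)
  hann <- 0.5 * (1 - cos(2 * pi * (seq_len(n) - 1) / (n - 1)))  # Hann window
  power <- Mod(fft(frame * hann))^2
  power <- power[2:(n %/% 2)]          # positive frequencies, excluding DC
  power <- power[power > 0]            # guard against log(0)
  exp(mean(log(power))) / mean(power)  # ~1 = noise-like, ~0 = tonal;
                                       # the log of this ratio is often reported as Wiener entropy
}
# e.g., one 0.5 s frame of a 44.1 kHz signal corresponds to the study's 2 Hz analysis rate
</preformat>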
<p>From each such continuous parameter, three unitary predictors were derived for each phrase under analysis, consistent with the recommendations in [
<xref rid="pone.0167643.ref053" ref-type="bibr">53</xref>
]. Note that phrases and corresponding predictors remained defined on an individual participant basis; in other words, phrase start and end times were not averaged across participants. The three derived predictors were: the mean value, the mean of the differenced values across the phrase (a representation of the overall magnitude and direction of rate of change), and the mean of the absolute values of that differenced series (a representation of the variability of the rate of change). For the single case in which rhythm was strong (the Beethoven
<italic>Waldstein Piano Sonata</italic>
), we treated the rhythmic density series in the same way as the other predictors. Thus, there were three potential predictors derived for each of the features of intensity, timbre, and rhythm. For Beethoven’s
<italic>Waldstein Piano Sonata</italic>
, we also considered 'gaps' (or extreme sparsity) in the note stream as predictors of perceived phrases. For convenience, we refer to these factors generically below as 'acoustic' features, even though the rhythmic feature is arguably musical.</p>
<p>For each stimulus, the overall perceived phrase timing series represent a 'clustered recurrent event process' [
<xref rid="pone.0167643.ref057" ref-type="bibr">57</xref>
]. In essence, this means that across participants the series of times characterising phrases are not simply randomly dispersed in time (as in a Poisson process), but clustered around particular successive times (as can be seen in the
<xref ref-type="sec" rid="sec011">results</xref>
). Therefore, the influence of the three main acoustic features on participants’ perception of phrasing can be modelled using variants of ‘survival analysis’ based on Cox hazard modeling [
<xref rid="pone.0167643.ref058" ref-type="bibr">58</xref>
]. Survival analysis is typically used to model the expected amount of time until a single event (e.g., death of an organism) or a series of events occurs, most commonly in biological or mechanical systems. Recurrent events may be considered as either failures as a result of 'frailty' (such as succumbing to an infection) or successes as a result of 'achievement' (such as, here, identifying that a phrase has ended). The fact that each individual participant has repeated ‘successes’ through successive phrase identifications means that each response cannot be assumed to be independent. We allow for this by using a 'cluster' procedure that provides robust standard errors for overall models.</p>
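<p>A minimal sketch of this setup, assuming a hypothetical per-phrase data frame rather than the authors’ actual scripts, is given below: recurrent phrase events are laid out in counting-process form and fitted with robust, participant-clustered standard errors using the coxph function of the R ‘survival’ package.</p>
<preformat>
# Minimal sketch: recurrent phrase events with robust clustered standard errors.
library(survival)
# Hypothetical data frame 'd': one row per perceived phrase, with participant
# id, phrase start/stop times (s), event = 1 (phrase end observed), and the
# per-phrase acoustic predictors (e.g., Mintens, Mspecf).
fit <- coxph(Surv(start, stop, event) ~ Mintens + Mspecf + cluster(id), data = d)
summary(fit)  # reports naive and robust (cluster-adjusted) standard errors
</preformat>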
<p>The original Cox procedure [
<xref rid="pone.0167643.ref058" ref-type="bibr">58</xref>
] utilised 'proportional hazards': the effect of each ordinal or continuous predictor on the hazard (i.e., the instantaneous 'risk' of the event occurring) was assumed to be multiplicative and constant over time, so that a given change in a predictor shifts the hazard away from its baseline value by a fixed proportion. We use developments of the model that permit time-dependent coefficients upon these predictors by means of the 'Survival' package in
<italic>R</italic>
(version 3.2.2) with algorithms coded in the
<italic>RStudio</italic>
development environment (version 0.99.491), mainly involving the core ‘coxph’ function.</p>
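<p>Continuing the hypothetical data frame above, a hedged sketch of the time-transform mechanism follows: the ‘tt’ argument of coxph supplies the x*log(t) (or x*t) transform, and the model then estimates the coefficient of the resulting derived predictor.</p>
<preformat>
# Minimal sketch (assumed variable names): a time-transformed predictor, x*log(t).
fit_tt <- coxph(Surv(start, stop, event) ~ Mintens + Mspecf + tt(Mspecf) + cluster(id),
                data = d,
                tt = function(x, t, ...) x * log(t))
# The coefficient on tt(Mspecf) indicates how the effect of mean spectral
# flatness on the hazard of a phrase ending varies with time.
</preformat>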
<p>Modeling was undertaken in a defined procedure. For the initial models of all perceived phrases, allowed predictors were the three summary forms (defined above) of the intensity and spectral flatness time series, and the rhythmic density series for Beethoven’s
<italic>Waldstein Piano Sonata</italic>
stimulus. Up to two time-transformed predictors (abbreviated ‘tt’ in the Tables) were allowed, either a linear x*t or a logarithmic x*log(t) transform: no attempt was made to optimise these further. The transform simply multiplies the predictor variable ‘x’ by the time function, and the model optimises the coefficient of the resulting derived predictor. The predictors to be tested as time transforms were chosen as those with the largest rho parameter in the Survival package cox.zph function, which are indicative of the largest variations with time. Models were selected for parsimony by removing all variables that were not individually significant, unless the removal caused a total failure of the model (two cases only, displayed with an asterisk in Table 4 of the
<xref ref-type="sec" rid="sec011">Results</xref>
). With the exception of these two cases, this also resulted in optimising the models’ Bayesian Information Criterion (BIC), a stringent assessment of parsimony that strongly penalises for the addition of predictors.</p>
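<p>The selection steps just described can be sketched as follows (again with hypothetical names, not the authors’ scripts): cox.zph indicates which predictors vary most with time and are therefore candidates for time transforms, non-significant predictors are dropped, and BIC is compared across the resulting models.</p>
<preformat>
# Minimal sketch: time-transform candidates and parsimony checks.
zp <- cox.zph(fit)   # per-predictor tests of proportional hazards; predictors
print(zp)            # departing most from proportionality are tt() candidates
fit2 <- coxph(Surv(start, stop, event) ~ Mspecf + cluster(id), data = d)  # a reduced model
# BIC computed by hand from the partial likelihood; using the number of
# observations (m$n) as the sample size is one common convention.
bic <- function(m) {
  ll <- logLik(m)
  -2 * as.numeric(ll) + attr(ll, "df") * log(m$n)
}
c(full = bic(fit), reduced = bic(fit2))   # prefer the more parsimonious (lower BIC) model
</preformat>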
<p>For separate models of perceived phrases categorised by timbre, intensity, or rhythm, exactly the same procedure was used. Note there is a limitation that the resultant time series do not contain all the data, and hence may not be fully representative of the original series, some of whose autocorrelations may be disrupted. Without time-dependent transforms, the models described so far commonly showed modest
<italic>R</italic>
<sup>2</sup>
values with fits of between .2 and .3, but these were often improved with time-dependence (
<italic>R</italic>
<sup>2</sup>
values between .35 and .5).</p>
<p>In the final analyses, we assessed the possible roles of five additional spectral predictors in the whole-phrase perception models. These predictors were also used in the three forms just described (mean, mean change, and mean absolute change on a phrase-by-phrase and individual-by-individual basis). This resulted in models with up to 23 predictors. We limited the search path required to select amongst the possible models for parsimony by means of additional constraints: the starting model was always taken as the simpler model defined earlier (with only spectral flatness amongst the considered timbral predictors), plus the 15 new spectral predictors. This was refined by removal of individually non-significant predictors, after which the alternative time transforms were assessed. Finally, alternative interaction parameters were also considered on the basis of the most substantial predictors. Note that the possible value ranges of the spectral predictors varied and hence their impact in the model depended not only on their coefficients but also on these ranges.</p>
</sec>
</sec>
<sec id="sec011">
<title>3. Results</title>
<sec id="sec012">
<title>3.1. Descriptions of perceived musical phrases</title>
<p>It was clear that every participant could readily detect phrases in all stimulus items, though mean phrase duration varied across stimuli, as expected. Furthermore, these perceptions were coherent across participants (as revealed by the clustering in
<xref ref-type="fig" rid="pone.0167643.g001">Fig 1</xref>
). The mean duration of perceived phrases across all pieces and participants was 15.05 seconds (as judged at the sampling rate of
<xref ref-type="fig" rid="pone.0167643.g001">Fig 1</xref>
, 2Hz). There were fewest phrases perceived in the BBC
<italic>‘Wind’</italic>
environmental sound stimulus (appropriately, given its lack of compositional design) and most in the Wishart
<italic>‘Red Bird’</italic>
and Beethoven
<italic>‘Waldstein’</italic>
stimuli, which can readily be viewed as the most variegated of the pieces. The experiment was not designed to generate consistency between frequency of responses in the first ‘continuous phrase-detection’ task and the second more demanding and informative ‘intermittent phrase-detection’ task, but we conducted a descriptive analysis of the similarity of phrase identification responses between both tasks (based on frequency of occurrence). Due to the different requirements of the two tasks and an increase in familiarity (by design) from prior exposure in Task 1, we did not expect or require a high level of agreement in the frequency of phrase detection between responses in Task 1 and Task 2 for each stimulus. However, the frequency of responses in Task 1 and Task 2 differed by less than 7% when considered relative to the total number of responses for each of the Ng and Dean
<italic>‘LowHz’</italic>
, Wishart
<italic>‘Red Bird’</italic>
, Eno
<italic>‘Francisco’</italic>
, and Xenakis
<italic>‘Metastaseis’</italic>
stimuli. The frequency of phrase detection responses in Task 2 for the Beethoven
<italic>‘Waldstein’</italic>
stimulus was 12.10% lower than the frequency in Task 1, and 19.34% lower in Task 2 relative to Task 1 for the BBC
<italic>‘Wind’</italic>
environmental sound stimulus. For the remaining analyses, we consider only the phrase detection data from the second ‘intermittent phrase-detection’ and qualitative description task.</p>
<fig id="pone.0167643.g001" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0167643.g001</object-id>
<label>Fig 1</label>
<caption>
<title>Time-stamped phrase responses and acoustic time-series data.</title>
<p>Displays all time-stamped responses assigned to timbre, intensity, or rhythm categories across the entire time-course of all six stimuli. Acoustic categories were based on thematic analyses of participants’ qualitative descriptions of each perceived phrase. The diamond-shaped markers on the three horizontal scales in each panel signify these data and specifically, the point at which each phrase was perceived to have ended. The ‘×’ symbol signifies the moment in each stimulus where a 1kHz pure tone was presented. The pure-tone indicated to participants that they were to begin responding and was used to mark the beginning of the first perceived phrase. Time-series data of acoustic intensity (solid line; dB SPL on left y-axis) and spectral flatness (dashed line; Wiener entropy on right y-axis) are also plotted for each stimulus. All perceptual and acoustic data are presented at a sampling rate of 2Hz.</p>
</caption>
<graphic xlink:href="pone.0167643.g001"></graphic>
</fig>
<p>
<xref ref-type="fig" rid="pone.0167643.g001">Fig 1</xref>
displays six panels of time-series acoustic data in the form of intensity profiles (solid line; dB SPL on left y-axes) and spectral flatness (dashed line; Wiener entropy on right y-axes) for each stimulus. Furthermore, diamond-shaped markers on the three horizontal scales in each panel indicate all participants’ perceived phrase responses (quantized to 2Hz) across the time-course of each stimulus. From the process of thematic qualitative analysis, these responses were placed by the authors into categories of intensity, timbre, and rhythm, labeled as such in each of the three y-axis titles per panel. All qualitative descriptions of perceived phrases are presented in
<xref ref-type="supplementary-material" rid="pone.0167643.s002">S1 Appendix</xref>
. The number of responses in each category for each stimulus is presented in
<xref ref-type="table" rid="pone.0167643.t001">Table 1</xref>
.</p>
<table-wrap id="pone.0167643.t001" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0167643.t001</object-id>
<label>Table 1</label>
<caption>
<title>Number of perceived phrases assigned to intensity, timbre, or rhythm categories.</title>
</caption>
<alternatives>
<graphic id="pone.0167643.t001g" xlink:href="pone.0167643.t001"></graphic>
<table frame="hsides" rules="groups">
<colgroup span="1">
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
</colgroup>
<thead>
<tr>
<th align="left" rowspan="2" colspan="1">Stimulus</th>
<th align="left" rowspan="2" colspan="1">Stimulus Description</th>
<th align="center" colspan="4" rowspan="1">Assigned Acoustic Category</th>
</tr>
<tr>
<th align="center" rowspan="1" colspan="1">Timbre</th>
<th align="center" rowspan="1" colspan="1">Intensity</th>
<th align="center" rowspan="1" colspan="1">Rhythm</th>
<th align="center" rowspan="1" colspan="1">Total</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">BBC SoundFX:
<italic>Wind</italic>
</td>
<td align="left" rowspan="1" colspan="1">Environmental sound</td>
<td align="center" rowspan="1" colspan="1">24</td>
<td align="center" rowspan="1" colspan="1">68</td>
<td align="center" rowspan="1" colspan="1">2</td>
<td align="center" rowspan="1" colspan="1">94</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Ng & Dean:
<italic>LowHz</italic>
</td>
<td align="left" rowspan="1" colspan="1">Noise sound-based piece</td>
<td align="center" rowspan="1" colspan="1">68</td>
<td align="center" rowspan="1" colspan="1">25</td>
<td align="center" rowspan="1" colspan="1">7</td>
<td align="center" rowspan="1" colspan="1">100</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Eno:
<italic>Francisco</italic>
</td>
<td align="left" rowspan="1" colspan="1">Ambient sound-based piece</td>
<td align="center" rowspan="1" colspan="1">95</td>
<td align="center" rowspan="1" colspan="1">8</td>
<td align="center" rowspan="1" colspan="1">2</td>
<td align="center" rowspan="1" colspan="1">105</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Wishart:
<italic>Red Bird</italic>
</td>
<td align="left" rowspan="1" colspan="1">Hybrid sound-based piece</td>
<td align="center" rowspan="1" colspan="1">123</td>
<td align="center" rowspan="1" colspan="1">22</td>
<td align="center" rowspan="1" colspan="1">1</td>
<td align="center" rowspan="1" colspan="1">146</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Beethoven:
<italic>Waldstein</italic>
</td>
<td align="left" rowspan="1" colspan="1">Instrumental note-based piece</td>
<td align="center" rowspan="1" colspan="1">32</td>
<td align="center" rowspan="1" colspan="1">31</td>
<td align="center" rowspan="1" colspan="1">64</td>
<td align="center" rowspan="1" colspan="1">127</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Xenakis:
<italic>Metastaseis</italic>
</td>
<td align="left" rowspan="1" colspan="1">Instrumental sound-based piece</td>
<td align="center" rowspan="1" colspan="1">45</td>
<td align="center" rowspan="1" colspan="1">49</td>
<td align="center" rowspan="1" colspan="1">6</td>
<td align="center" rowspan="1" colspan="1">100</td>
</tr>
</tbody>
</table>
</alternatives>
<table-wrap-foot>
<fn id="t001fn001">
<p>
<bold>Note.</bold>
Categories assigned to each perceived phrase across participants were based on thematic analyses conducted by the authors of participants’ qualitative descriptions of each perceived phrase (see
<xref ref-type="supplementary-material" rid="pone.0167643.s002">S1 Appendix</xref>
).</p>
</fn>
</table-wrap-foot>
</table-wrap>
<p>As can be seen in
<xref ref-type="fig" rid="pone.0167643.g001">Fig 1</xref>
and
<xref ref-type="table" rid="pone.0167643.t001">Table 1</xref>
, the majority of perceived phrases for the BBC
<italic>‘Wind’</italic>
environmental sound stimulus (comprising primarily wind rushing through trees) were associated with intensity. For the Ng and Dean
<italic>‘LowHz’</italic>
, Wishart
<italic>‘Red Bird’</italic>
, and Eno
<italic>‘Francisco’</italic>
stimuli, which comprised sound-based sculpting with little obvious instrumentation or note-based music, timbre was the primary descriptor. Perceived phrases in response to the instrumental yet primarily sound-based Xenakis
<italic>‘Metastaseis’</italic>
stimulus were almost equally categorized by timbre and intensity. However, rhythm was the main descriptor for the single-instrument note-based Beethoven
<italic>‘Waldstein’</italic>
stimulus (which has relatively narrow timbral range, being the sound of a single instrument). These descriptive results support the study’s hypotheses that phrases can be perceived in environmental sound and sound-based music, and that acoustic factors are recognized by participants. Uncategorizable phrase descriptions (placed in an 'Other' category) were infrequently observed, the exception being for the Beethoven
<italic>‘Waldstein’</italic>
stimulus, with 51 qualitative responses falling into the ‘Other’ category.</p>
<p>As can be seen in the tables in
<xref ref-type="supplementary-material" rid="pone.0167643.s002">S1 Appendix</xref>
, a large proportion of the 51 qualitative responses placed in the ‘Other’ category for the Beethoven
<italic>‘Waldstein’</italic>
stimulus were responses of ‘no comment’ (39.22%). Nevertheless, we conducted a follow-up analysis investigating whether the inclusion of the ‘Other’ category in models of phrase perception in response to the Beethoven
<italic>‘Waldstein’</italic>
stimulus altered the main models presented in the Results. Not surprisingly, models were worse when responses from the ‘Other’ category were included in the analysis, relative to the models that only included responses from ‘Intensity’, ‘Timbre’, and ‘Rhythm’ categories; however, the effective predictors were hardly changed. We next turn to statistical models designed to determine the relative contributions of intensity, timbre, and rhythm for phrase perception.</p>
</sec>
<sec id="sec013">
<title>3.2. Modeling perceived musical phrases: A ‘whole-phrase’ global approach</title>
<p>As indicated, our first modeling purpose was to assess whether the hypothesised influence of intensity, timbre, and rhythmic change on phrase perceptions was plausible and significant (rather than solely to derive the best model of the clustered event processes).
<xref ref-type="table" rid="pone.0167643.t002">Table 2</xref>
shows a summary of the models obtained for each piece on the basis of the procedures described above and for all perceived phrases.
<xref ref-type="table" rid="pone.0167643.t003">Table 3</xref>
summarizes all predictors used and
<xref ref-type="table" rid="pone.0167643.t004">Table 4</xref>
provides the specific predictor coefficients required in each model of the whole-phrase perceptions. The first five stimuli are sound-based (one environmental sound stimulus and four sound-based music stimuli), since they contain few overt rhythmic phrases as judged by our participants (see
<xref ref-type="fig" rid="pone.0167643.g001">Fig 1</xref>
); for these stimuli we do not attempt to define a rhythmic predictor series. The sixth piece, Beethoven’s
<italic>‘Waldstein’</italic>
sonata contrasts in being highly rhythmic, again as perceived by participants, hence we consider this stimulus to be note-based and we use rhythmic predictor time-series data in the modeling. Pitch (a perceptual feature) is subsumed in the acoustic spectral flatness measure used throughout, but addressed more closely in the later models adding spectral centroid amongst the acoustic timbral predictors. Throughout the remainder of the paper, we discuss results in terms of these sound-based and note-based groupings.</p>
<table-wrap id="pone.0167643.t002" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0167643.t002</object-id>
<label>Table 2</label>
<caption>
<title>Summary of selected Cox hazard models of perceived phrases using the whole-phrase ‘global’ approach.</title>
</caption>
<alternatives>
<graphic id="pone.0167643.t002g" xlink:href="pone.0167643.t002"></graphic>
<table frame="hsides" rules="groups">
<colgroup span="1">
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
</colgroup>
<thead>
<tr>
<th align="left" rowspan="1" colspan="1">Stimulus</th>
<th align="center" rowspan="1" colspan="1">
<italic>R</italic>
<sup>2</sup>
</th>
<th align="center" rowspan="1" colspan="1">Likelihood Ratio</th>
<th align="center" rowspan="1" colspan="1">Model
<italic>p-</italic>
value</th>
<th align="center" rowspan="1" colspan="1">Robust Model
<italic>p-</italic>
value</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">BBC SoundFX:
<italic>Wind</italic>
</td>
<td align="char" char="." rowspan="1" colspan="1">.51</td>
<td align="char" char="." rowspan="1" colspan="1">67.07</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
<td align="char" char="." rowspan="1" colspan="1">.002</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Ng & Dean:
<italic>LowHz</italic>
</td>
<td align="char" char="." rowspan="1" colspan="1">.42</td>
<td align="char" char="." rowspan="1" colspan="1">54.40</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
<td align="char" char="." rowspan="1" colspan="1">.019</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Eno:
<italic>Francisco</italic>
</td>
<td align="char" char="." rowspan="1" colspan="1">.31</td>
<td align="char" char="." rowspan="1" colspan="1">37.94</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
<td align="char" char="." rowspan="1" colspan="1">.003</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Wishart:
<italic>Red Bird</italic>
</td>
<td align="char" char="." rowspan="1" colspan="1">.38</td>
<td align="char" char="." rowspan="1" colspan="1">69.70</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
<td align="char" char="." rowspan="1" colspan="1">.004</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Beethoven:
<italic>Waldstein</italic>
</td>
<td align="char" char="." rowspan="1" colspan="1">.30</td>
<td align="char" char="." rowspan="1" colspan="1">45.32</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
<td align="char" char="." rowspan="1" colspan="1">.023</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Xenakis:
<italic>Metastaseis</italic>
</td>
<td align="char" char="." rowspan="1" colspan="1">.56</td>
<td align="char" char="." rowspan="1" colspan="1">82.30</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
<td align="char" char="." rowspan="1" colspan="1">.060</td>
</tr>
</tbody>
</table>
</alternatives>
<table-wrap-foot>
<fn id="t002fn001">
<p>
<bold>Note.</bold>
‘Model
<italic>p-</italic>
value’ refers to the
<italic>p-</italic>
value based on the Likelihood Ratio and assumes independence of observations within a cluster (i.e., a group of successive phrase perceptions by an individual listening to a particular piece). The more conservative ‘Robust Model
<italic>p</italic>
-values’ do not assume such independence. The time transform used in all these models was x*log(t).</p>
</fn>
</table-wrap-foot>
</table-wrap>
<table-wrap id="pone.0167643.t003" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0167643.t003</object-id>
<label>Table 3</label>
<caption>
<title>Description of predictors used in Cox hazard models of perceived phrases.</title>
</caption>
<alternatives>
<graphic id="pone.0167643.t003g" xlink:href="pone.0167643.t003"></graphic>
<table frame="hsides" rules="groups">
<colgroup span="1">
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
</colgroup>
<thead>
<tr>
<th align="left" rowspan="1" colspan="1">Parent Predictor</th>
<th align="left" rowspan="1" colspan="1">Description and Potential Range of Values</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
intens</td>
<td align="left" rowspan="1" colspan="1">Mean acoustic intensity (dB SPL)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
specf</td>
<td align="left" rowspan="1" colspan="1">Mean spectral flatness (Wiener Entropy)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
rhythdens</td>
<td align="left" rowspan="1" colspan="1">Mean rhythmic density (number of onset events per 500 ms)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
spars</td>
<td align="left" rowspan="1" colspan="1">Mean sparsity (reciprocal of the number of onset events per 500 ms)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
specc</td>
<td align="left" rowspan="1" colspan="1">Mean spectral centroid</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
specsp</td>
<td align="left" rowspan="1" colspan="1">Mean spectral spread</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
specflx</td>
<td align="left" rowspan="1" colspan="1">Mean spectral flux</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
rough</td>
<td align="left" rowspan="1" colspan="1">Mean roughness</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
inharm</td>
<td align="left" rowspan="1" colspan="1">Mean inharmonicity</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">tt(x)</td>
<td align="left" rowspan="1" colspan="1">Time transform of the predictor (x)</td>
</tr>
</tbody>
</table>
</alternatives>
<table-wrap-foot>
<fn id="t003fn001">
<p>
<bold>Note.</bold>
Models included additional versions of each ‘parent’ predictor above: mean change (
<italic>M</italic>
chg) and mean absolute change (
<italic>M</italic>
abschg).</p>
</fn>
</table-wrap-foot>
</table-wrap>
<table-wrap id="pone.0167643.t004" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0167643.t004</object-id>
<label>Table 4</label>
<caption>
<title>Predictors in the selected Cox hazard models of perceived phrases using the ‘whole-phrase’ global approach.</title>
</caption>
<alternatives>
<graphic id="pone.0167643.t004g" xlink:href="pone.0167643.t004"></graphic>
<table frame="hsides" rules="groups">
<colgroup span="1">
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
</colgroup>
<thead>
<tr>
<th align="left" rowspan="1" colspan="1">Stimulus</th>
<th align="left" rowspan="1" colspan="1">Predictor</th>
<th align="center" rowspan="1" colspan="1">Coefficient</th>
<th align="center" rowspan="1" colspan="1">Robust
<italic>SE</italic>
of Coefficient</th>
<th align="center" rowspan="1" colspan="1">Coefficient
<italic>p-</italic>
value</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="4" colspan="1">BBC SoundFX</td>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
chg specf</td>
<td align="char" char="." rowspan="1" colspan="1">34.56</td>
<td align="char" char="." rowspan="1" colspan="1">9.49</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
intens</td>
<td align="char" char="." rowspan="1" colspan="1">16.32</td>
<td align="char" char="." rowspan="1" colspan="1">4.82</td>
<td align="char" char="." rowspan="1" colspan="1">.001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
intens:
<italic>M</italic>
specf</td>
<td align="char" char="." rowspan="1" colspan="1">2.93</td>
<td align="char" char="." rowspan="1" colspan="1">.89</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">tt(
<italic>M</italic>
specf)</td>
<td align="char" char="." rowspan="1" colspan="1">-15.74</td>
<td align="char" char="." rowspan="1" colspan="1">4.82</td>
<td align="char" char="." rowspan="1" colspan="1">.001</td>
</tr>
<tr>
<td align="left" rowspan="6" colspan="1">Ng & Dean</td>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
abschg specf</td>
<td align="char" char="." rowspan="1" colspan="1">-221.10</td>
<td align="char" char="." rowspan="1" colspan="1">83.20</td>
<td align="char" char="." rowspan="1" colspan="1">< .01</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
intens</td>
<td align="char" char="." rowspan="1" colspan="1">-1.36</td>
<td align="char" char="." rowspan="1" colspan="1">.26</td>
<td align="char" char="." rowspan="1" colspan="1">< .01</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
abschg intens</td>
<td align="char" char="." rowspan="1" colspan="1">54.60</td>
<td align="char" char="." rowspan="1" colspan="1">21.70</td>
<td align="char" char="." rowspan="1" colspan="1">.011</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
intens:
<italic>M</italic>
specf *</td>
<td align="char" char="." rowspan="1" colspan="1">.02</td>
<td align="char" char="." rowspan="1" colspan="1">.02</td>
<td align="char" char="." rowspan="1" colspan="1">>.05</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">tt(
<italic>M</italic>
abschg specf)</td>
<td align="char" char="." rowspan="1" colspan="1">21.00</td>
<td align="char" char="." rowspan="1" colspan="1">7.78</td>
<td align="char" char="." rowspan="1" colspan="1">< .01</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">tt(
<italic>M</italic>
abschg intens)</td>
<td align="char" char="." rowspan="1" colspan="1">-4.91</td>
<td align="char" char="." rowspan="1" colspan="1">2.04</td>
<td align="char" char="." rowspan="1" colspan="1">< .05</td>
</tr>
<tr>
<td align="left" rowspan="6" colspan="1">Wishart</td>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
specf</td>
<td align="char" char="." rowspan="1" colspan="1">16.61</td>
<td align="char" char="." rowspan="1" colspan="1">4.82</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
abschg specf</td>
<td align="char" char="." rowspan="1" colspan="1">1.33</td>
<td align="char" char="." rowspan="1" colspan="1">.40</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
chg intens</td>
<td align="char" char="." rowspan="1" colspan="1">8.83</td>
<td align="char" char="." rowspan="1" colspan="1">2.45</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
specf:
<italic>M</italic>
intens</td>
<td align="char" char="." rowspan="1" colspan="1">-.03</td>
<td align="char" char="." rowspan="1" colspan="1">.01</td>
<td align="char" char="." rowspan="1" colspan="1">.001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">tt(
<italic>M</italic>
specf)*</td>
<td align="char" char="." rowspan="1" colspan="1">-1.03</td>
<td align="char" char="." rowspan="1" colspan="1">.53</td>
<td align="char" char="." rowspan="1" colspan="1">.051</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">tt(
<italic>M</italic>
intens)</td>
<td align="char" char="." rowspan="1" colspan="1">-.71</td>
<td align="char" char="." rowspan="1" colspan="1">.01</td>
<td align="char" char="." rowspan="1" colspan="1">.001</td>
</tr>
<tr>
<td align="left" rowspan="4" colspan="1">Eno</td>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
specf</td>
<td align="char" char="." rowspan="1" colspan="1">-140.30</td>
<td align="char" char="." rowspan="1" colspan="1">25.19</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
intens</td>
<td align="char" char="." rowspan="1" colspan="1">-20.39</td>
<td align="char" char="." rowspan="1" colspan="1">4.46</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">tt(
<italic>M</italic>
specf)</td>
<td align="char" char="." rowspan="1" colspan="1">12.26</td>
<td align="char" char="." rowspan="1" colspan="1">2.24</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">tt(
<italic>M</italic>
intens)</td>
<td align="char" char="." rowspan="1" colspan="1">1.79</td>
<td align="char" char="." rowspan="1" colspan="1">.39</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="9" colspan="1">Xenakis</td>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
specf</td>
<td align="char" char="." rowspan="1" colspan="1">-204.90</td>
<td align="char" char="." rowspan="1" colspan="1">39.24</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
chg specf</td>
<td align="char" char="." rowspan="1" colspan="1">6.10</td>
<td align="char" char="." rowspan="1" colspan="1">1.25</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
intens</td>
<td align="char" char="." rowspan="1" colspan="1">-19.84</td>
<td align="char" char="." rowspan="1" colspan="1">2.16</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
chg intens</td>
<td align="char" char="." rowspan="1" colspan="1">.51</td>
<td align="char" char="." rowspan="1" colspan="1">.10</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
abschg intens</td>
<td align="char" char="." rowspan="1" colspan="1">-.93</td>
<td align="char" char="." rowspan="1" colspan="1">.16</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
specf:
<italic>M</italic>
intens</td>
<td align="char" char="." rowspan="1" colspan="1">-.12</td>
<td align="char" char="." rowspan="1" colspan="1">.02</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
chg specf:
<italic>M</italic>
chg intens</td>
<td align="char" char="." rowspan="1" colspan="1">-1.55</td>
<td align="char" char="." rowspan="1" colspan="1">.32</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">tt(
<italic>M</italic>
specf)</td>
<td align="char" char="." rowspan="1" colspan="1">18.48</td>
<td align="char" char="." rowspan="1" colspan="1">3.50</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">tt(
<italic>M</italic>
intens)</td>
<td align="char" char="." rowspan="1" colspan="1">1.62</td>
<td align="char" char="." rowspan="1" colspan="1">.18</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="7" colspan="1">Beethoven</td>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
chg rhythdens</td>
<td align="char" char="." rowspan="1" colspan="1">1.97</td>
<td align="char" char="." rowspan="1" colspan="1">.68</td>
<td align="char" char="." rowspan="1" colspan="1">< .01</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
abschg rhythdens</td>
<td align="char" char="." rowspan="1" colspan="1">44.10</td>
<td align="char" char="." rowspan="1" colspan="1">8.60</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
spars</td>
<td align="char" char="." rowspan="1" colspan="1">5.61</td>
<td align="char" char="." rowspan="1" colspan="1">1.66</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
chg spars</td>
<td align="char" char="." rowspan="1" colspan="1">11.43</td>
<td align="char" char="." rowspan="1" colspan="1">2.61</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
 abschg spars</td>
<td align="char" char="." rowspan="1" colspan="1">-8.12</td>
<td align="char" char="." rowspan="1" colspan="1">2.81</td>
<td align="char" char="." rowspan="1" colspan="1">< .01</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
specf:
<italic>M</italic>
rhythdens</td>
<td align="char" char="." rowspan="1" colspan="1">-.47</td>
<td align="char" char="." rowspan="1" colspan="1">.22</td>
<td align="char" char="." rowspan="1" colspan="1">< .05</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">tt(
<italic>M</italic>
abschg rhythdens)</td>
<td align="char" char="." rowspan="1" colspan="1">-.06</td>
<td align="char" char="." rowspan="1" colspan="1">.02</td>
<td align="char" char="." rowspan="1" colspan="1">< .01</td>
</tr>
</tbody>
</table>
</alternatives>
<table-wrap-foot>
<fn id="t004fn001">
<p>
<bold>Note.</bold>
Asterisked (*) predictors are individually non-significant but nevertheless required in the selected model. A colon placed between the two predictors denotes an interaction between them;
<italic>p-</italic>
values rounded to three decimal places. The time transform (tt) used in all these models multiplied the predictor by log(t).</p>
</fn>
</table-wrap-foot>
</table-wrap>
<p>For sound-based items, selected models included acoustic predictors based on intensity, spectral flatness, and their interaction (with the sole exception of Eno’s ‘
<italic>Francisco’</italic>
, where no interaction was observed). These results suggest that both intensity and spectral flatness are perceptually important in listeners' processes of identifying musical phrases. Furthermore, the descriptions and model interactions indicate that listeners vary in the degree to which these two acoustic features are considered important at any moment, and they may well conflate the two at times. The selected models included logarithmic time-dependency components (as discussed in
<xref ref-type="sec" rid="sec006">methods</xref>
; abbreviated ‘tt’ in
<xref ref-type="sec" rid="sec011">Results</xref>
Tables) for intensity and spectral flatness.</p>
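<p>To make the ‘tt’ entries in the Results Tables concrete, the following minimal Python sketch shows one common way a logarithmic time-dependency can be represented: when follow-up time is split into short episodes (a counting-process layout), the time-varying effect of a predictor can be approximated by an extra column equal to the predictor multiplied by log(t). The data-frame layout, column names, and values are illustrative assumptions only, not the authors' code.</p>
<preformat>
import numpy as np
import pandas as pd

# Hypothetical counting-process layout: one row per short episode for one listener.
df = pd.DataFrame({
    "start": [0.0, 5.0, 10.0],          # episode start times (s)
    "stop":  [5.0, 10.0, 15.0],         # episode end times (s)
    "event": [0, 1, 0],                 # 1 = a phrase boundary was marked in this episode
    "mean_intensity": [58.1, 63.4, 60.0],
})
# Approximate time transform: predictor x log(t), evaluated at the episode end time.
df["tt_mean_intensity"] = df["mean_intensity"] * np.log(df["stop"])
print(df)
</preformat>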
<p>As expected, the note-based Beethoven
<italic>‘Waldstein’</italic>
stimulus was modeled differently from the five sound-based stimuli. The tempo of the piece is mostly 120 crotchets (1/4 notes) per minute; therefore a ‘rest’ of one crotchet might occupy one 500 ms window (which would then have a rhythm count of zero). As can be seen in
<xref ref-type="table" rid="pone.0167643.t004">Table 4</xref>
, spectral flatness but not intensity was required for the selected model, and parameters specific to the rhythmic structure of the piece were dominant (seemingly replacing the intensity profile), even though the overall model fit was still relatively poor (
<italic>R</italic>
<sup>2</sup>
= .30). When listening to the Beethoven
<italic>‘Waldstein’</italic>
stimulus, one can identify intermittent gaps in the activity of the music, where the generally ongoing quaver or semiquaver (i.e., 1/8 or 1/16 notes) note sequences rest, often at points of cadence. These are analogous to the pauses between spoken sentences or turn-taking between multiple speakers in conversation [
<xref rid="pone.0167643.ref059" ref-type="bibr">59</xref>
]. The role of rhythm in the Beethoven model thus supports the decision to instruct listeners to consider such analogies when completing the experiment.</p>
<p>Given that most models showed both intensity and spectral flatness to be significant predictors, we also fitted models in which the series of perceived phrases categorized as intensity-related or timbre-related were considered separately. Two stimuli were chosen for intensity and timbre: BBC ‘
<italic>Wind’</italic>
(comprising the largest number of intensity phrases) and Wishart’s
<italic>‘Red Bird’</italic>
(comprising the largest number of timbre phrases). A corresponding analysis of rhythmic phrases for the only relevant case, the Beethoven ‘
<italic>Waldstein’</italic>
stimulus, was also made. Suffice it to say that the optimum models were not improved or changed in form when categories of phrase responses were analyzed as separate streams rather than as one continuous stream.</p>
<p>Since the Beethoven
<italic>‘Waldstein’</italic>
model was rather poor, and intensity was not a predictor, we assessed whether rhythmic gaps might have a special role in the perception of its phrases. A binary variable, with a gap coded 1 and ongoing rhythmic events coded 0, was too impoverished a representation to contribute to the model (there were only 14 such gaps, concentrated in four regions among a total of 292 x 500 ms windows). As a result, we developed a 'sparsity' measure, the reciprocal of the number of note or chord onsets per 500 ms sample. All sparsity values derived this way for samples containing events were between 0 and 1, and we arbitrarily re-coded gaps (i.e., time slices with no events) as 2 (instead of infinity).
<xref ref-type="table" rid="pone.0167643.t004">Table 4</xref>
shows that this sparsity measure was a significant predictor in our model of the Beethoven
<italic>‘Waldstein’</italic>
stimulus, alongside the previous parameters specific to the rhythmic structure. This suggests that, as hypothesized, perceptions of sparsity ('pauses' or gaps in activity) may have an impact distinct from perceptions of rhythmic density. These models are intuitively comprehensible and may indicate a form of 'regime switching' (or thresholding) whereby the sparsity component is important in the highly sparse parts, but not otherwise. However, this model goes against several principles of parsimony; for example, it uses the same underlying data stream twice (the note-onset counts underlie both the rhythmic density and sparsity measures). The complete model had an
<italic>R</italic>
<sup>2</sup>
value of .30; without the sparsity measure the
<italic>R</italic>
<sup>2</sup>
value was .18; without absolute counts of rhythmic events, the
<italic>R</italic>
<sup>2</sup>
value was .06. Thus, both rhythmic density and sparsity contributed to hazard modeling (i.e., the likelihood of a perceptual event) of the Beethoven
<italic>‘Waldstein’</italic>
stimulus. Unsuccessful attempts were made to use the log or Box-Cox transform to unify the density and sparsity measures into a single predictor.</p>
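<p>As an illustration, the following minimal Python sketch computes the sparsity measure described above; the onset counts and variable names are illustrative assumptions rather than the authors' actual code.</p>
<preformat>
import numpy as np

def sparsity(onset_counts, gap_value=2.0):
    """Reciprocal of note/chord onsets per 500 ms window; windows with no
    events ('gaps') are re-coded to a finite constant (2) instead of infinity."""
    counts = np.asarray(onset_counts, dtype=float)
    out = np.full_like(counts, gap_value)
    active = counts > 0
    out[active] = 1.0 / counts[active]   # values in (0, 1] for windows containing events
    return out

# Example: four active windows followed by one empty ('gap') window.
print(sparsity([4, 2, 1, 3, 0]))   # -> [0.25, 0.5, 1.0, 0.333..., 2.0]
</preformat>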
</sec>
<sec id="sec014">
<title>3.3. Impact of terminal acoustic features on perceived musical phrases</title>
<p>In further pre-planned analyses, we replaced the 'global' acoustic data obtained from the whole of each phrase with data obtained only over its last five seconds: the ‘terminal’ portion of each perceived phrase. This tested whether the acoustic factors within the terminal portion play particularly important roles for phrase perception, above and beyond the acoustic information across the complete duration of perceived phrases. We hypothesized at the outset that the terminal portion of each phrase would have a relatively strong impact on phrase perception when phrase durations extend beyond five seconds. This temporal window was chosen on the basis of prior evidence from time-series models of several continuously perceived aspects of pieces such as those presented herein (notably, perceived musical ‘change’ and affective arousal and valence) [
<xref rid="pone.0167643.ref016" ref-type="bibr">16</xref>
,
<xref rid="pone.0167643.ref019" ref-type="bibr">19</xref>
<xref rid="pone.0167643.ref022" ref-type="bibr">22</xref>
,
<xref rid="pone.0167643.ref026" ref-type="bibr">26</xref>
<xref rid="pone.0167643.ref028" ref-type="bibr">28</xref>
,
<xref rid="pone.0167643.ref054" ref-type="bibr">54</xref>
]. In such models, predictor lags of up to five seconds and beyond are required for optimal fits. Furthermore, research implementing Markov chain models of musical structure and perception reports similar results [
<xref rid="pone.0167643.ref060" ref-type="bibr">60</xref>
]. Thus we assessed whether using the same acoustic predictors measured solely over the last five seconds of each perceived phrase could result in models that were as good or even better than those obtained from the whole-phrase trajectory. Given that the mean duration of perceived phrases was 15.05 seconds across all participants and pieces, the terminal five seconds of each phrase corresponds to roughly the final third.</p>
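<p>A minimal sketch of the 'terminal portion' restriction follows, assuming the acoustic series are sampled in 500 ms analysis windows (so five seconds corresponds to the last ten windows of a phrase); the index conventions are illustrative assumptions only.</p>
<preformat>
def terminal_slice(phrase_start_idx, phrase_end_idx, terminal_windows=10):
    """Return the window indices covering at most the last 5 s of one phrase
    (the whole phrase if it is shorter than 5 s)."""
    start = max(phrase_start_idx, phrase_end_idx - terminal_windows)
    return range(start, phrase_end_idx)

# Example: a phrase spanning windows 40-69 (15 s) retains only windows 60-69.
print(list(terminal_slice(40, 70)))
</preformat>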
<p>Results from this analysis confirm that a combination of intensity and spectral flatness predictors, together with time dependence, remained in each model of the sound-based stimuli. The precise form of the models changed somewhat, but given the retention of the acoustic predictors, we compare
<italic>R</italic>
<sup>2</sup>
values in
<xref ref-type="table" rid="pone.0167643.t005">Table 5</xref>
between the selected models that include acoustic information over each phrase’s entirety (the ‘global’ approach) with the selected models that include acoustic information solely from the final five-seconds of each phrase (the ‘terminal’ approach). As can be seen in
<xref ref-type="table" rid="pone.0167643.t005">Table 5</xref>
, in half the cases acoustic information over the final five seconds of a phrase provided a model that was predictively slightly better than those using the whole-phrase acoustic data. In the other half, the models were slightly inferior. As above for global models of the Beethoven stimulus, rhythm and sparsity predictors remained significant for its 'terminal' model, together with spectral flatness. The results suggest that the terminal portion may sometimes have particular influence, but improvements due to the terminal portion of acoustic information were not uniform.</p>
<table-wrap id="pone.0167643.t005" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0167643.t005</object-id>
<label>Table 5</label>
<caption>
<title>Comparison of
<italic>R</italic>
<sup>2</sup>
values between whole-phrase ‘global’ models and ‘terminal portion’ models.</title>
</caption>
<alternatives>
<graphic id="pone.0167643.t005g" xlink:href="pone.0167643.t005"></graphic>
<table frame="hsides" rules="groups">
<colgroup span="1">
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
</colgroup>
<thead>
<tr>
<th align="left" rowspan="1" colspan="1">Stimulus</th>
<th align="center" rowspan="1" colspan="1">Whole-Phrase ‘Global’ Models</th>
<th align="center" rowspan="1" colspan="1">‘Terminal Portion’ Models</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">BBC SoundFX:
<italic>Wind</italic>
</td>
<td align="char" char="." rowspan="1" colspan="1">.51</td>
<td align="char" char="." rowspan="1" colspan="1">.52</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Ng & Dean:
<italic>LowHz</italic>
</td>
<td align="char" char="." rowspan="1" colspan="1">.42</td>
<td align="char" char="." rowspan="1" colspan="1">.52</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Eno:
<italic>Francisco</italic>
</td>
<td align="char" char="." rowspan="1" colspan="1">.31</td>
<td align="char" char="." rowspan="1" colspan="1">.28</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Wishart:
<italic>Red Bird</italic>
</td>
<td align="char" char="." rowspan="1" colspan="1">.38</td>
<td align="char" char="." rowspan="1" colspan="1">.42</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Beethoven:
<italic>Waldstein</italic>
</td>
<td align="char" char="." rowspan="1" colspan="1">.30</td>
<td align="char" char="." rowspan="1" colspan="1">.28</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Xenakis:
<italic>Metastaseis</italic>
</td>
<td align="char" char="." rowspan="1" colspan="1">.56</td>
<td align="char" char="." rowspan="1" colspan="1">.47</td>
</tr>
</tbody>
</table>
</alternatives>
</table-wrap>
</sec>
<sec id="sec015">
<title>3.4. Models including additional spectral parameters</title>
<p>As described in the Methods section above, a rational selection of spectral parameters was measured to supplement the routinely used spectral flatness, as potential predictors of perceived timbre and its impact on phrase perception. These were spectral centroid, flux and spread, together with inharmonicity and roughness. A standard procedure (see
<xref ref-type="sec" rid="sec006">Method</xref>
section) was used to select an improved parsimonious model.
<xref ref-type="table" rid="pone.0167643.t006">Table 6</xref>
summarizes the results and
<xref ref-type="table" rid="pone.0167643.t007">Table 7</xref>
shows the specific predictor coefficients in each model. Overall,
<italic>R</italic>
<sup>2</sup>
values were improved noticeably for all five sound-based stimuli; the improvement for the note-based piece was trivial. Spectral flatness and intensity were retained within the models of the five sound-based stimuli; spectral flatness and rhythmic density were retained in that of the note-based ‘
<italic>Waldstein</italic>
’ Sonata (intensity was not included in the earlier model either).</p>
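<p>For readers wishing to assemble a comparable feature set, the sketch below extracts several of these descriptors with the Python library librosa. This is an assumption for illustration only, as the authors' extraction software is not specified in this section; inharmonicity and roughness are not provided by librosa and would require a dedicated timbre toolbox.</p>
<preformat>
import librosa

# Placeholder filename; any mono audio file will do for the illustration.
y, sr = librosa.load("stimulus.wav", sr=None, mono=True)

# Frame-level descriptors at librosa's default hop; these could then be averaged
# into 500 ms windows to match the analysis grid used in the paper.
flatness = librosa.feature.spectral_flatness(y=y)[0]
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]
spread   = librosa.feature.spectral_bandwidth(y=y, sr=sr)[0]   # a spectral-spread analogue
flux     = librosa.onset.onset_strength(y=y, sr=sr)            # a spectral-flux-like envelope
rms      = librosa.feature.rms(y=y)[0]                         # crude intensity proxy
print(len(flatness), len(centroid), len(spread), len(rms))
</preformat>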
<table-wrap id="pone.0167643.t006" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0167643.t006</object-id>
<label>Table 6</label>
<caption>
<title>Summary of selected Cox hazard models of perceived phrases using the whole-phrase global approach and additional spectral parameters.</title>
</caption>
<alternatives>
<graphic id="pone.0167643.t006g" xlink:href="pone.0167643.t006"></graphic>
<table frame="hsides" rules="groups">
<colgroup span="1">
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
</colgroup>
<thead>
<tr>
<th align="left" rowspan="1" colspan="1">Stimulus</th>
<th align="center" rowspan="1" colspan="1">
<italic>R</italic>
<sup>2</sup>
</th>
<th align="center" rowspan="1" colspan="1">Likelihood Ratio</th>
<th align="center" rowspan="1" colspan="1">Model
<italic>p-</italic>
value</th>
<th align="center" rowspan="1" colspan="1">Robust Model
<italic>p-</italic>
value</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">BBC SoundFX:
<italic>Wind</italic>
</td>
<td align="char" char="." rowspan="1" colspan="1">.67</td>
<td align="char" char="." rowspan="1" colspan="1">103.20</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
<td align="char" char="." rowspan="1" colspan="1">.030</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Ng & Dean:
<italic>LowHz</italic>
</td>
<td align="char" char="." rowspan="1" colspan="1">.57</td>
<td align="char" char="." rowspan="1" colspan="1">84.15</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
<td align="char" char="." rowspan="1" colspan="1">.103</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Eno:
<italic>Francisco</italic>
</td>
<td align="char" char="." rowspan="1" colspan="1">.50</td>
<td align="char" char="." rowspan="1" colspan="1">72.12</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
<td align="char" char="." rowspan="1" colspan="1">.013</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Wishart:
<italic>Red Bird</italic>
</td>
<td align="char" char="." rowspan="1" colspan="1">.60</td>
<td align="char" char="." rowspan="1" colspan="1">134.00</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
<td align="char" char="." rowspan="1" colspan="1">.032</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Beethoven:
<italic>Waldstein</italic>
</td>
<td align="char" char="." rowspan="1" colspan="1">.32</td>
<td align="char" char="." rowspan="1" colspan="1">48.32</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
<td align="char" char="." rowspan="1" colspan="1">.021</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Xenakis:
<italic>Metastaseis</italic>
</td>
<td align="char" char="." rowspan="1" colspan="1">.64</td>
<td align="char" char="." rowspan="1" colspan="1">130.00</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
<td align="char" char="." rowspan="1" colspan="1">1.000</td>
</tr>
</tbody>
</table>
</alternatives>
<table-wrap-foot>
<fn id="t006fn001">
<p>
<bold>Note.</bold>
‘Model
<italic>p-</italic>
value’ refers to the
<italic>p-</italic>
value based on the Likelihood Ratio and assumes independence of observations within a cluster (i.e., a group of successive phrase perceptions by an individual listening to a particular piece). The more conservative ‘Robust Model
<italic>p</italic>
-values’ do not; the time transform (tt) used in all these models multiplied the predictor by log(t).</p>
</fn>
</table-wrap-foot>
</table-wrap>
<table-wrap id="pone.0167643.t007" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0167643.t007</object-id>
<label>Table 7</label>
<caption>
<title>Predictors in the selected Cox hazard models of perceived phrases using the global approach and additional spectral parameters.</title>
</caption>
<alternatives>
<graphic id="pone.0167643.t007g" xlink:href="pone.0167643.t007"></graphic>
<table frame="hsides" rules="groups">
<colgroup span="1">
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
</colgroup>
<thead>
<tr>
<th align="left" rowspan="1" colspan="1">Stimulus</th>
<th align="left" rowspan="1" colspan="1">Predictor</th>
<th align="center" rowspan="1" colspan="1">Coefficient</th>
<th align="center" rowspan="1" colspan="1">Robust
<italic>SE</italic>
of Coefficient</th>
<th align="center" rowspan="1" colspan="1">Coefficient
<italic>p-</italic>
value</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="9" colspan="1">BBC SoundFX</td>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
intens</td>
<td align="char" char="." rowspan="1" colspan="1">4.32</td>
<td align="char" char="." rowspan="1" colspan="1">.62</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
abschg specc</td>
<td align="char" char="." rowspan="1" colspan="1">-.06</td>
<td align="char" char="." rowspan="1" colspan="1">.01</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
specsp</td>
<td align="char" char="." rowspan="1" colspan="1">-.02</td>
<td align="char" char="." rowspan="1" colspan="1">.01</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
abschg specflx</td>
<td align="char" char="." rowspan="1" colspan="1">349.70</td>
<td align="char" char="." rowspan="1" colspan="1">41.88</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
rough</td>
<td align="char" char="." rowspan="1" colspan="1">2956.00</td>
<td align="char" char="." rowspan="1" colspan="1">433.10</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
inharm</td>
<td align="char" char="." rowspan="1" colspan="1">-47.47</td>
<td align="char" char="." rowspan="1" colspan="1">17.73</td>
<td align="char" char="." rowspan="1" colspan="1">< .01</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
abschg inharm</td>
<td align="char" char="." rowspan="1" colspan="1">101.80</td>
<td align="char" char="." rowspan="1" colspan="1">29.02</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
intens:
<italic>M</italic>
specf</td>
<td align="char" char="." rowspan="1" colspan="1">.90</td>
<td align="char" char="." rowspan="1" colspan="1">.11</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">tt(
<italic>M</italic>
specf)</td>
<td align="char" char="." rowspan="1" colspan="1">-.01</td>
<td align="char" char="." rowspan="1" colspan="1">.01</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="10" colspan="1">Ng & Dean</td>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
intens</td>
<td align="char" char="." rowspan="1" colspan="1">-11.57</td>
<td align="char" char="." rowspan="1" colspan="1">3.56</td>
<td align="char" char="." rowspan="1" colspan="1">< .01</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
specf</td>
<td align="char" char="." rowspan="1" colspan="1">216.00</td>
<td align="char" char="." rowspan="1" colspan="1">72.95</td>
<td align="char" char="." rowspan="1" colspan="1">< .01</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
abschg intens</td>
<td align="char" char="." rowspan="1" colspan="1">3.47</td>
<td align="char" char="." rowspan="1" colspan="1">.78</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
specc</td>
<td align="char" char="." rowspan="1" colspan="1">.01</td>
<td align="char" char="." rowspan="1" colspan="1">.01</td>
<td align="char" char="." rowspan="1" colspan="1">< .01</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
specsp</td>
<td align="char" char="." rowspan="1" colspan="1">-.01</td>
<td align="char" char="." rowspan="1" colspan="1">.01</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
chg rough*</td>
<td align="char" char="." rowspan="1" colspan="1">-269.40</td>
<td align="char" char="." rowspan="1" colspan="1">154.60</td>
<td align="char" char="." rowspan="1" colspan="1">>.05</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
inharm</td>
<td align="char" char="." rowspan="1" colspan="1">44.74</td>
<td align="char" char="." rowspan="1" colspan="1">13.50</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
chg inharm</td>
<td align="char" char="." rowspan="1" colspan="1">-35.11</td>
<td align="char" char="." rowspan="1" colspan="1">15.02</td>
<td align="char" char="." rowspan="1" colspan="1">< .05</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
intens:
<italic>M</italic>
specf</td>
<td align="char" char="." rowspan="1" colspan="1">-2.99</td>
<td align="char" char="." rowspan="1" colspan="1">1.03</td>
<td align="char" char="." rowspan="1" colspan="1">< .01</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">tt(
<italic>M</italic>
abschg intens)</td>
<td align="char" char="." rowspan="1" colspan="1">-.01</td>
<td align="char" char="." rowspan="1" colspan="1">.01</td>
<td align="char" char="." rowspan="1" colspan="1">< .01</td>
</tr>
<tr>
<td align="left" rowspan="10" colspan="1">Wishart</td>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
specf</td>
<td align="char" char="." rowspan="1" colspan="1">72.37</td>
<td align="char" char="." rowspan="1" colspan="1">18.12</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
specc</td>
<td align="char" char="." rowspan="1" colspan="1">-.03</td>
<td align="char" char="." rowspan="1" colspan="1">.01</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
chg specc</td>
<td align="char" char="." rowspan="1" colspan="1">.02</td>
<td align="char" char="." rowspan="1" colspan="1">.01</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
abschg specc</td>
<td align="char" char="." rowspan="1" colspan="1">.02</td>
<td align="char" char="." rowspan="1" colspan="1">.01</td>
<td align="char" char="." rowspan="1" colspan="1">< .01</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
rough</td>
<td align="char" char="." rowspan="1" colspan="1">332.00</td>
<td align="char" char="." rowspan="1" colspan="1">58.50</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
inharm</td>
<td align="char" char="." rowspan="1" colspan="1">63.69</td>
<td align="char" char="." rowspan="1" colspan="1">18.58</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
chg inharm</td>
<td align="char" char="." rowspan="1" colspan="1">-149.50</td>
<td align="char" char="." rowspan="1" colspan="1">29.11</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
abschg inharm</td>
<td align="char" char="." rowspan="1" colspan="1">78.96</td>
<td align="char" char="." rowspan="1" colspan="1">25.11</td>
<td align="char" char="." rowspan="1" colspan="1">< .01</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
specf:
<italic>M</italic>
intens</td>
<td align="char" char="." rowspan="1" colspan="1">-.07</td>
<td align="char" char="." rowspan="1" colspan="1">.02</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">tt(
<italic>M</italic>
specf)</td>
<td align="char" char="." rowspan="1" colspan="1">-5.52</td>
<td align="char" char="." rowspan="1" colspan="1">1.40</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="8" colspan="1">Eno</td>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
intens</td>
<td align="char" char="." rowspan="1" colspan="1">-25.05</td>
<td align="char" char="." rowspan="1" colspan="1">7.62</td>
<td align="char" char="." rowspan="1" colspan="1">< .01</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
specf</td>
<td align="char" char="." rowspan="1" colspan="1">-161.40</td>
<td align="char" char="." rowspan="1" colspan="1">37.13</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
specc</td>
<td align="char" char="." rowspan="1" colspan="1">.02</td>
<td align="char" char="." rowspan="1" colspan="1">.01</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
chg specflx</td>
<td align="char" char="." rowspan="1" colspan="1">-24.91</td>
<td align="char" char="." rowspan="1" colspan="1">8.25</td>
<td align="char" char="." rowspan="1" colspan="1">< .01</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
rough</td>
<td align="char" char="." rowspan="1" colspan="1">-1893.00</td>
<td align="char" char="." rowspan="1" colspan="1">370.20</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
chg rough</td>
<td align="char" char="." rowspan="1" colspan="1">-1827.00</td>
<td align="char" char="." rowspan="1" colspan="1">401.00</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">tt(
<italic>M</italic>
specf)</td>
<td align="char" char="." rowspan="1" colspan="1">13.99</td>
<td align="char" char="." rowspan="1" colspan="1">3.32</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">tt(
<italic>M</italic>
intens)</td>
<td align="char" char="." rowspan="1" colspan="1">2.17</td>
<td align="char" char="." rowspan="1" colspan="1">.68</td>
<td align="char" char="." rowspan="1" colspan="1">< .01</td>
</tr>
<tr>
<td align="left" rowspan="9" colspan="1">Xenakis</td>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
intens</td>
<td align="char" char="." rowspan="1" colspan="1">-3.35</td>
<td align="char" char="." rowspan="1" colspan="1">.36</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
chg intens</td>
<td align="char" char="." rowspan="1" colspan="1">.21</td>
<td align="char" char="." rowspan="1" colspan="1">.07</td>
<td align="char" char="." rowspan="1" colspan="1">< .01</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
abschg intens</td>
<td align="char" char="." rowspan="1" colspan="1">-1.02</td>
<td align="char" char="." rowspan="1" colspan="1">.13</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
chg specc</td>
<td align="char" char="." rowspan="1" colspan="1">.02</td>
<td align="char" char="." rowspan="1" colspan="1">.01</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
specsp</td>
<td align="char" char="." rowspan="1" colspan="1">-.03</td>
<td align="char" char="." rowspan="1" colspan="1">.01</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
intens:
<italic>M</italic>
specf</td>
<td align="char" char="." rowspan="1" colspan="1">-.16</td>
<td align="char" char="." rowspan="1" colspan="1">.02</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
chg intens:
<italic>M</italic>
chg specf</td>
<td align="char" char="." rowspan="1" colspan="1">-1.13</td>
<td align="char" char="." rowspan="1" colspan="1">.25</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">tt(<italic>M</italic> specsp)</td>
<td align="char" char="." rowspan="1" colspan="1">.01</td>
<td align="char" char="." rowspan="1" colspan="1">.01</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">tt(<italic>M</italic> intens)</td>
<td align="char" char="." rowspan="1" colspan="1">.01</td>
<td align="char" char="." rowspan="1" colspan="1">.01</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="8" colspan="1">Beethoven</td>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
abschg rhythdens</td>
<td align="char" char="." rowspan="1" colspan="1">37.89</td>
<td align="char" char="." rowspan="1" colspan="1">7.23</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
spars</td>
<td align="char" char="." rowspan="1" colspan="1">5.28</td>
<td align="char" char="." rowspan="1" colspan="1">1.23</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
chg spars</td>
<td align="char" char="." rowspan="1" colspan="1">6.92</td>
<td align="char" char="." rowspan="1" colspan="1">1.92</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
abschg spars</td>
<td align="char" char="." rowspan="1" colspan="1">-7.33</td>
<td align="char" char="." rowspan="1" colspan="1">1.57</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
specsp</td>
<td align="char" char="." rowspan="1" colspan="1">-.01</td>
<td align="char" char="." rowspan="1" colspan="1">.01</td>
<td align="char" char="." rowspan="1" colspan="1">< .05</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
inharm</td>
<td align="char" char="." rowspan="1" colspan="1">-28.15</td>
<td align="char" char="." rowspan="1" colspan="1">2.09</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>M</italic>
specf:
<italic>M</italic>
rhythdens</td>
<td align="char" char="." rowspan="1" colspan="1">-.05</td>
<td align="char" char="." rowspan="1" colspan="1">.02</td>
<td align="char" char="." rowspan="1" colspan="1">< .05</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">tt(
<italic>M</italic>
abschg rhythdens)</td>
<td align="char" char="." rowspan="1" colspan="1">-3.21</td>
<td align="char" char="." rowspan="1" colspan="1">.63</td>
<td align="char" char="." rowspan="1" colspan="1">< .001</td>
</tr>
</tbody>
</table>
</alternatives>
<table-wrap-foot>
<fn id="t007fn001">
<p>
<bold>Note.</bold>
Asterisked (*) predictors are individually non-significant but nevertheless required in the selected model. A colon placed between the two predictors denotes an interaction between them;
<italic>p-</italic>
values rounded to three decimal places. The time transform (tt) used in all these models multiplied the predictor by log(t).</p>
</fn>
</table-wrap-foot>
</table-wrap>
<p>The selected model for the note-based
<italic>Waldstein</italic>
sonata also included modest contributions from spectral spread and inharmonicity (but not from spectral centroid). Spectral centroid is not equivalent to pitch, even for a monophonic sound (in that case, centroid is almost always significantly higher than pitch or fundamental frequency, because of higher harmonic and inharmonic partials), unless pitch is above about 4000Hz. With monophonic sounds, spectral centroid commonly changes in the same direction as pitch, but with polyphonic note-based sounds it more closely approximates the average pitch of the component tones. The lack of predictive power of spectral centroid in the Beethoven
<italic>‘Waldstein’</italic>
models thus suggests that the mean pitch was not a major influence. In contrast, the pitch range is more clearly represented in the acoustic spectral spread values, which were a predictor for the Beethoven
<italic>‘Waldstein’</italic>
stimulus, suggesting that pitch range (which varies dramatically) was probably important here for phrase perception. For the sound-based stimuli, spectral centroid (all five stimuli), spectral flux (
<italic>‘BBC Wind’</italic>
and
<italic>‘Francisco’</italic>
), spectral spread (
<italic>‘BBC Wind’</italic>
, ‘
<italic>Metastaseis’</italic>
, and
<italic>‘LowHz’</italic>
), inharmonicity (
<italic>‘BBC Wind’</italic>
,
<italic>‘LowHz’</italic>
, and
<italic>‘Red Bird’</italic>
) and roughness (
<italic>‘BBC Wind’</italic>
,
<italic>‘Francisco’</italic>
,
<italic>‘LowHz’</italic>
, and
<italic>‘Red Bird’</italic>
) were additional contributors to the models.</p>
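<p>The point that spectral centroid sits above the fundamental can be illustrated numerically: for a harmonic tone with decaying partial amplitudes, the centroid lies well above the pitch. The short Python example below uses made-up amplitudes purely for illustration.</p>
<preformat>
import numpy as np

sr = 44100
t = np.arange(0, 1.0, 1 / sr)
f0 = 220.0                                   # fundamental ('pitch') in Hz
# Five harmonics with amplitudes 1, 1/2, ..., 1/5.
y = sum((1.0 / k) * np.sin(2 * np.pi * k * f0 * t) for k in range(1, 6))

spectrum = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(len(y), 1 / sr)
centroid = np.sum(freqs * spectrum) / np.sum(spectrum)
print(round(centroid))   # well above 220 Hz (about 480 Hz for this spectrum)
</preformat>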
</sec>
</sec>
<sec id="sec016">
<title>4. Discussion</title>
<p>Our mixed-methods approach investigated the occurrence and nature of listeners’ perception of musical phrases in sound-based musical stimuli that comprise few of the pitch-related events most commonly found in instrumental note-based music. The range of stimuli also included note-based instrumental music and environmental sound.
<xref ref-type="fig" rid="pone.0167643.g001">Fig 1</xref>
shows that phrase perception was achieved in a coherent manner (i.e., the expected clustered recurrent event process). The qualitative results in
<xref ref-type="table" rid="pone.0167643.t001">Table 1</xref>
and the tables in
<xref ref-type="supplementary-material" rid="pone.0167643.s002">S1 Appendix</xref>
clearly show that for the four stimuli we categorized as ‘sound-based music’, listeners described phrase features most frequently as aspects of timbre (331 phrases in total) and less frequently as aspects of loudness, which we label 'Intensity' as our operational acoustic measure (104 phrases in total). Rhythm was not important for perceived phrasing in these stimuli. The sound-based musical excerpt that involved orchestral instruments (Xenakis’s
<italic>‘Metastaseis’</italic>
) was perceived as comprising slightly more intensity-related than timbre-related phrases, suggesting that instrumental activity and note-like attacks in the piece were perceived more readily here than in the other three. Close listening to all four sound-based music stimuli supports this view. The fifth sound-based stimulus, the environmental (rather than musically composed) sample of a BBC field recording of wind noises, was predominantly perceived in terms of intensity-related phrases (68), with only 24 cases associated primarily with timbre. The fact that intensity was relatively predominant for the Xenakis ‘
<italic>Metastaseis’</italic>
and BBC ‘
<italic>Wind’</italic>
stimuli is again consistent with the overall characteristics of each stimulus, as timbral change in both examples is modest. Indeed, examples of major timbral change are sparse in the Xenakis ‘
<italic>Metastaseis’</italic>
stimulus, even though the music is performed using a wide range of orchestral instruments. When timbral change is apparent in ‘
<italic>Metastaseis’</italic>
, brass and percussion events tend to supervene over the strings. Such changes of instrumentation were noted in at least 30 of the descriptions shown in Table E in
<xref ref-type="supplementary-material" rid="pone.0167643.s002">S1 Appendix</xref>
, and in several cases, both loudness and instrumental changes were mentioned simultaneously. Furthermore, some participants displayed awareness of several individual brass, percussion, and string instruments. At least 11 descriptions referred to an abrupt silence in this extract, where sound drops out briefly: they used words such as 'stop' and 'gone', clearly indicating a cessation of activity (or a 'gap'). A prior detailed analysis of acoustics and affective responses to
<italic>‘Metastaseis’</italic>
[
<xref rid="pone.0167643.ref061" ref-type="bibr">61</xref>
] is consistent with these observations, including the role of silence (which was therefore not investigated further in the present work).</p>
<p>Even more than the Xenakis ‘
<italic>Metastaseis’</italic>
stimulus, the BBC ‘
<italic>Wind’</italic>
stimulus is relatively homogeneous in character (being environmental rather than compositional) and thus, timbre did not play a significant role in determining listeners’ perception of phrasing. The fact that environmental sounds can be systematically segmented is to be expected from the evolutionary considerations of ecological acoustics [
<xref rid="pone.0167643.ref062" ref-type="bibr">62</xref>
<xref rid="pone.0167643.ref064" ref-type="bibr">64</xref>
]. One may speculate that most human auditory segmentation is based on abilities that are required biologically and subsequently molded by experience of speech. The fact that the average duration of perceived phrases in our study was around 15 seconds, much greater than the minimum implied by our task demands (~5 seconds), confirms that our listeners were normally dealing with aggregations of short events. There is evidence that word duration is related to information content [
<xref rid="pone.0167643.ref065" ref-type="bibr">65</xref>
]; this may also apply to longer units such as linguistic clauses, sentences, and musical phrases, and the relationship may be reflected in the magnitude of change in acoustic parameters.</p>
<p>As hypothesized, the instrumental note-based Beethoven
<italic>‘Waldstein’</italic>
stimulus resulted in dramatically different data when compared to the five sound-based stimuli. Notably, phrases where rhythmic events were described as the key attribute (64) were observed at a greater frequency than those categorized by either timbre (32) or intensity (31). Similarly, while rhythmic phrases comprised more than 50% of those perceived in the Beethoven stimulus, they comprised less than 7% of phrases in any of the sound-based stimuli.</p>
<sec id="sec017">
<title>4.1. Models of phrase perception</title>
<p>The qualitative descriptors supported our intention of modeling the predictive influence of acoustic features upon perception of phrases. We started with simple models involving only single, 'global' measures of timbre (spectral flatness, a feature at the top of the MPEG-7 descriptor tree), intensity, and rhythm, as chosen in our earlier work. Each measure was summarized by three values representing the behavior of the acoustic parameter across the whole duration of each perceived phrase: the mean, the mean change, and the mean absolute change in the specified window, as previously recommended [
<xref rid="pone.0167643.ref053" ref-type="bibr">53</xref>
] and useful in other contexts [
<xref rid="pone.0167643.ref066" ref-type="bibr">66</xref>
]. Reasonable models were obtained using Cox recurrent event hazard analysis, with
<italic>R</italic>
<sup>2</sup>
values ranging between .30 and .56. For sound-based stimuli, optimum models required contributions from spectral flatness, intensity, and for all but one (Eno’s ‘
<italic>Francisco</italic>
’), their interaction, confirming the importance of both acoustic parameters and their likely perceptual interaction in the determination of phrases. There were clear signs of temporal variation across responses to each sound-based extract, consistent with the idea that these stimuli were unfamiliar, with responses evolving as exposure increased.</p>
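<p>For clarity, the sketch below shows how the three 'global' summary values could be computed for one perceived phrase from a per-window acoustic series; the numbers and names are illustrative assumptions.</p>
<preformat>
import numpy as np

def phrase_summary(series):
    """Mean, mean change, and mean absolute change of a per-window series."""
    x = np.asarray(series, dtype=float)
    d = np.diff(x) if len(x) > 1 else np.zeros(1)
    return {"mean": x.mean(),
            "mean_change": d.mean(),
            "mean_abs_change": np.abs(d).mean()}

# Example: an intensity-like series (dB) over four 500 ms windows of one phrase.
print(phrase_summary([60.2, 61.0, 63.5, 62.8]))
</preformat>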
<p>As hypothesized, the Beethoven
<italic>‘Waldstein’</italic>
note-based stimulus was modeled very differently, with intensity a redundant predictor; in a sense it was replaced by the measure of rhythmic density and its interaction with spectral flatness. This interaction was the only component by which spectral flatness contributed in models of Beethoven’s
<italic>‘Waldstein’</italic>
, confirming its subordinate role here. Similarly, the coefficient for the time dependence of the absolute change in the rhythmic density series was much smaller here than any such coefficients in the sound-based stimuli. This observation is consistent with the greater familiarity of note-based music. Pauses in rhythmic activity, reflected in our sparsity measure, had quite a strong role that was separable from that of rhythmic density, and somewhat like the role of the major ‘gaps’ in the Xenakis
<italic>‘Metastaseis’</italic>
orchestral stimulus [
<xref rid="pone.0167643.ref067" ref-type="bibr">67</xref>
]. Future models of such music may establish a more parsimonious way of modeling both rhythm and sparsity, derived from a single measured series rather than from the two parameters we used here. It was not our purpose to pursue this further, but rather to focus on timbral aspects.</p>
<p>Our models of the isolated intensity, timbre, or rhythm phrase series from the three stimuli in which these features were most important were consistent with the 'whole-phrase' series models. Notably, in separate models of the timbral and of the intensity series, both spectral flatness and intensity remained mutually required predictors, confirming the above observations of their ongoing interaction. In addition, it is quite likely that listeners may have sometimes confused them in their descriptors, just as the tables in
<xref ref-type="supplementary-material" rid="pone.0167643.s002">S1 Appendix</xref>
show that sometimes both timbre and intensity were included in overlapping phrase descriptions by different participants.</p>
<p>We hypothesized that the acoustic features of the final portion of a perceived phrase might be the dominant influence on its perception. As
<xref ref-type="table" rid="pone.0167643.t005">Table 5</xref>
shows, we were able to obtain models using only the data of the last five seconds of each phrase that were comparable to (but not uniformly better than) those that used the whole-phrase data. The models were similar in the range and relative coefficients of the predictors involved, and so the last five seconds were sufficient but not 'dominant'. This may be due to the fact that the phrases were relatively homogeneous, so that the measured parameters of the last five seconds were similar to those of the whole phrase. We return to the practical applications of this observation below.</p>
</sec>
<sec id="sec018">
<title>4.2. Analyzing the role of several spectral descriptors</title>
<p>As indicated in the Introduction, we did not assume that models of dissimilarity between short separated individual sounds (the main form of timbral characterization studied to date) would be immediately applicable to the circumstances of sound-based music, where sonic continuities predominate over separations. Even the modest disparities between key predictors of instrumental sound dissimilarity models [
<xref rid="pone.0167643.ref053" ref-type="bibr">53</xref>
] and those of environmental sounds [
<xref rid="pone.0167643.ref068" ref-type="bibr">68</xref>
,
<xref rid="pone.0167643.ref069" ref-type="bibr">69</xref>
] support the view that some differences are to be expected with our longer continuous sounds, as do the few earlier works on perceived segments in continuous sound pieces described in the Introduction [
<xref rid="pone.0167643.ref026" ref-type="bibr">26</xref>
<xref rid="pone.0167643.ref028" ref-type="bibr">28</xref>
]. We chose a representative selection of spectral descriptors on the basis of their previous utility and relatively low correlation in models of instrumental note-based sound, and applied them to our whole-phrase perception data.</p>
<p>Consistent with their important positions in the MPEG-7 descriptor system, both intensity and spectral flatness were retained within the resultant more complex models of all sound-based extracts, which were all improvements over the simpler models. They were joined variously by spectral centroid, inharmonicity, roughness, spectral spread, and less commonly, spectral flux. This last observation was perhaps the most interesting, since spectral flux has been reported as a significant factor in models of dissimilarity of isolated pairs of sounds, such as those created by musical instruments [
<xref rid="pone.0167643.ref053" ref-type="bibr">53</xref>
], but not environmental sounds [
<xref rid="pone.0167643.ref068" ref-type="bibr">68</xref>
,
<xref rid="pone.0167643.ref069" ref-type="bibr">69</xref>
]. A reasonable interpretation is that spectral flux is relatively high throughout these sound-based extracts (and in natural environmental sounds beyond our example), somewhat as it is in speech. The Beethoven
<italic>‘Waldstein’</italic>
(note-based) model was not substantially improved in fit by the additional choice of spectral predictors, but did introduce spectral spread and inharmonicity. This result presumably reflects the variation in keyboard ranges over which the extract operates, and the fact that the piano is uniquely characterized by its inharmonic components. Thus again, this note-based piece revealed influences of spectral components in addition to rhythm. We conclude that the selected range of added spectral predictors was broadly appropriate, because all of them contributed in some cases. Therefore, they are suitable for use in future studies of sound-based music.</p>
<p>One limitation of our study is that we had no way of separating the spectral fluxes in different simultaneous streams of the music. While our models were moderately good, they might well be improved were such precise computational stream segregation readily feasible, as it no doubt will be in the near future. However, our concern was with ecologically valid musical (and environmental) extracts, and for that purpose our approach proved adequately successful.</p>
</sec>
<sec id="sec019">
<title>4.3. Relations between perceptual segmentation of speech and music</title>
<p>The conclusions just discussed are consistent with those from studies of perception of speech clauses and sentences, the analogue of musical phrases. This suggests that the categorization of languages into stress- and tone-based may have a deeper relation with that of note- and sound-based music than just analogy. This possibility has not been noted previously [
<xref rid="pone.0167643.ref070" ref-type="bibr">70</xref>
], presumably because of the paucity of studies on sound-based music. For example, the relationship may suggest that rhythmic events can be delineated by the recurrence of purely timbral changes (even without intensity changes), and listeners can progressively learn to identify them. More specifically, those timbral changes may not require either pitch change or change in spectral centroid (perhaps a more nuanced one-dimensional representation of the multidimensional frequency spectrum of a sound than pitch). However, we should not speculate too far on this possible relationship in the absence of relevant empirical data.</p>
<p>Returning to the broader relation between speech and music perception, we note that ‘pauses’ delineate speech clauses and sentences, and phrases in sound-based music, not only when there is a gap with a virtual absence of sound, but also when the pause is constituted by a more subtle slowing of change of particular spectral parameters. Perceptual measures of continuous change in music from which intensity change has been removed will be informative in this respect, and these can be coupled with studies of perceived phrase segmentation in the same conditions.</p>
<p>The motor theory of speech cognition has long held that auditory and visual cues to speech comprehension are accompanied by a motor appraisal, in which the physical processes necessary for generating the observed sound are assessed by the listener and subsequently contribute to the ultimate cognitive categorization of the speech sound and its intelligibility [
<xref rid="pone.0167643.ref071" ref-type="bibr">71</xref>
]. Extensive criticism of this theory exists, partly on the grounds of unnecessary complexity [
<xref rid="pone.0167643.ref072" ref-type="bibr">72</xref>
]. The theory bears consideration in relation to music in light of Godøy's theories on chunking of music through co-articulation [
<xref rid="pone.0167643.ref073" ref-type="bibr">73</xref>
,
<xref rid="pone.0167643.ref074" ref-type="bibr">74</xref>
]. Here, a chunk is the fusion of brief physical generative actions and sounds into larger ones. These superordinate units are considered to have a mean duration of about three seconds (shorter than most we encouraged and observed in the present study) and to result from either 'exogenous' or 'endogenous' factors. Exogenous factors, considered to be consequences of the physical actions of a sound-producing performer, provide clear 'discontinuities' in the signal and induce the perception of start and end points. This points to instrumental notes and the correlations between performer movement and note onsets and offsets, features largely absent in all our extracts except the note-based Beethoven
<italic>‘Waldstein’</italic>
piano stimulus (it was for this reason that we presented the two instrumental works, Beethoven and Xenakis, at the end of our stimulus presentation sequence). Endogenous factors are 'top-down' projections by the perceiver that result in perceptions that are less consistent between individuals. While co-articulation for Godøy is the co-temporality of the performer's actions and musical chunks, he also notes the use of the same term in speech perception to indicate the smearing of phonemic sound components and in some cases, their alteration, which derives from the succession of physical movements required to generate a particular sequence of phonemes.</p>
<p>The so-called 'resonant' perception of speech components [
<xref rid="pone.0167643.ref075" ref-type="bibr">75</xref>
] involves integration of chunks, including backwards integration in time. In other words, aspects of chunks are kept in memory, sometimes beyond working memory, allowing holistic perception. In music [
<xref rid="pone.0167643.ref076" ref-type="bibr">76</xref>
], this might be the 'temporal anamorphosis' proposed by influential sound-based composer-theoretician Pierre Schaeffer [
<xref rid="pone.0167643.ref077" ref-type="bibr">77</xref>
] and corresponds well with the phrases we describe in this paper. While the phrases in most sound-based music do not derive from physical aspects of performance, they may be constructed compositionally so that they sound analogous to such performance, perhaps because of the connections we have raised with speech or those with larger evolutionary and environmental acoustic factors. Our evidence for the role of perceived agency in perception of affect in sound-based music suggests this connection may be present but modest [
<xref rid="pone.0167643.ref078" ref-type="bibr">78</xref>
].</p>
</sec>
<sec id="sec020">
<title>4.4. Phrases in note-based music</title>
<p>There is a large literature on the segmentation of note-based, usually monophonic (single-stream) proto-musical experimental sequences. A suitable basis for considering this in our present context is the recent in-depth introduction and study by Hutchison et al. [
<xref rid="pone.0167643.ref079" ref-type="bibr">79</xref>
]. These authors went beyond proto-stimuli to include excerpts from a compilation of folk songs, although most were still monophonic. The study primarily compared the predictive accuracy of the perceived phrase structure (PPS) model and the generative structural grammar model (GSGM) of segmentation; that is, the degree to which each could predict listeners' perceptions of segmentation. It also considered an information-theoretic model, the information dynamics of music (IDyOM). Interestingly, only marginal effects of participants’ musical expertise were observed on model prediction accuracies. This suggests that with note-based music either there is no effect of degree of familiarity, or participants had reached a ceiling for such effects. We consider the latter the more likely interpretation.</p>
<p>As formulated, these models are almost entirely focused on 'notes', though the GSGM does involve one grouping rule that considers five categories of 'change in the music', including dynamics (loudness) and timbre/instrumentation. Thus the models are largely inapplicable to sound-based music in their initial formulation, although they (and most readily IDyOM) could be rephrased in terms of other musical parameters (such as an MFCC description of a series of timbral events). One notable feature emphasized directly in the PPS model, and applicable to sound-based music and our results, is its Rule 1: 'Gap'. PPS suggests that large inter-onset intervals and, most importantly, large offset-to-onset intervals tend to result in phrase boundary perception. This is entirely consistent with the speech literature and with the observations and interpretations in the present paper. IDyOM's analysis also takes account of such gaps.</p>
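<p>For illustration only, the following toy Python sketch applies a 'Gap'-style rule of the kind just described: a phrase boundary is proposed wherever the offset-to-onset silence greatly exceeds the typical gap between notes. The factor-of-two threshold is an arbitrary assumption and is not the value used in the PPS model itself.</p>
<preformat>
def gap_boundaries(onsets, offsets, gap_factor=2.0):
    """Propose a boundary after note i when the silence before note i+1
    is much larger than the median offset-to-onset gap.
    `onsets` and `offsets` are parallel lists of note times in seconds."""
    gaps = [onsets[i + 1] - offsets[i] for i in range(len(onsets) - 1)]
    positive = sorted(g for g in gaps if g > 0) or [0.0]
    median_gap = positive[len(positive) // 2]
    return [i for i, g in enumerate(gaps) if g > gap_factor * max(median_gap, 1e-3)]

# Example: a long silence after the third note suggests a boundary there.
onsets  = [0.0, 0.5, 1.0, 2.6, 3.1]
offsets = [0.4, 0.9, 1.4, 3.0, 3.5]
print(gap_boundaries(onsets, offsets))  # -> [2]
</preformat>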
</sec>
<sec id="sec021">
<title>4.5. Potential implications</title>
<p>The speech perception literature has at least one more implication. How does statistical learning of relationships between segments proceed? Learned words can function as 'anchors' facilitating the learning of others [
<xref rid="pone.0167643.ref080" ref-type="bibr">80</xref>
]. More importantly, if the start and finish of a linguistic phrase are identified (on the basis of the ‘bootstrapping’ processes discussed above), then it may be possible for a learner to pick off units that in turn help define others. Johnson [
<xref rid="pone.0167643.ref040" ref-type="bibr">40</xref>
] gives the following simple example: an English language learner hears ‘Look. Look here. Here is the cat.’ ‘Look’, presented in isolation, is identified as a word. Then it can be subtracted from 'Look here', revealing that 'here' is probably a word too. The process can continue according to the language elements presented.</p>
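<p>The logic of this example can be sketched in a few lines of Python. The sketch below is purely illustrative and is not a model from the cited work; for clarity it operates on whitespace-separated tokens, a convenience that real speech segmentation does not enjoy.</p>
<preformat>
def bootstrap_segments(utterances, known=("look",)):
    """Anchor-word subtraction: once a unit is known, strip it from the
    edges of longer utterances and treat whatever remains as a new
    candidate unit. A toy sketch of the bootstrapping idea only."""
    known = {w.lower() for w in known}
    for utt in utterances:
        words = utt.lower().replace(".", "").split()
        while words and words[0] in known:    # peel known units off the front
            words = words[1:]
        while words and words[-1] in known:   # ...and off the end
            words = words[:-1]
        if words:
            known.add(" ".join(words))
    return known

print(bootstrap_segments(["Look.", "Look here.", "Here is the cat."]))
# contains 'look', 'here', and the residual candidate 'is the cat'
</preformat>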
<p>Our interpretations also apply to additional ‘real-world’ contexts in which the formation of cognized musical phrases occurs. The first application is to provide data informing electroacoustic and algorithmic music composers, who commonly implement spectral transformations with DSP techniques, about which transformations are salient for the perception of musical phrases in sound-based music. This is important because, ultimately, sequences of phrases (and recognition of phrase similarities) are the elements of music that contribute to its structure and affective qualities. It is perhaps not surprising (but nevertheless useful) to find that even in continuous sound environments, spectral flatness and spectral centroid are both important (as found earlier with a sound-based piece; see [
<xref rid="pone.0167643.ref020" ref-type="bibr">20</xref>
]), together with inharmonicity, roughness, and spectral spread. The relative lack of influence from spectral flux in the present study does not, of course, rule out its importance; rather, it suggests that spectral flux is a descriptor whose impact may be driven more directly by changes in the individual spectral factors discussed above. This interpretation seems consistent with data from synthetic tones [
<xref rid="pone.0167643.ref081" ref-type="bibr">81</xref>
]. Furthermore, the powerful role of temporal changes in acoustic intensity for music perception (e.g., subjective and physiological affective responses [
<xref rid="pone.0167643.ref022" ref-type="bibr">22</xref>
,
<xref rid="pone.0167643.ref054" ref-type="bibr">54</xref>
,
<xref rid="pone.0167643.ref082" ref-type="bibr">82</xref>
]) is also well supported by the present data.</p>
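<p>As a pointer for composers or analysts who wish to track these descriptors themselves, the following minimal Python sketch extracts per-frame intensity and three of the timbral descriptors discussed here using the open-source librosa library. It is not the analysis pipeline used in the present study; the file name is a placeholder, and inharmonicity and roughness would require other tools (such as the Timbre Toolbox).</p>
<preformat>
# Minimal, illustrative feature extraction with librosa (assumed installed).
import numpy as np
import librosa

y, sr = librosa.load("extract.wav", sr=None, mono=True)   # placeholder file name

hop = 512
rms       = librosa.feature.rms(y=y, hop_length=hop)[0]                        # intensity proxy
flatness  = librosa.feature.spectral_flatness(y=y, hop_length=hop)[0]          # spectral flatness
centroid  = librosa.feature.spectral_centroid(y=y, sr=sr, hop_length=hop)[0]   # spectral 'brightness'
bandwidth = librosa.feature.spectral_bandwidth(y=y, sr=sr, hop_length=hop)[0]  # ~spectral spread

times = librosa.frames_to_time(np.arange(len(rms)), sr=sr, hop_length=hop)
# Inharmonicity and roughness have no direct librosa equivalent and would
# need a dedicated toolbox; the descriptors above suffice for this sketch.
</preformat>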
<p>The second application of the work is specific to the question: how might one best consider timbre and its relation to affect in the context of music information retrieval (MIR) approaches to music recommender systems? Broadly, MIR is the large discipline in which computational information about music (such as acoustic or structural information) is used to identify similarities, relationships, and even genres, largely using analytical and machine learning techniques [
<xref rid="pone.0167643.ref034" ref-type="bibr">34</xref>
]. Music recommender systems apply such information, together with social use and preference data, demographics, and individual use histories, primarily to recommend new music to listen to or purchase (as with Amazon, iTunes, Spotify, or Shazam). In other work we are seeking to extend 'affective'-based music recommender systems (which recommend pieces of music that bear affective similarity to other pieces experienced and liked by a user) by exploiting information from short segments of listeners’ continuous affective responses and applying it to the exploration of unfamiliar music libraries (rather than commercial libraries driven by promotion and social usage).</p>
<p>To this end, we require a process that can quickly and accurately model the features of musical timbre that influence the perceptions of a particular user, especially in the context of the affective profile of a segment of music they have heard. We envisage that modeling such data on the basis of temporal chunks of sufficient duration to support phrase perception may at least be competitive with current recommender approaches that simply implement a version of the ‘bag of frames’ approach (i.e., using short frames of acoustic information with no particular known relevance to perceived affect). Using chunks such as the five-second windows indicated in the present study may allow individual-centered models that are both more accurate and faster to construct, for use in future personalized music recommender systems. In our future work, we plan to make such comparisons.</p>
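<p>The contrast between frame-level ‘bag of frames’ statistics and phrase-scale chunks can be illustrated with a short sketch. The aggregation below (mean and standard deviation per chunk) is an assumption for illustration rather than a description of our planned modeling.</p>
<preformat>
import numpy as np

def chunk_features(frame_features, frame_rate, chunk_s=5.0):
    """Aggregate per-frame features into ~5 s chunks (mean and std per chunk),
    the phrase-scale windows discussed above, instead of feeding raw
    frame-level statistics to a recommender model.
    `frame_features` is an array of shape (n_frames, n_features)."""
    frames_per_chunk = max(1, int(chunk_s * frame_rate))
    n_chunks = len(frame_features) // frames_per_chunk
    chunks = []
    for c in range(n_chunks):
        block = frame_features[c * frames_per_chunk:(c + 1) * frames_per_chunk]
        chunks.append(np.concatenate([block.mean(axis=0), block.std(axis=0)]))
    if not chunks:
        return np.empty((0, 2 * frame_features.shape[1]))
    return np.vstack(chunks)    # shape (n_chunks, 2 * n_features)

# Example: stack per-frame descriptors column-wise, set frame_rate = sr / hop,
# then summarize them in five-second chunks for a listener-specific model.
</preformat>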
<p>In conclusion, this study has demonstrated the occurrence, relevance, coherence, and distinctiveness of perceptions of musical phrasing in sound-based music. It shows clear roles therein for acoustic intensity and a range of timbral features. The results will form the basis of future empirical studies designed with applications to contexts such as electroacoustic composition and music recommender systems.</p>
</sec>
</sec>
<sec sec-type="supplementary-material" id="sec022">
<title>Supporting Information</title>
<supplementary-material content-type="local-data" id="pone.0167643.s001">
<label>S1 Data</label>
<caption>
<title>Complete dataset.</title>
<p>(XLSX)</p>
</caption>
<media xlink:href="pone.0167643.s001.xlsx">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="pone.0167643.s002">
<label>S1 Appendix</label>
<caption>
<title>Qualitative descriptions and designated categories of perceived phrase responses.</title>
<p>(PDF)</p>
</caption>
<media xlink:href="pone.0167643.s002.pdf">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
</sec>
</body>
<back>
<ack>
<p>Supported by an Australian Research Council Linkage Project grant (LP150100487) held by the second author. We thank William Dunsmuir and Stephen McAdams for advice on experimental design and analysis, and the MARCS Institute Music Cognition and Action research group for helpful comments on an earlier draft.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="pone.0167643.ref001">
<label>1</label>
<mixed-citation publication-type="journal">
<name>
<surname>Knösche</surname>
<given-names>TR</given-names>
</name>
,
<name>
<surname>Neuhaus</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Haueisen</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Alter</surname>
<given-names>K</given-names>
</name>
,
<name>
<surname>Maess</surname>
<given-names>B</given-names>
</name>
,
<name>
<surname>Witte</surname>
<given-names>OW</given-names>
</name>
,
<etal>et al</etal>
<article-title>Perception of phrase structure in music</article-title>
.
<source>Human Brain Mapping</source>
.
<year>2005</year>
;
<volume>24</volume>
:
<fpage>259</fpage>
<lpage>73</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1002/hbm.20088">10.1002/hbm.20088</ext-link>
</comment>
<pub-id pub-id-type="pmid">15678484</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0167643.ref002">
<label>2</label>
<mixed-citation publication-type="book">
<name>
<surname>Bregman</surname>
<given-names>AS</given-names>
</name>
.
<source>Auditory scene analysis: The perceptual organization of sound</source>
.
<edition>2nd ed</edition>
<publisher-loc>Cambridge, Mass.</publisher-loc>
:
<publisher-name>MIT Press</publisher-name>
;
<year>1999</year>
<volume>xiii</volume>
,
<fpage>773</fpage>
p.</mixed-citation>
</ref>
<ref id="pone.0167643.ref003">
<label>3</label>
<mixed-citation publication-type="book">
<name>
<surname>Cutler</surname>
<given-names>A</given-names>
</name>
.
<chapter-title>Exploiting prosodic probabilities in speech segmentation</chapter-title>
In:
<name>
<surname>Altmann</surname>
<given-names>GTM</given-names>
</name>
, editor.
<source>Cognitive models of speech processing: Psycholinguistic and computational perspectives</source>
.
<publisher-loc>London</publisher-loc>
:
<publisher-name>MIT Press</publisher-name>
;
<year>1990</year>
p.
<fpage>105</fpage>
<lpage>21</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0167643.ref004">
<label>4</label>
<mixed-citation publication-type="journal">
<name>
<surname>Streeter</surname>
<given-names>LA</given-names>
</name>
.
<article-title>Acoustic determinants of phrase boundary perception</article-title>
.
<source>Journal of the Acoustical Society of America</source>
.
<year>1978</year>
;
<volume>64</volume>
:
<fpage>1582</fpage>
<lpage>92</lpage>
.
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1121/1.382142">http://dx.doi.org/10.1121/1.382142</ext-link>
.
<pub-id pub-id-type="pmid">739094</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0167643.ref005">
<label>5</label>
<mixed-citation publication-type="journal">
<name>
<surname>Jusczyk</surname>
<given-names>PW</given-names>
</name>
,
<name>
<surname>Hirsh-Pasek</surname>
<given-names>K</given-names>
</name>
,
<name>
<surname>Kemler Nelson</surname>
<given-names>DG</given-names>
</name>
,
<name>
<surname>Kennedy</surname>
<given-names>LJ</given-names>
</name>
,
<name>
<surname>Woodward</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Piwoz</surname>
<given-names>J</given-names>
</name>
.
<article-title>Perception of acoustic correlates of major phrasal units by young infants</article-title>
.
<source>Cognitive Psychology</source>
.
<year>1992</year>
;
<volume>24</volume>
:
<fpage>252</fpage>
<lpage>93</lpage>
.
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/0010-0285(92)90009-Q">http://dx.doi.org/10.1016/0010-0285(92)90009-Q</ext-link>
.
<pub-id pub-id-type="pmid">1582173</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0167643.ref006">
<label>6</label>
<mixed-citation publication-type="journal">
<name>
<surname>Palmer</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Krumhansl</surname>
<given-names>CL</given-names>
</name>
.
<article-title>Pitch and temporal contributions to musical phrase perception: Effects of harmony, performance timing, and familiarity</article-title>
.
<source>Perception & Psychophysics</source>
.
<year>1987</year>
;
<volume>41</volume>
:
<fpage>505</fpage>
<lpage>18</lpage>
.
<pub-id pub-id-type="pmid">3615147</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0167643.ref007">
<label>7</label>
<mixed-citation publication-type="journal">
<name>
<surname>Palmer</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Krumhansl</surname>
<given-names>CL</given-names>
</name>
.
<article-title>Independent temporal and pitch structures in determination of musical phrases</article-title>
.
<source>Journal of Experimental Psychology: Human Perception and Performance</source>
.
<year>1987</year>
;
<volume>13</volume>
:
<fpage>116</fpage>
<lpage>26</lpage>
.
<pub-id pub-id-type="pmid">2951485</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0167643.ref008">
<label>8</label>
<mixed-citation publication-type="journal">
<name>
<surname>Deutsch</surname>
<given-names>D</given-names>
</name>
,
<name>
<surname>Feroe</surname>
<given-names>J</given-names>
</name>
.
<article-title>The internal representation of pitch sequences in tonal music</article-title>
.
<source>Psychological Review</source>
.
<year>1981</year>
;
<volume>88</volume>
:
<fpage>503</fpage>
<lpage>22</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0167643.ref009">
<label>9</label>
<mixed-citation publication-type="journal">
<name>
<surname>Stoffer</surname>
<given-names>TH</given-names>
</name>
.
<article-title>Representation of phrase structure in the perception of music</article-title>
.
<source>Music Perception: An Interdisciplinary Journal</source>
.
<year>1985</year>
;
<volume>3</volume>
:
<fpage>191</fpage>
<lpage>220</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0167643.ref010">
<label>10</label>
<mixed-citation publication-type="journal">
<name>
<surname>Krumhansl</surname>
<given-names>CL</given-names>
</name>
,
<name>
<surname>Jusczyk</surname>
<given-names>PW</given-names>
</name>
.
<article-title>Infants' perception of phrase structure in music</article-title>
.
<source>Psychological Science</source>
.
<year>1990</year>
;
<volume>1</volume>
:
<fpage>70</fpage>
<lpage>3</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0167643.ref011">
<label>11</label>
<mixed-citation publication-type="book">
<name>
<surname>Dean</surname>
<given-names>RT</given-names>
</name>
.
<source>The Oxford handbook of computer music</source>
.
<publisher-loc>Oxford</publisher-loc>
:
<publisher-name>Oxford University Press</publisher-name>
;
<year>2009</year>
.</mixed-citation>
</ref>
<ref id="pone.0167643.ref012">
<label>12</label>
<mixed-citation publication-type="book">
<name>
<surname>Landy</surname>
<given-names>L</given-names>
</name>
.
<chapter-title>Sound-based music 4 all</chapter-title>
In:
<name>
<surname>Dean</surname>
<given-names>RT</given-names>
</name>
, editor.
<source>The Oxford handbook of computer music</source>
.
<publisher-loc>Oxford</publisher-loc>
:
<publisher-name>Oxford University Press</publisher-name>
;
<year>2009</year>
.</mixed-citation>
</ref>
<ref id="pone.0167643.ref013">
<label>13</label>
<mixed-citation publication-type="book">
<name>
<surname>Dean</surname>
<given-names>RT</given-names>
</name>
, editor.
<source>The Oxford Handbook of Computer Music</source>
.
<publisher-loc>New York, USA</publisher-loc>
:
<publisher-name>Oxford University Press</publisher-name>
;
<year>2009</year>
.</mixed-citation>
</ref>
<ref id="pone.0167643.ref014">
<label>14</label>
<mixed-citation publication-type="journal">
<name>
<surname>Dean</surname>
<given-names>RT</given-names>
</name>
,
<name>
<surname>Bailes</surname>
<given-names>F</given-names>
</name>
.
<article-title>Influences of structure and agency on the perception of musical change</article-title>
.
<source>Psychomusicology: Music, Mind, and Brain</source>
.
<year>2014</year>
;
<volume>24</volume>
(
<issue>1</issue>
):
<fpage>103</fpage>
<lpage>8</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0167643.ref015">
<label>15</label>
<mixed-citation publication-type="journal">
<name>
<surname>Clarke</surname>
<given-names>EF</given-names>
</name>
,
<name>
<surname>Krumhansl</surname>
<given-names>CL</given-names>
</name>
.
<article-title>Perceiving musical time</article-title>
.
<source>Music Perception: An Interdisciplinary Journal</source>
.
<year>1990</year>
;
<volume>7</volume>
:
<fpage>213</fpage>
<lpage>51</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0167643.ref016">
<label>16</label>
<mixed-citation publication-type="journal">
<name>
<surname>Bailes</surname>
<given-names>F</given-names>
</name>
,
<name>
<surname>Dean</surname>
<given-names>RT</given-names>
</name>
.
<article-title>Comparative time series analysis of perceptual responses to electroacoustic music</article-title>
.
<source>Music Perception</source>
.
<year>2012</year>
;
<volume>29</volume>
:
<fpage>359</fpage>
<lpage>75</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0167643.ref017">
<label>17</label>
<mixed-citation publication-type="journal">
<name>
<surname>Balkwill</surname>
<given-names>L-L</given-names>
</name>
,
<name>
<surname>Thompson</surname>
<given-names>WF</given-names>
</name>
.
<article-title>A cross-cultural investigation of the perception of emotion in music: Psychophysical and cultural cues</article-title>
.
<source>Music Perception</source>
.
<year>1999</year>
;
<volume>17</volume>
:
<fpage>43</fpage>
<lpage>64</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0167643.ref018">
<label>18</label>
<mixed-citation publication-type="journal">
<name>
<surname>Balkwill</surname>
<given-names>L-L</given-names>
</name>
,
<name>
<surname>Thompson</surname>
<given-names>WF</given-names>
</name>
,
<name>
<surname>Matsunaga</surname>
<given-names>R</given-names>
</name>
.
<article-title>Recognition of emotion in Japanese, Western, and Hindustani music by Japanese listeners</article-title>
.
<source>Japanese Psychological Research</source>
.
<year>2004</year>
;
<volume>46</volume>
:
<fpage>337</fpage>
<lpage>49</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0167643.ref019">
<label>19</label>
<mixed-citation publication-type="journal">
<name>
<surname>Dean</surname>
<given-names>RT</given-names>
</name>
,
<name>
<surname>Bailes</surname>
<given-names>F</given-names>
</name>
.
<article-title>Time series analysis as a method to examine acoustical influences on real-time perception of music</article-title>
.
<source>Empirical Musicology Review</source>
.
<year>2010</year>
;
<volume>5</volume>
:
<fpage>152</fpage>
<lpage>75</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0167643.ref020">
<label>20</label>
<mixed-citation publication-type="journal">
<name>
<surname>Dean</surname>
<given-names>RT</given-names>
</name>
,
<name>
<surname>Bailes</surname>
<given-names>F</given-names>
</name>
.
<article-title>Modelling perception of structure and affect in music: Spectral centroid and Wishart’s Red Bird</article-title>
.
<source>Empirical Musicology Review</source>
.
<year>2011</year>
;
<volume>6</volume>
:
<fpage>131</fpage>
<lpage>7</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0167643.ref021">
<label>21</label>
<mixed-citation publication-type="journal">
<name>
<surname>Olsen</surname>
<given-names>KN</given-names>
</name>
,
<name>
<surname>Dean</surname>
<given-names>RT</given-names>
</name>
,
<name>
<surname>Stevens</surname>
<given-names>CJ</given-names>
</name>
.
<article-title>A continuous measure of musical engagement contributes to prediction of perceived arousal and valence</article-title>
.
<source>Psychomusicology: Music, Mind, and Brain</source>
.
<year>2014</year>
;
<volume>24</volume>
:
<fpage>147</fpage>
<lpage>56</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0167643.ref022">
<label>22</label>
<mixed-citation publication-type="journal">
<name>
<surname>Olsen</surname>
<given-names>KN</given-names>
</name>
,
<name>
<surname>Dean</surname>
<given-names>RT</given-names>
</name>
,
<name>
<surname>Stevens</surname>
<given-names>CJ</given-names>
</name>
,
<name>
<surname>Bailes</surname>
<given-names>F</given-names>
</name>
.
<article-title>Both acoustic intensity and loudness contribute to time-series models of perceived affect in response to music</article-title>
.
<source>Psychomusicology: Music, Mind, and Brain</source>
.
<year>2015</year>
;
<volume>25</volume>
:
<fpage>124</fpage>
<lpage>37</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0167643.ref023">
<label>23</label>
<mixed-citation publication-type="journal">
<name>
<surname>Bregman</surname>
<given-names>AS</given-names>
</name>
,
<name>
<surname>Dannenbring</surname>
<given-names>GL</given-names>
</name>
.
<article-title>The effect of continuity on auditory stream segregation</article-title>
.
<source>Perception & Psychophysics</source>
.
<year>1973</year>
;
<volume>13</volume>
(
<issue>2</issue>
):
<fpage>308</fpage>
<lpage>12</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0167643.ref024">
<label>24</label>
<mixed-citation publication-type="journal">
<name>
<surname>Iverson</surname>
<given-names>P</given-names>
</name>
.
<article-title>Auditory stream segregation by musical timbre: effects of static and dynamic acoustic attributes</article-title>
.
<source>Journal of Experimental Psychology: Human Perception and Performance</source>
.
<year>1995</year>
;
<volume>21</volume>
(
<issue>4</issue>
):
<fpage>751</fpage>
<pub-id pub-id-type="pmid">7643047</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0167643.ref025">
<label>25</label>
<mixed-citation publication-type="journal">
<name>
<surname>Sandell</surname>
<given-names>GJ</given-names>
</name>
.
<article-title>Macrotimbre: Contribution of attack, steady state, and verbal attributes</article-title>
.
<source>The Journal of the Acoustical Society of America</source>
.
<year>1998</year>
;
<volume>103</volume>
(
<issue>5</issue>
):
<fpage>2966</fpage>
.</mixed-citation>
</ref>
<ref id="pone.0167643.ref026">
<label>26</label>
<mixed-citation publication-type="journal">
<name>
<surname>Bailes</surname>
<given-names>F</given-names>
</name>
,
<name>
<surname>Dean</surname>
<given-names>RT</given-names>
</name>
.
<article-title>Listener detection of segmentation in computer-generated sound: An exploratory experimental study</article-title>
.
<source>Journal of New Music Research</source>
.
<year>2007</year>
;
<volume>36</volume>
:
<fpage>83</fpage>
<lpage>93</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0167643.ref027">
<label>27</label>
<mixed-citation publication-type="journal">
<name>
<surname>Bailes</surname>
<given-names>F</given-names>
</name>
,
<name>
<surname>Dean</surname>
<given-names>RT</given-names>
</name>
.
<article-title>Facilitation and coherence between the dynamic and retrospective perception of segmentation in computer-generated music</article-title>
.
<source>Empirical Musicology Review</source>
.
<year>2007</year>
;
<volume>2</volume>
:
<fpage>74</fpage>
<lpage>80</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0167643.ref028">
<label>28</label>
<mixed-citation publication-type="journal">
<name>
<surname>Bailes</surname>
<given-names>F</given-names>
</name>
,
<name>
<surname>Dean</surname>
<given-names>RT</given-names>
</name>
.
<article-title>Listeners discern affective variation in computer-generated musical sounds</article-title>
.
<source>Perception</source>
.
<year>2009</year>
;
<volume>38</volume>
:
<fpage>1386</fpage>
<lpage>404</lpage>
.
<pub-id pub-id-type="pmid">19911635</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0167643.ref029">
<label>29</label>
<mixed-citation publication-type="journal">
<name>
<surname>Olsen</surname>
<given-names>KN</given-names>
</name>
.
<article-title>Intensity dynamics and loudness change: A review of methods and perceptual processes</article-title>
.
<source>Acoustics Australia</source>
.
<year>2014</year>
;
<volume>42</volume>
:
<fpage>159</fpage>
<lpage>65</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0167643.ref030">
<label>30</label>
<mixed-citation publication-type="journal">
<name>
<surname>Olsen</surname>
<given-names>KN</given-names>
</name>
,
<name>
<surname>Stevens</surname>
<given-names>CJ</given-names>
</name>
.
<article-title>Forward masking of dynamic acoustic intensity: Effects of intensity region and end-level</article-title>
.
<source>Perception</source>
.
<year>2012</year>
;
<volume>41</volume>
:
<fpage>594</fpage>
<lpage>605</lpage>
.
<pub-id pub-id-type="pmid">23025162</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0167643.ref031">
<label>31</label>
<mixed-citation publication-type="journal">
<name>
<surname>Jesteadt</surname>
<given-names>W</given-names>
</name>
,
<name>
<surname>Bacon</surname>
<given-names>SP</given-names>
</name>
,
<name>
<surname>Lehman</surname>
<given-names>JR</given-names>
</name>
.
<article-title>Forward masking as a function of frequency, masker level, and signal delay</article-title>
.
<source>Journal of the Acoustical Society of America</source>
.
<year>1982</year>
;
<volume>71</volume>
:
<fpage>950</fpage>
<lpage>62</lpage>
.
<pub-id pub-id-type="pmid">7085983</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0167643.ref032">
<label>32</label>
<mixed-citation publication-type="other">Mendoza Garay JI. Self-report measurement of segmentation, mimesis and perceived emotions in acousmatic electroacoustic music [Masters Thesis]: University of Jyvaskyla; 2014.</mixed-citation>
</ref>
<ref id="pone.0167643.ref033">
<label>33</label>
<mixed-citation publication-type="journal">
<name>
<surname>Klien</surname>
<given-names>V</given-names>
</name>
,
<name>
<surname>Grill</surname>
<given-names>T</given-names>
</name>
,
<name>
<surname>Flexer</surname>
<given-names>A</given-names>
</name>
.
<article-title>On automated annotation of acousmatic music</article-title>
.
<source>Journal of New Music Research</source>
.
<year>2012</year>
;
<volume>41</volume>
(
<issue>2</issue>
):
<fpage>153</fpage>
<lpage>73</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0167643.ref034">
<label>34</label>
<mixed-citation publication-type="journal">
<name>
<surname>Siedenburg</surname>
<given-names>K</given-names>
</name>
,
<name>
<surname>Fujinaga</surname>
<given-names>I</given-names>
</name>
,
<name>
<surname>McAdams</surname>
<given-names>S</given-names>
</name>
.
<article-title>A Comparison of Approaches to Timbre Descriptors in Music Information Retrieval and Music Psychology</article-title>
.
<source>Journal of New Music Research</source>
.
<year>2016</year>
;
<volume>45</volume>
(
<issue>1</issue>
):
<fpage>27</fpage>
<lpage>41</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0167643.ref035">
<label>35</label>
<mixed-citation publication-type="book">
<name>
<surname>Cutler</surname>
<given-names>A</given-names>
</name>
.
<source>Native listening: Language experience and the recognition of spoken words</source>
:
<publisher-name>Mit Press</publisher-name>
;
<year>2012</year>
.</mixed-citation>
</ref>
<ref id="pone.0167643.ref036">
<label>36</label>
<mixed-citation publication-type="book">
<name>
<surname>Patel</surname>
<given-names>AD</given-names>
</name>
.
<source>Music, Language, and the Brain</source>
.
<publisher-loc>Oxford</publisher-loc>
:
<publisher-name>Oxford University Press</publisher-name>
;
<year>2007</year>
.</mixed-citation>
</ref>
<ref id="pone.0167643.ref037">
<label>37</label>
<mixed-citation publication-type="journal">
<name>
<surname>Kuhl</surname>
<given-names>PK</given-names>
</name>
.
<article-title>Early language acquisition: cracking the speech code</article-title>
.
<source>Nature reviews neuroscience</source>
.
<year>2004</year>
;
<volume>5</volume>
(
<issue>11</issue>
):
<fpage>831</fpage>
<lpage>43</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1038/nrn1533">10.1038/nrn1533</ext-link>
</comment>
<pub-id pub-id-type="pmid">15496861</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0167643.ref038">
<label>38</label>
<mixed-citation publication-type="journal">
<name>
<surname>Zhao</surname>
<given-names>TC</given-names>
</name>
,
<name>
<surname>Kuhl</surname>
<given-names>PK</given-names>
</name>
.
<article-title>Effect of musical experience on learning lexical tone categories</article-title>
.
<source>The Journal of the Acoustical Society of America</source>
.
<year>2015</year>
;
<volume>137</volume>
(
<issue>3</issue>
):
<fpage>1452</fpage>
<lpage>63</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1121/1.4913457">10.1121/1.4913457</ext-link>
</comment>
<pub-id pub-id-type="pmid">25786956</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0167643.ref039">
<label>39</label>
<mixed-citation publication-type="journal">
<name>
<surname>Peña</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Bonatti</surname>
<given-names>LL</given-names>
</name>
,
<name>
<surname>Nespor</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Mehler</surname>
<given-names>J</given-names>
</name>
.
<article-title>Signal-driven computations in speech processing</article-title>
.
<source>Science</source>
.
<year>2002</year>
;
<volume>298</volume>
(
<issue>5593</issue>
):
<fpage>604</fpage>
<lpage>7</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1126/science.1072901">10.1126/science.1072901</ext-link>
</comment>
<pub-id pub-id-type="pmid">12202684</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0167643.ref040">
<label>40</label>
<mixed-citation publication-type="journal">
<name>
<surname>Johnson</surname>
<given-names>EK</given-names>
</name>
.
<article-title>Constructing a Proto-Lexicon: An Integrative View of Infant Language Development</article-title>
.
<source>Annual Review of Linguistics</source>
.
<year>2016</year>
;
<volume>2</volume>
:
<fpage>391</fpage>
<lpage>412</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0167643.ref041">
<label>41</label>
<mixed-citation publication-type="journal">
<name>
<surname>Seidl</surname>
<given-names>A</given-names>
</name>
.
<article-title>Infants’ use and weighting of prosodic cues in clause segmentation</article-title>
.
<source>Journal of Memory and Language</source>
.
<year>2007</year>
;
<volume>57</volume>
(
<issue>1</issue>
):
<fpage>24</fpage>
<lpage>48</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0167643.ref042">
<label>42</label>
<mixed-citation publication-type="other">Abel AK, Hunter D, Smith LS, editors. A biologically inspired onset and offset speech segmentation approach. Neural Networks (IJCNN), 2015 International Joint Conference on; 2015: IEEE.</mixed-citation>
</ref>
<ref id="pone.0167643.ref043">
<label>43</label>
<mixed-citation publication-type="book">
<name>
<surname>de Cheveigné</surname>
<given-names>A</given-names>
</name>
.
<chapter-title>Pitch perception models</chapter-title>
In:
<name>
<surname>Plack</surname>
<given-names>CJ</given-names>
</name>
,
<name>
<surname>Fay</surname>
<given-names>RR</given-names>
</name>
,
<name>
<surname>Oxenham</surname>
<given-names>AJ</given-names>
</name>
,
<name>
<surname>Popper</surname>
<given-names>AN</given-names>
</name>
, editors.
<source>Pitch Neural Coding and Perception, Springer Handbook of Auditory Research Vol 24</source>
.
<publisher-loc>New York</publisher-loc>
:
<publisher-name>Springer</publisher-name>
;
<year>2005</year>
p.
<fpage>169</fpage>
<lpage>233</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0167643.ref044">
<label>44</label>
<mixed-citation publication-type="journal">
<name>
<surname>Saffran</surname>
<given-names>JR</given-names>
</name>
,
<name>
<surname>Aslin</surname>
<given-names>RN</given-names>
</name>
,
<name>
<surname>Newport</surname>
<given-names>EL</given-names>
</name>
.
<article-title>Statistical learning by 8-month-old infants</article-title>
.
<source>Science</source>
.
<year>1996</year>
;
<volume>274</volume>
(
<issue>5294</issue>
):
<fpage>1926</fpage>
<lpage>8</lpage>
.
<pub-id pub-id-type="pmid">8943209</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0167643.ref045">
<label>45</label>
<mixed-citation publication-type="journal">
<name>
<surname>Frost</surname>
<given-names>RL</given-names>
</name>
,
<name>
<surname>Monaghan</surname>
<given-names>P</given-names>
</name>
.
<article-title>Simultaneous segmentation and generalisation of non-adjacent dependencies from continuous speech</article-title>
.
<source>Cognition</source>
.
<year>2016</year>
;
<volume>147</volume>
:
<fpage>70</fpage>
<lpage>4</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.cognition.2015.11.010">10.1016/j.cognition.2015.11.010</ext-link>
</comment>
<pub-id pub-id-type="pmid">26638049</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0167643.ref046">
<label>46</label>
<mixed-citation publication-type="journal">
<name>
<surname>McNealy</surname>
<given-names>K</given-names>
</name>
,
<name>
<surname>Mazziotta</surname>
<given-names>JC</given-names>
</name>
,
<name>
<surname>Dapretto</surname>
<given-names>M</given-names>
</name>
.
<article-title>Cracking the language code: Neural mechanisms underlying speech parsing</article-title>
.
<source>The Journal of neuroscience</source>
.
<year>2006</year>
;
<volume>26</volume>
(
<issue>29</issue>
):
<fpage>7629</fpage>
<lpage>39</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1523/JNEUROSCI.5501-05.2006">10.1523/JNEUROSCI.5501-05.2006</ext-link>
</comment>
<pub-id pub-id-type="pmid">16855090</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0167643.ref047">
<label>47</label>
<mixed-citation publication-type="journal">
<name>
<surname>Palmer</surname>
<given-names>SD</given-names>
</name>
,
<name>
<surname>Mattys</surname>
<given-names>SL</given-names>
</name>
.
<article-title>Speech segmentation by statistical learning is supported by domain-general processes within working memory</article-title>
.
<source>The Quarterly Journal of Experimental Psychology</source>
.
<year>2016</year>
:
<fpage>1</fpage>
<lpage>12</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0167643.ref048">
<label>48</label>
<mixed-citation publication-type="journal">
<name>
<surname>Hawthorne</surname>
<given-names>K</given-names>
</name>
,
<name>
<surname>Rudat</surname>
<given-names>L</given-names>
</name>
,
<name>
<surname>Gerken</surname>
<given-names>L</given-names>
</name>
.
<article-title>Prosody and the Acquisition of Hierarchical Structure in Toddlers and Adults</article-title>
.
<source>Infancy</source>
.
<year>2016</year>
.</mixed-citation>
</ref>
<ref id="pone.0167643.ref049">
<label>49</label>
<mixed-citation publication-type="journal">
<name>
<surname>Männel</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Friederici</surname>
<given-names>AD</given-names>
</name>
.
<article-title>Neural correlates of prosodic boundary perception in German preschoolers: If pause is present, pitch can go</article-title>
.
<source>Brain research</source>
.
<year>2016</year>
;
<volume>1632</volume>
:
<fpage>27</fpage>
<lpage>33</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.brainres.2015.12.009">10.1016/j.brainres.2015.12.009</ext-link>
</comment>
<pub-id pub-id-type="pmid">26683081</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0167643.ref050">
<label>50</label>
<mixed-citation publication-type="journal">
<name>
<surname>Seidl</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Johnson</surname>
<given-names>EK</given-names>
</name>
.
<article-title>Infant word segmentation revisited: Edge alignment facilitates target extraction</article-title>
.
<source>Developmental science</source>
.
<year>2006</year>
;
<volume>9</volume>
(
<issue>6</issue>
):
<fpage>565</fpage>
<lpage>73</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1111/j.1467-7687.2006.00534.x">10.1111/j.1467-7687.2006.00534.x</ext-link>
</comment>
<pub-id pub-id-type="pmid">17059453</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0167643.ref051">
<label>51</label>
<mixed-citation publication-type="journal">
<name>
<surname>McAdams</surname>
<given-names>S</given-names>
</name>
.
<article-title>Perspectives on the contribution of timbre to musical structure</article-title>
.
<source>Computer Music Journal</source>
.
<year>1999</year>
;
<volume>23</volume>
:
<fpage>85</fpage>
<lpage>102</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0167643.ref052">
<label>52</label>
<mixed-citation publication-type="journal">
<name>
<surname>McAdams</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Winsberg</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Donnadieu</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>De Soete</surname>
<given-names>G</given-names>
</name>
,
<name>
<surname>Krimphoff</surname>
<given-names>J</given-names>
</name>
.
<article-title>Perceptual scaling of synthesized musical timbres: Common dimensions, specificities, and latent subject classes</article-title>
.
<source>Psychological Research</source>
.
<year>1995</year>
;
<volume>58</volume>
:
<fpage>177</fpage>
<lpage>92</lpage>
.
<pub-id pub-id-type="pmid">8570786</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0167643.ref053">
<label>53</label>
<mixed-citation publication-type="journal">
<name>
<surname>Peeters</surname>
<given-names>G</given-names>
</name>
,
<name>
<surname>Giordano</surname>
<given-names>BL</given-names>
</name>
,
<name>
<surname>Susini</surname>
<given-names>P</given-names>
</name>
,
<name>
<surname>Misdariis</surname>
<given-names>N</given-names>
</name>
,
<name>
<surname>McAdams</surname>
<given-names>S</given-names>
</name>
.
<article-title>The timbre toolbox: Extracting audio descriptors from musical signals</article-title>
.
<source>Journal of the Acoustical Society of America</source>
.
<year>2011</year>
;
<volume>130</volume>
:
<fpage>2902</fpage>
<lpage>16</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1121/1.3642604">10.1121/1.3642604</ext-link>
</comment>
<pub-id pub-id-type="pmid">22087919</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0167643.ref054">
<label>54</label>
<mixed-citation publication-type="journal">
<name>
<surname>Dean</surname>
<given-names>RT</given-names>
</name>
,
<name>
<surname>Bailes</surname>
<given-names>F</given-names>
</name>
,
<name>
<surname>Schubert</surname>
<given-names>E</given-names>
</name>
.
<article-title>Acoustic intensity causes perceived changes in arousal levels in music: An experimental investigation</article-title>
.
<source>PloS One</source>
.
<year>2011</year>
;
<volume>6</volume>
:
<fpage>e18591</fpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1371/journal.pone.0018591">10.1371/journal.pone.0018591</ext-link>
</comment>
<pub-id pub-id-type="pmid">21533095</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0167643.ref055">
<label>55</label>
<mixed-citation publication-type="journal">
<name>
<surname>Jahnke</surname>
<given-names>JC</given-names>
</name>
.
<article-title>Serial position effects in immediate serial recall</article-title>
.
<source>Journal of Verbal Learning and Verbal Behavior</source>
.
<year>1963</year>
;
<volume>2</volume>
:
<fpage>284</fpage>
<lpage>7</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0167643.ref056">
<label>56</label>
<mixed-citation publication-type="book">
<name>
<surname>Dean</surname>
<given-names>RT</given-names>
</name>
.
<source>Hyperimprovisation: Computer-interactive sound improvisation</source>
.
<publisher-loc>Middleton, Wisconsin</publisher-loc>
:
<publisher-name>AR Editions, Inc.</publisher-name>
;
<year>2003</year>
.</mixed-citation>
</ref>
<ref id="pone.0167643.ref057">
<label>57</label>
<mixed-citation publication-type="journal">
<name>
<surname>Bao</surname>
<given-names>Y</given-names>
</name>
,
<name>
<surname>Dai</surname>
<given-names>H</given-names>
</name>
,
<name>
<surname>Wang</surname>
<given-names>T</given-names>
</name>
,
<name>
<surname>Chuang</surname>
<given-names>S-K</given-names>
</name>
.
<article-title>A joint modelling approach for clustered recurrent events and death events</article-title>
.
<source>Journal of Applied Statistics</source>
.
<year>2013</year>
;
<volume>40</volume>
(
<issue>1</issue>
):
<fpage>123</fpage>
<lpage>40</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0167643.ref058">
<label>58</label>
<mixed-citation publication-type="book">
<name>
<surname>Cox</surname>
<given-names>DR</given-names>
</name>
,
<name>
<surname>Oakes</surname>
<given-names>D</given-names>
</name>
.
<source>Analysis of survival data</source>
.
<publisher-loc>New York</publisher-loc>
:
<publisher-name>Chapman and Hall</publisher-name>
;
<year>1984</year>
.</mixed-citation>
</ref>
<ref id="pone.0167643.ref059">
<label>59</label>
<mixed-citation publication-type="journal">
<name>
<surname>Bosch</surname>
<given-names>Lt</given-names>
</name>
,
<name>
<surname>Oostdijk</surname>
<given-names>N</given-names>
</name>
,
<name>
<surname>Boves</surname>
<given-names>L</given-names>
</name>
.
<article-title>On temporal aspects of turn taking in conversational dialogues</article-title>
.
<source>Speech Communication</source>
.
<year>2005</year>
;
<volume>47</volume>
:
<fpage>80</fpage>
<lpage>6</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0167643.ref060">
<label>60</label>
<mixed-citation publication-type="journal">
<name>
<surname>Gingras</surname>
<given-names>B</given-names>
</name>
,
<name>
<surname>Pearce</surname>
<given-names>MT</given-names>
</name>
,
<name>
<surname>Goodchild</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Dean</surname>
<given-names>RT</given-names>
</name>
,
<name>
<surname>Wiggins</surname>
<given-names>G</given-names>
</name>
,
<name>
<surname>McAdams</surname>
<given-names>S</given-names>
</name>
.
<article-title>Linking melodic expectation to expressive performance timing and perceived musical tension</article-title>
.
<source>Journal of Experimental Psychology: Human Perception and Performance</source>
.
<year>2016</year>
;
<volume>42</volume>
:
<fpage>594</fpage>
<lpage>609</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1037/xhp0000141">10.1037/xhp0000141</ext-link>
</comment>
<pub-id pub-id-type="pmid">26594881</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0167643.ref061">
<label>61</label>
<mixed-citation publication-type="other">Dean RT, Bailes F, editors. Event and process in the fabric and perception of Electroacoustic music. Proceedings of the International Symposium: Xenakis La musique électroacoustique / Xenakis The electroacoustic music (pp 1–12); 2012; Paris: Centre de Documentation Musique Contemporaine, Intervention.</mixed-citation>
</ref>
<ref id="pone.0167643.ref062">
<label>62</label>
<mixed-citation publication-type="journal">
<name>
<surname>Gaver</surname>
<given-names>WW</given-names>
</name>
.
<article-title>What in the world do we hear?: An ecological approach to auditory event perception</article-title>
.
<source>Ecological Psychology</source>
.
<year>1993</year>
;
<volume>5</volume>
(
<issue>1</issue>
):
<fpage>1</fpage>
<lpage>29</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0167643.ref063">
<label>63</label>
<mixed-citation publication-type="journal">
<name>
<surname>Gaver</surname>
<given-names>WW</given-names>
</name>
.
<article-title>How do we hear in the world? Explorations in ecological acoustics</article-title>
.
<source>Ecological Psychology</source>
.
<year>1993</year>
;
<volume>5</volume>
(
<issue>4</issue>
):
<fpage>285</fpage>
<lpage>313</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0167643.ref064">
<label>64</label>
<mixed-citation publication-type="book">
<name>
<surname>Neuhoff</surname>
<given-names>JG</given-names>
</name>
, editor.
<source>Ecological psychoacoustics</source>
.
<publisher-loc>San Diego</publisher-loc>
:
<publisher-name>Elsevier</publisher-name>
;
<year>2004</year>
.</mixed-citation>
</ref>
<ref id="pone.0167643.ref065">
<label>65</label>
<mixed-citation publication-type="journal">
<name>
<surname>Piantadosi</surname>
<given-names>ST</given-names>
</name>
,
<name>
<surname>Tily</surname>
<given-names>H</given-names>
</name>
,
<name>
<surname>Gibson</surname>
<given-names>E</given-names>
</name>
.
<article-title>Word lengths are optimized for efficient communication</article-title>
.
<source>Proceedings of the National Academy of Sciences</source>
.
<year>2011</year>
;
<volume>108</volume>
(
<issue>9</issue>
):
<fpage>3526</fpage>
<lpage>9</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0167643.ref066">
<label>66</label>
<mixed-citation publication-type="journal">
<name>
<surname>Dean</surname>
<given-names>RT</given-names>
</name>
,
<name>
<surname>Bailes</surname>
<given-names>F</given-names>
</name>
.
<article-title>Using time series analysis to evaluate skin conductance during movement in piano improvisation</article-title>
.
<source>Psychology of Music</source>
.
<year>2015</year>
;
<volume>43</volume>
:
<fpage>3</fpage>
<lpage>23</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0167643.ref067">
<label>67</label>
<mixed-citation publication-type="other">Dean RT, Bailes F. Event and process in the fabric and perception of electroacoustic music Proceedings of the international Symposium: Xenakis The electroacoustic music (
<ext-link ext-link-type="uri" xlink:href="http://wwwcdmcassofr/sites/default/files/texte/pdf/rencontres/intervention11_xenakis_electroacoustiquepdf">http://wwwcdmcassofr/sites/default/files/texte/pdf/rencontres/intervention11_xenakis_electroacoustiquepdf</ext-link>
): Centre de Documentation Musique Contemporaine; 2013. p. Intervention11, pp.2.</mixed-citation>
</ref>
<ref id="pone.0167643.ref068">
<label>68</label>
<mixed-citation publication-type="other">Minard A, Susini P, Misdariis N, Lemaitre G, McAdams S, Parizet E, editors. Environmental sound description: comparison and generalization of 4 timbre studies. Proc CHI Workshop on Sonic Interaction Design; 2008.</mixed-citation>
</ref>
<ref id="pone.0167643.ref069">
<label>69</label>
<mixed-citation publication-type="journal">
<name>
<surname>Misdariis</surname>
<given-names>N</given-names>
</name>
,
<name>
<surname>Minard</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Susini</surname>
<given-names>P</given-names>
</name>
,
<name>
<surname>Lemaitre</surname>
<given-names>G</given-names>
</name>
,
<name>
<surname>McAdams</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Parizet</surname>
<given-names>E</given-names>
</name>
.
<article-title>Environmental sound perception: Metadescription and modeling based on independent primary studies</article-title>
.
<source>EURASIP Journal on Audio, Speech, and Music Processing</source>
.
<year>2010</year>
;
<volume>2010</volume>
:
<fpage>362013</fpage>
.</mixed-citation>
</ref>
<ref id="pone.0167643.ref070">
<label>70</label>
<mixed-citation publication-type="book">
<name>
<surname>Landy</surname>
<given-names>L</given-names>
</name>
.
<chapter-title>Sound-based music 4 all</chapter-title>
In:
<name>
<surname>Dean</surname>
<given-names>RT</given-names>
</name>
, editor.
<source>The Oxford Handbook of Computer Music</source>
.
<publisher-loc>New York, USA</publisher-loc>
:
<publisher-name>Oxford University Press</publisher-name>
;
<year>2009</year>
p.
<fpage>518</fpage>
<lpage>35</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0167643.ref071">
<label>71</label>
<mixed-citation publication-type="journal">
<name>
<surname>Galantucci</surname>
<given-names>B</given-names>
</name>
,
<name>
<surname>Fowler</surname>
<given-names>CA</given-names>
</name>
,
<name>
<surname>Turvey</surname>
<given-names>MT</given-names>
</name>
.
<article-title>The motor theory of speech perception reviewed</article-title>
.
<source>Psychonomic bulletin & review</source>
.
<year>2006</year>
;
<volume>13</volume>
(
<issue>3</issue>
):
<fpage>361</fpage>
<lpage>77</lpage>
.
<pub-id pub-id-type="pmid">17048719</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0167643.ref072">
<label>72</label>
<mixed-citation publication-type="journal">
<name>
<surname>Massaro</surname>
<given-names>DW</given-names>
</name>
,
<name>
<surname>Chen</surname>
<given-names>TH</given-names>
</name>
.
<article-title>The motor theory of speech perception revisited</article-title>
.
<source>Psychonomic bulletin & review</source>
.
<year>2008</year>
;
<volume>15</volume>
(
<issue>2</issue>
):
<fpage>453</fpage>
<lpage>7</lpage>
.
<pub-id pub-id-type="pmid">18488668</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0167643.ref073">
<label>73</label>
<mixed-citation publication-type="journal">
<name>
<surname>Godøy</surname>
<given-names>RI</given-names>
</name>
.
<article-title>Motor-mimetic music cognition</article-title>
.
<source>Leonardo</source>
.
<year>2003</year>
;
<volume>36</volume>
(
<issue>4</issue>
):
<fpage>317</fpage>
<lpage>9</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0167643.ref074">
<label>74</label>
<mixed-citation publication-type="journal">
<name>
<surname>Godøy</surname>
<given-names>RI</given-names>
</name>
,
<name>
<surname>Jensenius</surname>
<given-names>AR</given-names>
</name>
,
<name>
<surname>Nymoen</surname>
<given-names>K</given-names>
</name>
.
<article-title>Chunking in music by coarticulation</article-title>
.
<source>Acta Acustica united with Acustica</source>
.
<year>2010</year>
;
<volume>96</volume>
(
<issue>4</issue>
):
<fpage>690</fpage>
<lpage>700</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0167643.ref075">
<label>75</label>
<mixed-citation publication-type="journal">
<name>
<surname>Grossberg</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Myers</surname>
<given-names>CW</given-names>
</name>
.
<article-title>The resonant dynamics of speech perception: interword integration and duration-dependent backward effects</article-title>
.
<source>Psychological review</source>
.
<year>2000</year>
;
<volume>107</volume>
(
<issue>4</issue>
):
<fpage>735</fpage>
<pub-id pub-id-type="pmid">11089405</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0167643.ref076">
<label>76</label>
<mixed-citation publication-type="journal">
<name>
<surname>Bailes</surname>
<given-names>F</given-names>
</name>
,
<name>
<surname>Dean</surname>
<given-names>RT</given-names>
</name>
,
<name>
<surname>Pearce</surname>
<given-names>MT</given-names>
</name>
.
<article-title>Music Cognition as Mental Time Travel</article-title>
.
<source>Scientific reports</source>
.
<year>2013</year>
;
<volume>3</volume>
.</mixed-citation>
</ref>
<ref id="pone.0167643.ref077">
<label>77</label>
<mixed-citation publication-type="journal">
<name>
<surname>Godøy</surname>
<given-names>RI</given-names>
</name>
.
<article-title>Gestural-Sonorous Objects: embodied extensions of Schaeffer's conceptual apparatus</article-title>
.
<source>Organised Sound</source>
.
<year>2006</year>
;
<volume>11</volume>
(
<issue>02</issue>
):
<fpage>149</fpage>
<lpage>57</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0167643.ref078">
<label>78</label>
<mixed-citation publication-type="journal">
<name>
<surname>Dean</surname>
<given-names>RT</given-names>
</name>
,
<name>
<surname>Bailes</surname>
<given-names>F</given-names>
</name>
.
<article-title>Modeling perceptions of valence in diverse music: Roles of acoustic features, agency and individual variation</article-title>
.
<source>Music Perception</source>
.
<year>2016</year>
;
<volume>34</volume>
:
<fpage>104</fpage>
<lpage>17</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0167643.ref079">
<label>79</label>
<mixed-citation publication-type="journal">
<name>
<surname>Hutchison</surname>
<given-names>JL</given-names>
</name>
,
<name>
<surname>Hubbard</surname>
<given-names>TL</given-names>
</name>
,
<name>
<surname>Hubbard</surname>
<given-names>NA</given-names>
</name>
,
<name>
<surname>Brigante</surname>
<given-names>R</given-names>
</name>
,
<name>
<surname>Rypma</surname>
<given-names>B</given-names>
</name>
.
<article-title>Minding the gap: An experimental assessment of musical segmentation models</article-title>
.
<source>Psychomusicology: Music, Mind, and Brain</source>
.
<year>2015</year>
;
<volume>25</volume>
(
<issue>2</issue>
):
<fpage>103</fpage>
.</mixed-citation>
</ref>
<ref id="pone.0167643.ref080">
<label>80</label>
<mixed-citation publication-type="journal">
<name>
<surname>Cunillera</surname>
<given-names>T</given-names>
</name>
,
<name>
<surname>Laine</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Antoni</surname>
<given-names>R-F</given-names>
</name>
.
<article-title>Headstart for speech segmentation: a neural signature for the anchor word effect</article-title>
.
<source>Neuropsychologia</source>
.
<year>2016</year>
.</mixed-citation>
</ref>
<ref id="pone.0167643.ref081">
<label>81</label>
<mixed-citation publication-type="journal">
<name>
<surname>Caclin</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>McAdams</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Smith</surname>
<given-names>BK</given-names>
</name>
,
<name>
<surname>Winsberg</surname>
<given-names>S</given-names>
</name>
.
<article-title>Acoustic correlates of timbre space dimensions: A confirmatory study using synthetic tones</article-title>
.
<source>The Journal of the Acoustical Society of America</source>
.
<year>2005</year>
;
<volume>118</volume>
(
<issue>1</issue>
):
<fpage>471</fpage>
<lpage>82</lpage>
.
<pub-id pub-id-type="pmid">16119366</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0167643.ref082">
<label>82</label>
<mixed-citation publication-type="journal">
<name>
<surname>Olsen</surname>
<given-names>KN</given-names>
</name>
,
<name>
<surname>Stevens</surname>
<given-names>CJ</given-names>
</name>
.
<article-title>Psychophysiological response to acoustic intensity change in a musical chord</article-title>
.
<source>Journal of Psychophysiology</source>
.
<year>2013</year>
;
<volume>27</volume>
:
<fpage>16</fpage>
<lpage>26</lpage>
.</mixed-citation>
</ref>
</ref-list>
</back>
</pmc>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Wicri/Musique/explor/XenakisV1/Data/Pmc/Corpus
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000011 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Corpus/biblio.hfd -nk 000011 | SxmlIndent | more

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Wicri/Musique
   |area=    XenakisV1
   |flux=    Pmc
   |étape=   Corpus
   |type=    RBID
   |clé=     PMC:5172564
   |texte=   What Constitutes a Phrase in Sound-Based Music? A Mixed-Methods Investigation of Perception and Acoustics
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Corpus/RBID.i   -Sk "pubmed:27997625" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Corpus/biblio.hfd   \
       | NlmPubMed2Wicri -a XenakisV1 

Wicri

This area was generated with Dilib version V0.6.33.
Data generation: Thu Nov 8 16:12:13 2018. Site generation: Wed Mar 6 22:10:31 2024