Opera exploration server

Warning: this site is under development!
Warning: this site is generated by computational means from raw corpora.
The information has therefore not been validated.

The effect of background music on episodic memory and autonomic responses: listening to emotionally touching music enhances facial memory capacity

Internal identifier: 000026 (Pmc/Checkpoint); previous: 000025; next: 000027

Authors: C. A. Alice Mado Proverbio [Italy]; Valentina Lozano Nasi [Italy]; Laura Alessandra Arcari [Italy]; Francesco De Benedetto [Italy]; Matteo Guardamagna [Italy]; Martina Gazzola [Italy]; Alberto Zani

Source:

RBID: PMC:4606564

Abstract

The aim of this study was to investigate how background auditory processing can affect other perceptual and cognitive processes as a function of stimulus content, style and emotional nature. Previous studies have offered contrasting evidence, and it has been recently shown that listening to music negatively affected concurrent mental processing in the elderly but not in young adults. To further investigate this matter, the effect of listening to music vs. listening to the sound of rain or silence was examined by administering an old/new face memory task (involving 448 unknown faces) to a group of 54 non-musician university students. Heart rate and diastolic and systolic blood pressure were measured during an explicit face study session that was followed by a memory test. The results indicated that more efficient and faster recall of faces occurred under conditions of silence or when participants were listening to emotionally touching music. Whereas auditory background (e.g., rain or joyful music) interfered with memory encoding, listening to emotionally touching music improved memory and significantly increased heart rate. It is hypothesized that touching music is able to modify the visual perception of faces by binding facial properties with auditory and emotionally charged information (music), which may therefore result in deeper memory encoding.


URL:
DOI: 10.1038/srep15219
PubMed: 26469712
PubMed Central: 4606564



The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">The effect of background music on episodic memory and autonomic responses: listening to emotionally touching music enhances facial memory capacity</title>
<author>
<name sortKey="Mado Proverbio, C A Alice" sort="Mado Proverbio, C A Alice" uniqKey="Mado Proverbio C" first="C. A. Alice" last="Mado Proverbio">C. A. Alice Mado Proverbio</name>
<affiliation wicri:level="1">
<nlm:aff id="a1">
<institution>Milan-Mi Center for Neuroscience, Dept. of Psychology, University of Milano-Bicocca</institution>
Piazza dell’Ateneo Nuovo 1 Milan, 20126,
<country>Italy</country>
</nlm:aff>
<country xml:lang="fr">Italie</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Lozano Nasi, Valentina" sort="Lozano Nasi, Valentina" uniqKey="Lozano Nasi V" first="Valentina" last="Lozano Nasi">Valentina Lozano Nasi</name>
<affiliation wicri:level="1">
<nlm:aff id="a1">
<institution>Milan-Mi Center for Neuroscience, Dept. of Psychology, University of Milano-Bicocca</institution>
Piazza dell’Ateneo Nuovo 1 Milan, 20126,
<country>Italy</country>
</nlm:aff>
<country xml:lang="fr">Italie</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Alessandra Arcari, Laura" sort="Alessandra Arcari, Laura" uniqKey="Alessandra Arcari L" first="Laura" last="Alessandra Arcari">Laura Alessandra Arcari</name>
<affiliation wicri:level="1">
<nlm:aff id="a1">
<institution>Milan-Mi Center for Neuroscience, Dept. of Psychology, University of Milano-Bicocca</institution>
Piazza dell’Ateneo Nuovo 1 Milan, 20126,
<country>Italy</country>
</nlm:aff>
<country xml:lang="fr">Italie</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="De Benedetto, Francesco" sort="De Benedetto, Francesco" uniqKey="De Benedetto F" first="Francesco" last="De Benedetto">Francesco De Benedetto</name>
<affiliation wicri:level="1">
<nlm:aff id="a1">
<institution>Milan-Mi Center for Neuroscience, Dept. of Psychology, University of Milano-Bicocca</institution>
Piazza dell’Ateneo Nuovo 1 Milan, 20126,
<country>Italy</country>
</nlm:aff>
<country xml:lang="fr">Italie</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Guardamagna, Matteo" sort="Guardamagna, Matteo" uniqKey="Guardamagna M" first="Matteo" last="Guardamagna">Matteo Guardamagna</name>
<affiliation wicri:level="1">
<nlm:aff id="a1">
<institution>Milan-Mi Center for Neuroscience, Dept. of Psychology, University of Milano-Bicocca</institution>
Piazza dell’Ateneo Nuovo 1 Milan, 20126,
<country>Italy</country>
</nlm:aff>
<country xml:lang="fr">Italie</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Gazzola, Martina" sort="Gazzola, Martina" uniqKey="Gazzola M" first="Martina" last="Gazzola">Martina Gazzola</name>
<affiliation wicri:level="1">
<nlm:aff id="a1">
<institution>Milan-Mi Center for Neuroscience, Dept. of Psychology, University of Milano-Bicocca</institution>
Piazza dell’Ateneo Nuovo 1 Milan, 20126,
<country>Italy</country>
</nlm:aff>
<country xml:lang="fr">Italie</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Zani, Alberto" sort="Zani, Alberto" uniqKey="Zani A" first="Alberto" last="Zani">Alberto Zani</name>
<affiliation>
<nlm:aff id="a2">
<institution>IBFM-CNR, Via Fratelli Cervi</institution>
,
<country>Milan,</country>
20090,
<country>Italy</country>
</nlm:aff>
<wicri:noCountry code="nlm country">,</wicri:noCountry>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">26469712</idno>
<idno type="pmc">4606564</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4606564</idno>
<idno type="RBID">PMC:4606564</idno>
<idno type="doi">10.1038/srep15219</idno>
<date when="2015">2015</date>
<idno type="wicri:Area/Pmc/Corpus">000072</idno>
<idno type="wicri:Area/Pmc/Curation">000072</idno>
<idno type="wicri:Area/Pmc/Checkpoint">000026</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">The effect of background music on episodic memory and autonomic responses: listening to emotionally touching music enhances facial memory capacity</title>
<author>
<name sortKey="Mado Proverbio, C A Alice" sort="Mado Proverbio, C A Alice" uniqKey="Mado Proverbio C" first="C. A. Alice" last="Mado Proverbio">C. A. Alice Mado Proverbio</name>
<affiliation wicri:level="1">
<nlm:aff id="a1">
<institution>Milan-Mi Center for Neuroscience, Dept. of Psychology, University of Milano-Bicocca</institution>
Piazza dell’Ateneo Nuovo 1 Milan, 20126,
<country>Italy</country>
</nlm:aff>
<country xml:lang="fr">Italie</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Lozano Nasi, Valentina" sort="Lozano Nasi, Valentina" uniqKey="Lozano Nasi V" first="Valentina" last="Lozano Nasi">Valentina Lozano Nasi</name>
<affiliation wicri:level="1">
<nlm:aff id="a1">
<institution>Milan-Mi Center for Neuroscience, Dept. of Psychology, University of Milano-Bicocca</institution>
Piazza dell’Ateneo Nuovo 1 Milan, 20126,
<country>Italy</country>
</nlm:aff>
<country xml:lang="fr">Italie</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Alessandra Arcari, Laura" sort="Alessandra Arcari, Laura" uniqKey="Alessandra Arcari L" first="Laura" last="Alessandra Arcari">Laura Alessandra Arcari</name>
<affiliation wicri:level="1">
<nlm:aff id="a1">
<institution>Milan-Mi Center for Neuroscience, Dept. of Psychology, University of Milano-Bicocca</institution>
Piazza dell’Ateneo Nuovo 1 Milan, 20126,
<country>Italy</country>
</nlm:aff>
<country xml:lang="fr">Italie</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="De Benedetto, Francesco" sort="De Benedetto, Francesco" uniqKey="De Benedetto F" first="Francesco" last="De Benedetto">Francesco De Benedetto</name>
<affiliation wicri:level="1">
<nlm:aff id="a1">
<institution>Milan-Mi Center for Neuroscience, Dept. of Psychology, University of Milano-Bicocca</institution>
Piazza dell’Ateneo Nuovo 1 Milan, 20126,
<country>Italy</country>
</nlm:aff>
<country xml:lang="fr">Italie</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Guardamagna, Matteo" sort="Guardamagna, Matteo" uniqKey="Guardamagna M" first="Matteo" last="Guardamagna">Matteo Guardamagna</name>
<affiliation wicri:level="1">
<nlm:aff id="a1">
<institution>Milan-Mi Center for Neuroscience, Dept. of Psychology, University of Milano-Bicocca</institution>
Piazza dell’Ateneo Nuovo 1 Milan, 20126,
<country>Italy</country>
</nlm:aff>
<country xml:lang="fr">Italie</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Gazzola, Martina" sort="Gazzola, Martina" uniqKey="Gazzola M" first="Martina" last="Gazzola">Martina Gazzola</name>
<affiliation wicri:level="1">
<nlm:aff id="a1">
<institution>Milan-Mi Center for Neuroscience, Dept. of Psychology, University of Milano-Bicocca</institution>
Piazza dell’Ateneo Nuovo 1 Milan, 20126,
<country>Italy</country>
</nlm:aff>
<country xml:lang="fr">Italie</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Zani, Alberto" sort="Zani, Alberto" uniqKey="Zani A" first="Alberto" last="Zani">Alberto Zani</name>
<affiliation>
<nlm:aff id="a2">
<institution>IBFM-CNR, Via Fratelli Cervi</institution>
,
<country>Milan,</country>
20090,
<country>Italy</country>
</nlm:aff>
<wicri:noCountry code="nlm country">,</wicri:noCountry>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Scientific Reports</title>
<idno type="e-ISSN">2045-2322</idno>
<imprint>
<date when="2015">2015</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>The aim of this study was to investigate how background auditory processing can affect other perceptual and cognitive processes as a function of stimulus content, style and emotional nature. Previous studies have offered contrasting evidence, and it has been recently shown that listening to music negatively affected concurrent mental processing in the elderly but not in young adults. To further investigate this matter, the effect of listening to music vs. listening to the sound of rain or silence was examined by administering an old/new face memory task (involving 448 unknown faces) to a group of 54 non-musician university students. Heart rate and diastolic and systolic blood pressure were measured during an explicit face study session that was followed by a memory test. The results indicated that more efficient and faster recall of faces occurred under conditions of silence or when participants were listening to emotionally touching music. Whereas auditory background (e.g., rain or joyful music) interfered with memory encoding, listening to emotionally touching music improved memory and significantly increased heart rate. It is hypothesized that touching music is able to modify the visual perception of faces by binding facial properties with auditory and emotionally charged information (music), which may therefore result in deeper memory encoding.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="K Mpfe, J" uniqKey="K Mpfe J">J. Kämpfe</name>
</author>
<author>
<name sortKey="Sedlmeier, P" uniqKey="Sedlmeier P">P. Sedlmeier</name>
</author>
<author>
<name sortKey="Renkewitz, F" uniqKey="Renkewitz F">F. Renkewitz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dalton, B H" uniqKey="Dalton B">B. H. Dalton</name>
</author>
<author>
<name sortKey="Behm, D G" uniqKey="Behm D">D. G. Behm</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kahneman, D" uniqKey="Kahneman D">D. Kahneman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Furnham, A" uniqKey="Furnham A">A. Furnham</name>
</author>
<author>
<name sortKey="Allass, K" uniqKey="Allass K">K. Allass</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Thompson, W F" uniqKey="Thompson W">W. F. Thompson</name>
</author>
<author>
<name sortKey="Schellenberg, E G" uniqKey="Schellenberg E">E. G. Schellenberg</name>
</author>
<author>
<name sortKey="Husain, G" uniqKey="Husain G">G. Husain</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cassidy, G" uniqKey="Cassidy G">G. Cassidy</name>
</author>
<author>
<name sortKey="Macdonald, R A R" uniqKey="Macdonald R">R. A. R. MacDonald</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ferreri, L" uniqKey="Ferreri L">L. Ferreri</name>
</author>
<author>
<name sortKey="Aucouturier, J J" uniqKey="Aucouturier J">J. J. Aucouturier</name>
</author>
<author>
<name sortKey="Muthalib, M" uniqKey="Muthalib M">M. Muthalib</name>
</author>
<author>
<name sortKey="Bigand, E" uniqKey="Bigand E">E. Bigand</name>
</author>
<author>
<name sortKey="Bugaiska, A" uniqKey="Bugaiska A">A. Bugaiska</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="El Haj, M" uniqKey="El Haj M">M. El Haj</name>
</author>
<author>
<name sortKey="Postal, V" uniqKey="Postal V">V. Postal</name>
</author>
<author>
<name sortKey="Allain, P" uniqKey="Allain P">P. Allain</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Angel, L A" uniqKey="Angel L">L. A. Angel</name>
</author>
<author>
<name sortKey="Polzella, D J" uniqKey="Polzella D">D. J. Polzella</name>
</author>
<author>
<name sortKey="Elvers, G C" uniqKey="Elvers G">G. C. Elvers</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hallam, S" uniqKey="Hallam S">S. Hallam</name>
</author>
<author>
<name sortKey="Price, J" uniqKey="Price J">J. Price</name>
</author>
<author>
<name sortKey="Katsarou, G" uniqKey="Katsarou G">G. Katsarou</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Liu, B" uniqKey="Liu B">B. Liu</name>
</author>
<author>
<name sortKey="Huang, Y" uniqKey="Huang Y">Y. Huang</name>
</author>
<author>
<name sortKey="Wang, Z" uniqKey="Wang Z">Z. Wang</name>
</author>
<author>
<name sortKey="Wu, G" uniqKey="Wu G">G. Wu</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kang, H J" uniqKey="Kang H">H. J. Kang</name>
</author>
<author>
<name sortKey="Williamson, J W" uniqKey="Williamson J">J. W. Williamson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Oakes, S" uniqKey="Oakes S">S. Oakes</name>
</author>
<author>
<name sortKey="North, A C" uniqKey="North A">A. C. North</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Woo, E W" uniqKey="Woo E">E. W. Woo</name>
</author>
<author>
<name sortKey="Kanachi, M" uniqKey="Kanachi M">M. Kanachi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Iwanaga, M" uniqKey="Iwanaga M">M. Iwanaga</name>
</author>
<author>
<name sortKey="Ito, T" uniqKey="Ito T">T. Ito</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Furnham, A" uniqKey="Furnham A">A. Furnham</name>
</author>
<author>
<name sortKey="Allass, K" uniqKey="Allass K">K. Allass</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bloor, A" uniqKey="Bloor A">A. Bloor</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Avila, C" uniqKey="Avila C">C. Avila</name>
</author>
<author>
<name sortKey="Furnham, A" uniqKey="Furnham A">A. Furnham</name>
</author>
<author>
<name sortKey="Mcclelland, A" uniqKey="Mcclelland A">A. McClelland</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Miller, L K" uniqKey="Miller L">L. K. Miller</name>
</author>
<author>
<name sortKey="Schyb, M" uniqKey="Schyb M">M. Schyb</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Moreno, R" uniqKey="Moreno R">R. Moreno</name>
</author>
<author>
<name sortKey="Mayer, R E" uniqKey="Mayer R">R. E. Mayer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Miskovic, D" uniqKey="Miskovic D">D. Miskovic</name>
</author>
<author>
<name sortKey="Rosenthal, R" uniqKey="Rosenthal R">R. Rosenthal</name>
</author>
<author>
<name sortKey="Zingg, U" uniqKey="Zingg U">U. Zingg</name>
</author>
<author>
<name sortKey="Oertli, D" uniqKey="Oertli D">D. Oertli</name>
</author>
<author>
<name sortKey="Metzger, U" uniqKey="Metzger U">U. Metzger</name>
</author>
<author>
<name sortKey="Jancke, L" uniqKey="Jancke L">L. Jancke</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kallinen, K" uniqKey="Kallinen K">K. Kallinen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Thompson, W F" uniqKey="Thompson W">W. F. Thompson</name>
</author>
<author>
<name sortKey="Schellenberg, E G" uniqKey="Schellenberg E">E. G. Schellenberg</name>
</author>
<author>
<name sortKey="Letnic, A K" uniqKey="Letnic A">A. K. Letnic</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Madsen, C K" uniqKey="Madsen C">C. K. Madsen</name>
</author>
<author>
<name sortKey="Madsen, C K" uniqKey="Madsen C">C. K. Madsen</name>
</author>
<author>
<name sortKey="Prickett, C A" uniqKey="Prickett C">C. A. Prickett</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Parente, J A" uniqKey="Parente J">J. A. Parente</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bottiroli, S" uniqKey="Bottiroli S">S. Bottiroli</name>
</author>
<author>
<name sortKey="Rosi, A" uniqKey="Rosi A">A. Rosi</name>
</author>
<author>
<name sortKey="Russo, R" uniqKey="Russo R">R. Russo</name>
</author>
<author>
<name sortKey="Vecchi, T" uniqKey="Vecchi T">T. Vecchi</name>
</author>
<author>
<name sortKey="Cavallini, E" uniqKey="Cavallini E">E. Cavallini</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Reaves, S" uniqKey="Reaves S">S. Reaves</name>
</author>
<author>
<name sortKey="Graham, B" uniqKey="Graham B">B. Graham</name>
</author>
<author>
<name sortKey="Grahn, J" uniqKey="Grahn J">J. Grahn</name>
</author>
<author>
<name sortKey="Rabannifard, P" uniqKey="Rabannifard P">P. Rabannifard</name>
</author>
<author>
<name sortKey="Duarte, A" uniqKey="Duarte A">A. Duarte</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mackrill, J" uniqKey="Mackrill J">J. Mackrill</name>
</author>
<author>
<name sortKey="Jennings, P" uniqKey="Jennings P">P. Jennings</name>
</author>
<author>
<name sortKey="Cain, R" uniqKey="Cain R">R. Cain</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Garza Villarreal, E A" uniqKey="Garza Villarreal E">E. A. Garza Villarreal</name>
</author>
<author>
<name sortKey="Brattico, E" uniqKey="Brattico E">E. Brattico</name>
</author>
<author>
<name sortKey="Vase, L" uniqKey="Vase L">L. Vase</name>
</author>
<author>
<name sortKey=" Stergaard, L" uniqKey=" Stergaard L">L. Østergaard</name>
</author>
<author>
<name sortKey="Vuust, P" uniqKey="Vuust P">P. Vuust</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Steele, K M" uniqKey="Steele K">K. M. Steele</name>
</author>
<author>
<name sortKey="Ball, T N" uniqKey="Ball T">T. N. Ball</name>
</author>
<author>
<name sortKey="Runk, R" uniqKey="Runk R">R. Runk</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Radstaak, M" uniqKey="Radstaak M">M. Radstaak</name>
</author>
<author>
<name sortKey="Geurts, S A" uniqKey="Geurts S">S. A. Geurts</name>
</author>
<author>
<name sortKey="Brosschot, J F" uniqKey="Brosschot J">J. F. Brosschot</name>
</author>
<author>
<name sortKey="Kompier, M A" uniqKey="Kompier M">M. A. Kompier</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hilz, M J" uniqKey="Hilz M">M. J. Hilz</name>
</author>
<author>
<name sortKey="Stadler, P" uniqKey="Stadler P">P. Stadler</name>
</author>
<author>
<name sortKey="Gryc, T" uniqKey="Gryc T">T. Gryc</name>
</author>
<author>
<name sortKey="Nath, J" uniqKey="Nath J">J. Nath</name>
</author>
<author>
<name sortKey="Habib Romstoeck, L" uniqKey="Habib Romstoeck L">L. Habib-Romstoeck</name>
</author>
<author>
<name sortKey="Stemper, B" uniqKey="Stemper B">B. Stemper</name>
</author>
<author>
<name sortKey="Buechner, S" uniqKey="Buechner S">S. Buechner</name>
</author>
<author>
<name sortKey="Wong, S" uniqKey="Wong S">S. Wong</name>
</author>
<author>
<name sortKey="Koehn, J" uniqKey="Koehn J">J. Koehn</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tan, F" uniqKey="Tan F">F. Tan</name>
</author>
<author>
<name sortKey="Tengah, A" uniqKey="Tengah A">A. Tengah</name>
</author>
<author>
<name sortKey="Nee, L Y" uniqKey="Nee L">L. Y. Nee</name>
</author>
<author>
<name sortKey="Fredericks, S" uniqKey="Fredericks S">S. Fredericks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kraus, N" uniqKey="Kraus N">N. Kraus</name>
</author>
<author>
<name sortKey="Chandrasekaran, B" uniqKey="Chandrasekaran B">B. Chandrasekaran</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Spalek, K" uniqKey="Spalek K">K. Spalek</name>
</author>
<author>
<name sortKey="Fastenrath, M" uniqKey="Fastenrath M">M. Fastenrath</name>
</author>
<author>
<name sortKey="Ackermann, S" uniqKey="Ackermann S">S. Ackermann</name>
</author>
<author>
<name sortKey="Auschra, B" uniqKey="Auschra B">B. Auschra</name>
</author>
<author>
<name sortKey="Coynel, D" uniqKey="Coynel D">D. Coynel</name>
</author>
<author>
<name sortKey="Frey, J" uniqKey="Frey J">J. Frey</name>
</author>
<author>
<name sortKey="Gschwind, L" uniqKey="Gschwind L">L. Gschwind</name>
</author>
<author>
<name sortKey="Hartmann, F" uniqKey="Hartmann F">F. Hartmann</name>
</author>
<author>
<name sortKey="Van Der Maarel, N" uniqKey="Van Der Maarel N">N. van der Maarel</name>
</author>
<author>
<name sortKey="Papassotiropoulos, A" uniqKey="Papassotiropoulos A">A. Papassotiropoulos</name>
</author>
<author>
<name sortKey="De Quervain, D" uniqKey="De Quervain D">D. de Quervain</name>
</author>
<author>
<name sortKey="Milnik, A" uniqKey="Milnik A">A. Milnik</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nater, U M" uniqKey="Nater U">U. M. Nater</name>
</author>
<author>
<name sortKey="Abbruzzese, E" uniqKey="Abbruzzese E">E. Abbruzzese</name>
</author>
<author>
<name sortKey="Krebs, M" uniqKey="Krebs M">M. Krebs</name>
</author>
<author>
<name sortKey="Ehlert, U" uniqKey="Ehlert U">U. Ehlert,</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Proverbio, A M" uniqKey="Proverbio A">A. M. Proverbio</name>
</author>
<author>
<name sortKey="Lozano Nasi, V" uniqKey="Lozano Nasi V">V. Lozano Nasi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lang, P J" uniqKey="Lang P">P. J. Lang</name>
</author>
<author>
<name sortKey="Bradley, M M" uniqKey="Bradley M">M. M. Bradley</name>
</author>
<author>
<name sortKey="Cuthbert, B N" uniqKey="Cuthbert B">B. N. Cuthbert</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Proverbio, A M" uniqKey="Proverbio A">A. M. Proverbio</name>
</author>
<author>
<name sortKey="La Mastra, F" uniqKey="La Mastra F">F. La Mastra</name>
</author>
<author>
<name sortKey="Adorni, R" uniqKey="Adorni R">R. Adorni</name>
</author>
<author>
<name sortKey="Zani, A" uniqKey="Zani A">A. Zani</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Yovel, G" uniqKey="Yovel G">G. Yovel</name>
</author>
<author>
<name sortKey="Paller, K A" uniqKey="Paller K">K. A. Paller</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Curran, T" uniqKey="Curran T">T. Curran</name>
</author>
<author>
<name sortKey="Hancock, J" uniqKey="Hancock J">J. Hancock</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="J Ncke, L" uniqKey="J Ncke L">L. Jäncke</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gold, B P" uniqKey="Gold B">B. P. Gold</name>
</author>
<author>
<name sortKey="Frankm, M J" uniqKey="Frankm M">M. J. Frankm</name>
</author>
<author>
<name sortKey="Bogertm, B" uniqKey="Bogertm B">B. Bogertm</name>
</author>
<author>
<name sortKey="Brattico, E" uniqKey="Brattico E">E. Brattico</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cohen, A J" uniqKey="Cohen A">A. J. Cohen</name>
</author>
<author>
<name sortKey="Juslin, P N" uniqKey="Juslin P">P. N. Juslin</name>
</author>
<author>
<name sortKey="Sloboda, J A" uniqKey="Sloboda J">J. A. Sloboda</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Iwamiya, S" uniqKey="Iwamiya S">S. Iwamiya</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sloboda, I A" uniqKey="Sloboda I">I. A. Sloboda</name>
</author>
<author>
<name sortKey="Iones, M R" uniqKey="Iones M">M. R. Iones</name>
</author>
<author>
<name sortKey="Holleran, S" uniqKey="Holleran S">S. Holleran</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rigg, M G" uniqKey="Rigg M">M. G. Rigg</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Thayer, J F" uniqKey="Thayer J">J. F. Thayer</name>
</author>
<author>
<name sortKey="Levenson, R" uniqKey="Levenson R">R. Levenson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kalinak, K" uniqKey="Kalinak K">K. Kalinak</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Baumgartner, T" uniqKey="Baumgartner T">T. Baumgartner</name>
</author>
<author>
<name sortKey="Lutz, K" uniqKey="Lutz K">K. Lutz</name>
</author>
<author>
<name sortKey="Schmidt, C F" uniqKey="Schmidt C">C. F. Schmidt</name>
</author>
<author>
<name sortKey="J Ncke, L" uniqKey="J Ncke L">L. Jäncke</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gerdes, A B M" uniqKey="Gerdes A">A. B. M. Gerdes</name>
</author>
<author>
<name sortKey="Wieser, M J" uniqKey="Wieser M">M. J. Wieser</name>
</author>
<author>
<name sortKey="Bublatzky, F" uniqKey="Bublatzky F">F. Bublatzky</name>
</author>
<author>
<name sortKey="Kusay, A" uniqKey="Kusay A">A. Kusay</name>
</author>
<author>
<name sortKey="Plichta, M M" uniqKey="Plichta M">M. M. Plichta</name>
</author>
<author>
<name sortKey="Alpers, G W" uniqKey="Alpers G">G. W. Alpers</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jomori, I" uniqKey="Jomori I">I. Jomori</name>
</author>
<author>
<name sortKey="Hoshiyama, M" uniqKey="Hoshiyama M">M. Hoshiyama</name>
</author>
<author>
<name sortKey="Uemura, J" uniqKey="Uemura J">J. Uemura</name>
</author>
<author>
<name sortKey="Nakagawa, Y" uniqKey="Nakagawa Y">Y. Nakagawa</name>
</author>
<author>
<name sortKey="Hoshino, A" uniqKey="Hoshino A">A. Hoshino</name>
</author>
<author>
<name sortKey="Iwamoto, Y" uniqKey="Iwamoto Y">Y. Iwamoto</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hanser, W E" uniqKey="Hanser W">W. E. Hanser</name>
</author>
<author>
<name sortKey="Mark, R E" uniqKey="Mark R">R. E. Mark</name>
</author>
<author>
<name sortKey="Zijlstra, W P" uniqKey="Zijlstra W">W. P. Zijlstra</name>
</author>
<author>
<name sortKey="Vingerhoets, Ad J J M" uniqKey="Vingerhoets A">Ad J. J. M. Vingerhoets</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jolij, J" uniqKey="Jolij J">J. Jolij</name>
</author>
<author>
<name sortKey="Meurs, M" uniqKey="Meurs M">M. Meurs</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rule, N O" uniqKey="Rule N">N. O. Rule</name>
</author>
<author>
<name sortKey="Slepian, M L" uniqKey="Slepian M">M. L. Slepian</name>
</author>
<author>
<name sortKey="Ambady, N" uniqKey="Ambady N">N. Ambady</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bell, R" uniqKey="Bell R">R. Bell</name>
</author>
<author>
<name sortKey="Buchner, A" uniqKey="Buchner A">A. Buchner</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Johansson, M" uniqKey="Johansson M">M. Johansson</name>
</author>
<author>
<name sortKey="Mecklinger, A" uniqKey="Mecklinger A">A. Mecklinger</name>
</author>
<author>
<name sortKey="Treese, A C" uniqKey="Treese A">A. C. Treese</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Keightley, M L" uniqKey="Keightley M">M. L. Keightley</name>
</author>
<author>
<name sortKey="Chiew, K S" uniqKey="Chiew K">K. S. Chiew</name>
</author>
<author>
<name sortKey="Anderson, J A E" uniqKey="Anderson J">J. A. E. Anderson</name>
</author>
<author>
<name sortKey="Grady, C L" uniqKey="Grady C">C. L. Grady</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chen, H J" uniqKey="Chen H">H. J. Chen</name>
</author>
<author>
<name sortKey="Chen, T Y" uniqKey="Chen T">T. Y. Chen</name>
</author>
<author>
<name sortKey="Huang, C Y" uniqKey="Huang C">C. Y. Huang</name>
</author>
<author>
<name sortKey="Hsieh, Y M" uniqKey="Hsieh Y">Y. M. Hsieh</name>
</author>
<author>
<name sortKey="Lai, H L" uniqKey="Lai H">H. L. Lai</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tan, Y Z" uniqKey="Tan Y">Y. Z. Tan</name>
</author>
<author>
<name sortKey="Ozdemir, S" uniqKey="Ozdemir S">S. Ozdemir</name>
</author>
<author>
<name sortKey="Temiz, A" uniqKey="Temiz A">A. Temiz</name>
</author>
<author>
<name sortKey="Celik, F" uniqKey="Celik F">F. Celik</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tsuchiya, M" uniqKey="Tsuchiya M">M. Tsuchiya</name>
</author>
<author>
<name sortKey="Asada, A" uniqKey="Asada A">A. Asada</name>
</author>
<author>
<name sortKey="Ryo, K" uniqKey="Ryo K">K. Ryo</name>
</author>
<author>
<name sortKey="Noda, K" uniqKey="Noda K">K. Noda</name>
</author>
<author>
<name sortKey="Hashino, T" uniqKey="Hashino T">T. Hashino</name>
</author>
<author>
<name sortKey="Sato, Y" uniqKey="Sato Y">Y. Sato</name>
</author>
<author>
<name sortKey="Sato, E F" uniqKey="Sato E">E. F. Sato</name>
</author>
<author>
<name sortKey="Inoue, M" uniqKey="Inoue M">M. Inoue</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rauscher, F H" uniqKey="Rauscher F">F. H. Rauscher</name>
</author>
<author>
<name sortKey="Shaw, G L" uniqKey="Shaw G">G. L. Shaw</name>
</author>
<author>
<name sortKey="Ky, K N" uniqKey="Ky K">K. N. Ky</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Quarto, T" uniqKey="Quarto T">T. Quarto</name>
</author>
<author>
<name sortKey="Blasi, G" uniqKey="Blasi G">G. Blasi</name>
</author>
<author>
<name sortKey="Pallasen, K J" uniqKey="Pallasen K">K. J. Pallasen</name>
</author>
<author>
<name sortKey="Bertolino, A" uniqKey="Bertolino A">A. Bertolino</name>
</author>
<author>
<name sortKey="Brattico, E" uniqKey="Brattico E">E. Brattico,</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Etzel, J A" uniqKey="Etzel J">J. A. Etzel</name>
</author>
<author>
<name sortKey="Johnsen, E L" uniqKey="Johnsen E">E. L. Johnsen</name>
</author>
<author>
<name sortKey="Dickerson, J" uniqKey="Dickerson J">J. Dickerson</name>
</author>
<author>
<name sortKey="Tranel, D" uniqKey="Tranel D">D. Tranel</name>
</author>
<author>
<name sortKey="Adolphs, R" uniqKey="Adolphs R">R. Adolphs</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Khalfa, S" uniqKey="Khalfa S">S. Khalfa</name>
</author>
<author>
<name sortKey="Roy, M" uniqKey="Roy M">M. Roy</name>
</author>
<author>
<name sortKey="Rainville, P" uniqKey="Rainville P">P. Rainville</name>
</author>
<author>
<name sortKey="Dalla Bella, S" uniqKey="Dalla Bella S">S. Dalla Bella</name>
</author>
<author>
<name sortKey="Peretz, I" uniqKey="Peretz I">I. Peretz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Withvliet, C V O" uniqKey="Withvliet C">C. V. O. Withvliet</name>
</author>
<author>
<name sortKey="Vrana, S R" uniqKey="Vrana S">S. R. Vrana</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Sci Rep</journal-id>
<journal-id journal-id-type="iso-abbrev">Sci Rep</journal-id>
<journal-title-group>
<journal-title>Scientific Reports</journal-title>
</journal-title-group>
<issn pub-type="epub">2045-2322</issn>
<publisher>
<publisher-name>Nature Publishing Group</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">26469712</article-id>
<article-id pub-id-type="pmc">4606564</article-id>
<article-id pub-id-type="pii">srep15219</article-id>
<article-id pub-id-type="doi">10.1038/srep15219</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>The effect of background music on episodic memory and autonomic responses: listening to emotionally touching music enhances facial memory capacity</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Mado Proverbio</surname>
<given-names>C.A. Alice</given-names>
</name>
<xref ref-type="corresp" rid="c1">a</xref>
<xref ref-type="aff" rid="a1">1</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Lozano Nasi</surname>
<given-names>Valentina</given-names>
</name>
<xref ref-type="aff" rid="a1">1</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Alessandra Arcari</surname>
<given-names>Laura</given-names>
</name>
<xref ref-type="aff" rid="a1">1</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>De Benedetto</surname>
<given-names>Francesco</given-names>
</name>
<xref ref-type="aff" rid="a1">1</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Guardamagna</surname>
<given-names>Matteo</given-names>
</name>
<xref ref-type="aff" rid="a1">1</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Gazzola</surname>
<given-names>Martina</given-names>
</name>
<xref ref-type="aff" rid="a1">1</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Zani</surname>
<given-names>Alberto</given-names>
</name>
<xref ref-type="aff" rid="a2">2</xref>
</contrib>
<aff id="a1">
<label>1</label>
<institution>Milan-Mi Center for Neuroscience, Dept. of Psychology, University of Milano-Bicocca</institution>
Piazza dell’Ateneo Nuovo 1 Milan, 20126,
<country>Italy</country>
</aff>
<aff id="a2">
<label>2</label>
<institution>IBFM-CNR, Via Fratelli Cervi</institution>
,
<country>Milan,</country>
20090,
<country>Italy</country>
</aff>
</contrib-group>
<author-notes>
<corresp id="c1">
<label>a</label>
<email>mado.proverbio@unimib.it</email>
</corresp>
</author-notes>
<pub-date pub-type="epub">
<day>15</day>
<month>10</month>
<year>2015</year>
</pub-date>
<pub-date pub-type="collection">
<year>2015</year>
</pub-date>
<volume>5</volume>
<elocation-id>15219</elocation-id>
<history>
<date date-type="received">
<day>04</day>
<month>06</month>
<year>2015</year>
</date>
<date date-type="accepted">
<day>18</day>
<month>09</month>
<year>2015</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright © 2015, Macmillan Publishers Limited</copyright-statement>
<copyright-year>2015</copyright-year>
<copyright-holder>Macmillan Publishers Limited</copyright-holder>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/4.0/">
<pmc-comment>author-paid</pmc-comment>
<license-p>This work is licensed under a Creative Commons Attribution 4.0 International License. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in the credit line; if the material is not included under the Creative Commons license, users will need to obtain permission from the license holder to reproduce the material. To view a copy of this license, visit
<ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by/4.0/">http://creativecommons.org/licenses/by/4.0/</ext-link>
</license-p>
</license>
</permissions>
<abstract>
<p>The aim of this study was to investigate how background auditory processing can affect other perceptual and cognitive processes as a function of stimulus content, style and emotional nature. Previous studies have offered contrasting evidence, and it has been recently shown that listening to music negatively affected concurrent mental processing in the elderly but not in young adults. To further investigate this matter, the effect of listening to music vs. listening to the sound of rain or silence was examined by administering an old/new face memory task (involving 448 unknown faces) to a group of 54 non-musician university students. Heart rate and diastolic and systolic blood pressure were measured during an explicit face study session that was followed by a memory test. The results indicated that more efficient and faster recall of faces occurred under conditions of silence or when participants were listening to emotionally touching music. Whereas auditory background (e.g., rain or joyful music) interfered with memory encoding, listening to emotionally touching music improved memory and significantly increased heart rate. It is hypothesized that touching music is able to modify the visual perception of faces by binding facial properties with auditory and emotionally charged information (music), which may therefore result in deeper memory encoding.</p>
</abstract>
</article-meta>
</front>
<body>
<p>
<list id="l1" list-type="bullet">
<list-item>
<p>A total of 54 non-musicians listened to joyful or emotionally touching music, rain sounds, or silence while studying hundreds of faces</p>
</list-item>
<list-item>
<p>Heart rate and diastolic and systolic blood pressure were measured during face encoding</p>
</list-item>
<list-item>
<p>Except for emotionally touching music, auditory background interfered with memory recall</p>
</list-item>
<list-item>
<p>Touching music is able to bind visual properties with emotionally charged information (music), which results in enhanced memory</p>
</list-item>
</list>
</p>
<p>The effects of listening to background music on concurrent mental processing are controversial. Overall, although listening to music appears to have positive effects on emotions and especially on motor behavior (such as athletic performance), it appears to interfere with reading and memory tasks (see ref.
<xref ref-type="bibr" rid="b1">1</xref>
for a review). In automobile driving, listening to music appears to alleviate driver stress and reduce aggression; however, in conditions that require attention and mental concentration, driving performance is impaired
<xref ref-type="bibr" rid="b2">2</xref>
.</p>
<p>Two perspectives have been proposed to account for the effects of background music on cognitive processes: the
<italic>Cognitive-Capacity model</italic>
and the
<italic>Arousal-Mood hypothesis</italic>
. Kahneman’s capacity model
<xref ref-type="bibr" rid="b3">3</xref>
postulates that only a limited pool of resources is available for cognitive processing at any given moment. When concurrent tasks compete for limited resources and their combined demands exceed the available capacity,
<italic>capacity interference</italic>
occurs. Only a portion of the task information is processed and therefore performance deteriorates. The interference caused by task-irrelevant information (for example, listening to music) also depends on the complexity of the information that is being processed and on the workload that is required to process task-relevant information. Indeed, increasingly complex musical distractions may result in decreased cognitive performance
<xref ref-type="bibr" rid="b4">4</xref>
.</p>
<p>In contrast, the
<italic>Arousal-Mood</italic>
hypothesis posits that listening to music affects task performance by positively influencing arousal and mood
<xref ref-type="bibr" rid="b5">5</xref>
, which is a phenomenon that is also known as the Mozart effect
<xref ref-type="bibr" rid="b6">6</xref>
. This hypothesis has been supported by several studies that have investigated the effect of listening to background music on the performance of cognitive tasks. For example, improvements in verbal memory encoding
<xref ref-type="bibr" rid="b7">7</xref>
, autobiographical memory in Alzheimer patients
<xref ref-type="bibr" rid="b8">8</xref>
, verbal and visual processing speed
<xref ref-type="bibr" rid="b9">9</xref>
, arithmetic skill
<xref ref-type="bibr" rid="b10">10</xref>
, reading
<xref ref-type="bibr" rid="b11">11</xref>
, and second language learning
<xref ref-type="bibr" rid="b12">12</xref>
have been documented.</p>
<p>Conversely, reduced performance in the presence of background music has also been demonstrated (for example, see ref.
<xref ref-type="bibr" rid="b5">5</xref>
). As noted by Kämpfe and colleagues
<xref ref-type="bibr" rid="b1">1</xref>
in an excellent meta-analysis, background music may have a small but persistent negative effect on memory performance-related tasks, such as memorizing advertisements (e.g., ref.
<xref ref-type="bibr" rid="b13">13</xref>
), memorizing nonsense syllables or words (especially in the presence of loud music)
<xref ref-type="bibr" rid="b14">14</xref>
<xref ref-type="bibr" rid="b15">15</xref>
, remembering previously read texts and reading performance
<xref ref-type="bibr" rid="b16">16</xref>
. Listening to background music vs. silence has also been reported to interfere with many additional cognitive processes, including the ability to perform arithmetic
<xref ref-type="bibr" rid="b17">17</xref>
; performance on verbal, numerical and diagrammatic analysis tests
<xref ref-type="bibr" rid="b18">18</xref>
<xref ref-type="bibr" rid="b19">19</xref>
; multimedia learning
<xref ref-type="bibr" rid="b20">20</xref>
; the learning of new procedures
<xref ref-type="bibr" rid="b21">21</xref>
; reading
<xref ref-type="bibr" rid="b22">22</xref>
<xref ref-type="bibr" rid="b23">23</xref>
<xref ref-type="bibr" rid="b24">24</xref>
; and inhibition of performance of the Stroop task
<xref ref-type="bibr" rid="b25">25</xref>
.</p>
<p>Recently, Bottiroli
<italic>et al.</italic>
<xref ref-type="bibr" rid="b26">26</xref>
found that listening to Mozart (as compared to silence and white noise) improved declarative memory tasks in the elderly. They interpreted these data in the context of the so-called “arousal and mood hypothesis”
<xref ref-type="bibr" rid="b5">5</xref>
because performance systematically increased under conditions that induced positive mood and arousal. In contrast, Reaves
<italic>et al.</italic>
<xref ref-type="bibr" rid="b27">27</xref>
indicated that listening to music comes at a cost to concurrent cognitive functioning. In their study, both young and old adults listened to music or to silence while simultaneously studying face-name pairs. The participants’ abilities to remember the pairs were then tested as they listened to either the same or to different music. The results showed that older adults remembered 10% fewer names when listening to background music or to rain compared to silence. Therefore, although music may help to relax individuals who are trying to concentrate, it appears that it does not help them to remember what they are focusing on (new information), especially as they age.</p>
<p>Overall, the data are conflicting, although it appears that listening to background music interferes most with tasks that involve memory, especially for verbal items. To the best of our knowledge, the effect of listening to music on the ability to remember nonverbal (non-linguistic) items has not been previously investigated.</p>
<p>To further investigate this matter, in this study, the ability to remember human faces was evaluated in the context of different types of acoustic background, including silence (as a non-interfering control), the sounds of rain and storms (generally thought to have a relaxing effect
<xref ref-type="bibr" rid="b28">28</xref>
<xref ref-type="bibr" rid="b29">29</xref>
), and occidental music of different emotional content and style. A previous study
<xref ref-type="bibr" rid="b30">30</xref>
compared listening to silence with listening to music or rain during a backward digit span task and found no effect of auditory background on performance. To provide information about the effect of background noise on alertness and arousal levels, as well as possible autonomic correlates of emotional responses, heart rate and systolic and diastolic blood pressure were measured during the first part of the experiment (study phase). In this session, 300 unknown male and female faces were presented to participants in an explicit memory encoding situation. The study session was followed by a memory test that consisted of evaluating the recognition of 200 previously viewed faces that were randomly interspersed with 100 new faces, under conditions of silence. Hit and error percentages were quantified as functions of the experimental conditions (listening to emotionally touching music, to joyful music, to the sound of rain, to silence).</p>
<p>The aim of the present study was to determine the autonomic and cognitive correlates of non-verbal memory processing as a function of the nature of an auditory background (or the lack of it, i.e., silence). It was hypothesized that music would either increase arousal levels and therefore improve memory, as predicted by the Mozart effect
<xref ref-type="bibr" rid="b5">5</xref>
<xref ref-type="bibr" rid="b6">6</xref>
, or that it would interfere with memory by overloading attentional systems and therefore reduce subjects’ performance of the memory task
<xref ref-type="bibr" rid="b4">4</xref>
. The effect imparted by the emotional content of music was also explored by comparing the condition of listening to joyful music with that of listening to emotionally touching (sad) music. With respect to the effects of listening to music on autonomic parameters, it appears that although listening to music might reduce anxiety and induce mental relaxation under certain experimental conditions or clinical settings, it has little or no influence on hemodynamic parameters, except for a tendency to increase systolic blood pressure
<xref ref-type="bibr" rid="b31">31</xref>
<xref ref-type="bibr" rid="b32">32</xref>
<xref ref-type="bibr" rid="b33">33</xref>
.</p>
<p>In this study, autonomic measures were recorded in non-musician controls because we aimed to investigate the effect of auditory background on perceptual processing in individuals not particularly specialized in music processing. Indeed, it is known that musicians’ brains react differently from those of other individuals to auditory information of various kinds, including phonologic stimuli, noise and sounds
<xref ref-type="bibr" rid="b34">34</xref>
. Although not a specific research aim, the effect of participants’ sex on memory for faces (as a function of auditory background) was also investigated, since some literature has shown gender effects on episodic memory and musical emotion processing. Indeed, evidence has been provided of a greater female advantage in episodic memory tasks for emotional stimuli
<xref ref-type="bibr" rid="b35">35</xref>
and of females’ hypersensitivity to aversive musical stimuli
<xref ref-type="bibr" rid="b36">36</xref>
.</p>
<sec disp-level="1" sec-type="materials|methods">
<title>Materials and Methods</title>
<sec disp-level="2">
<title>Participants</title>
<p>Fifty-four healthy participants (27 males and 27 females), ranging in age between 18 and 28 years (mean age = 22.277 years), were included in this study. They were all right-handed with normal hearing and vision, and none had suffered from previous or current psychiatric or neurological diseases. All participants received academic credits for their participation and provided written consent. The experiment was performed in accordance with the relevant guidelines and regulations and was approved by the Ethical Committee of the University of Milano-Bicocca. The participants were blinded to the purpose of the experiment. None of the participants were musicians, and none had ever studied music, played a musical instrument, or pursued music as a hobby or specific interest. This information was specifically ascertained through the administration of a detailed questionnaire.</p>
</sec>
<sec disp-level="2">
<title>Stimuli</title>
<sec disp-level="3">
<title>Visual stimuli</title>
<p>A total of 448 colored pictures of anonymous human faces of women (N = 224) and men (N = 224) of different ages were used as visual stimuli. The faces were selected from available, open access, license-free databases. They were equally represented by sex and age ranges (children, adolescents, young adults [25–35 years], mature adults [35–60 years] and the elderly). The pictures only showed a given subject's face up to the base of the neck. The image size was 265 × 332 pixels. The characters wore various accessories (e.g., glasses, hats, earrings, etc.) and depicted various emotional expressions, ranging from joy to anger, all of which were matched across stimulus categories. The valence and arousal of each face were assessed in a preliminary validation study that was performed on 15 psychology university students
<xref ref-type="bibr" rid="b37">37</xref>
through the administration of a modified version of the
<italic>Self Assessment Manikin</italic>
(SAM)
<xref ref-type="bibr" rid="b38">38</xref>
, an affective rating system. In this system, a graphic figure depicting values along each of 2 dimensions of a continuously varying scale is used to indicate emotional reactions. Judges could select any of the 3 figures comprising each scale, which resulted in a 0–2 point rating scale for each dimension. Ratings were scored such that 2 represented a high rating on each dimension (i.e., high arousal, positivity), and 0 represented a low rating on each dimension (i.e., low arousal, negativity), with 1 representing an intermediate score. On the basis of the valence and arousal ratings that were obtained from the validation study
<xref ref-type="bibr" rid="b37">37</xref>
, faces were randomly assigned to various auditory background conditions, which were also matched for sex and age of the persons depicted, so that the average valence and arousal of faces did not differ across stimulation blocks. Stimuli were equiluminant, as ascertained by an ANOVA that was performed on individual luminance measurements that were obtained via a Minolta luminance meter.</p>
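<p>The equiluminance check described above can be illustrated with a short one-way ANOVA sketch. This is not the authors’ code (their statistical tooling is not reported beyond the Minolta luminance meter); the condition names and the simulated luminance values below are placeholders.</p>
<preformat>
# Illustrative only (assumed values): verify that luminance does not differ across
# the four auditory-background blocks with a one-way ANOVA, as reported above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# hypothetical per-image luminance readings (cd/m2), 75 faces per background block
luminance = {
    "touching_music": rng.normal(50, 5, 75),
    "joyful_music":   rng.normal(50, 5, 75),
    "rain":           rng.normal(50, 5, 75),
    "silence":        rng.normal(50, 5, 75),
}
f_stat, p_value = stats.f_oneway(*luminance.values())
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")  # a non-significant p is consistent with equiluminance
</preformat>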
</sec>
<sec disp-level="3">
<title>Auditory stimuli</title>
<p>Pieces of music were selected on the basis of a validation that was performed on 20 orchestra directors, composers or teachers at various Italian Conservatories (18 men and 2 women), whose ages ranged between 50 and 60 years and who freely provided lists of the most emotionally touching classic instrumental music pieces from a tonal and atonal repertoire. Tonal music was defined as a musical production having a tonal center around which melody and harmony are based, including monodic productions from the Middle Ages. Atonal music was defined as a musical production (usually dated after 1910) that did not have a tonal center or that did not use multiple tonal centers simultaneously. After an initial selection, movie soundtracks, opera pieces and highly popular pieces of music were discarded. The selected list was then re-presented to the judges, who were asked to choose the 3 most emotionally touching and 3 most joyful pieces (according to their own aesthetic preferences) for the tonal and atonal categories. The pieces with the highest ratings across the tonal and atonal repertoires were considered, and their similarities in structure, rhythm and ensemble complement/instrumentation were also taken into account. In the end, the pieces that were voted to be used as stimuli in this study included the following:
<list id="l2" list-type="bullet">
<list-item>
<p>
<italic>Pärt, Arvo - Cantus in memoriam Benjamin Britten</italic>
(atonal, touching)</p>
</list-item>
<list-item>
<p>
<italic>Hindemith, Paul - First movement from I Kammermusik</italic>
(atonal, joyful)</p>
</list-item>
<list-item>
<p>
<italic>Bach, Johann Sebastian - II movement from Concerto in D minor for 2 violins (BWV 1043)</italic>
(tonal, touching)</p>
</list-item>
<list-item>
<p>
<italic>Beethoven, Ludwig van - IV Movement of Symphony</italic>
(tonal, joyful)</p>
</list-item>
</list>
</p>
<p>Both rain sounds (obtained from an audio file downloaded from the Internet, titled “75 minutes of thunder and rain - relaxing noise for your ears”,
<ext-link ext-link-type="uri" xlink:href="https://www.youtube.com/watch?v=WvRv-243Cmk&spfreload=10">https://www.youtube.com/watch?v=WvRv-243Cmk&spfreload=10</ext-link>
) and all musical excerpts were cut into 1-minute-long pieces, matched for intensity by means of
<italic>MP3Gain</italic>
software (89.0 dB) and faded at the end (last second) via
<italic>Audacity</italic>
software, and were then converted into MP3 files. The modulation of tonality (used to provide variety) and its possible effect on autonomic parameters were not considered in this study.</p>
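<p>As an illustration of the audio preparation described above, the following sketch performs the same steps (60-second cut, rough loudness matching, 1-second fade-out, MP3 export) with the pydub library. The authors used MP3Gain and Audacity; pydub, the dBFS target and the file names here are assumptions, and matching dBFS only approximates MP3Gain’s 89.0 dB ReplayGain reference.</p>
<preformat>
# Minimal sketch (assumption, not the authors' pipeline, which used MP3Gain and Audacity):
# cut each excerpt to 60 s, roughly match loudness, fade the final second, export as MP3.
from pydub import AudioSegment

TARGET_DBFS = -20.0  # hypothetical target; MP3Gain's 89.0 dB reference is not a dBFS value

def prepare_excerpt(in_path: str, out_path: str) -> None:
    clip = AudioSegment.from_file(in_path)[:60_000]      # keep the first 60 seconds
    clip = clip.apply_gain(TARGET_DBFS - clip.dBFS)      # crude loudness matching
    clip = clip.fade_out(1_000)                          # 1-second fade at the end
    clip.export(out_path, format="mp3")

prepare_excerpt("bach_bwv1043_ii.wav", "bach_bwv1043_ii.mp3")  # hypothetical file names
</preformat>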
</sec>
</sec>
<sec disp-level="2">
<title>Procedure</title>
<p>The experiment consisted of two different sessions (see
<xref ref-type="fig" rid="f1">fig. 1</xref>
for a schematic of the paradigm). It was preceded by a training phase in which 56 unique pictures of women and men of various ages were presented in association with an auditory background. The subjects were instructed to pay attention to the faces that were presented in the 2 training sequences, which were followed by a short old/new discrimination task in which the responding hand alternated between left and right. The first training session consisted of presenting 20 different faces to subjects as they listened to jazz music from “Freedom: an Instrumental approach to Jazz music” (n°12,
<italic>Wet atmosphere</italic>
, Julian Carl, JC Records, 2014). In the second training session, in which the opposite responding hand was used, an additional 20 faces were presented with an auditory background of natural sounds (ocean waves). In both sessions, subjects wore headphones and wrist devices (on the left hand) that measured heart rate and blood pressure. The third training sequence consisted of the presentation of 8 old faces and 8 new faces that were randomly mixed. In this sequence, the subjects did not wear headphones or devices that measured autonomic responses. The auditory background was complete silence. The participants were instructed to respond as accurately and quickly as possible, using the index finger to indicate old faces and the middle finger to indicate new faces.</p>
<p>After subjects were acquainted with the task requirements and experimental settings, the experimental session started.</p>
<p>In the study or learning session, participants sat comfortably in front of a computer screen at a distance of 114 cm in an anechoic chamber under dimly lit conditions. A total of 300 faces were randomly presented at the center of the screen for 800 ms each with an ISI of 1300 ms. The stimuli were equally divided as a function of auditory background conditions (each auditory clip lasting 60 seconds) and matched across categories for sex, age, expression, valence and arousal. Stimulus delivery was performed using
<italic>Evoke software (Asa System)</italic>
. Subjects wore headphones and wrist devices that measured heart rate and blood pressure.</p>
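<p>A minimal sketch of the trial timing during the learning session (800 ms face presentation, 1300 ms ISI, 60-second auditory clips) is given below. It assumes the <italic>PsychoPy</italic> Python library and hypothetical stimulus file names, purely for illustration; the study itself used the <italic>Evoke</italic> software (Asa System) for stimulus delivery.</p>
<preformat>
# Illustrative sketch only (hypothetical file names, PsychoPy assumed);
# not the Evoke software actually used in the study.
import random
from psychopy import core, sound, visual

STIM_DUR = 0.8   # face duration in seconds
ISI = 1.3        # inter-stimulus interval in seconds

win = visual.Window(fullscr=False, color="black")

def run_sequence(face_paths, background_clip):
    """Present one sequence: faces on screen while a 60-s auditory clip plays."""
    random.shuffle(face_paths)               # random presentation order
    clip = sound.Sound(background_clip)      # 1-minute auditory background
    clip.play()
    for path in face_paths:
        face = visual.ImageStim(win, image=path)
        face.draw()
        win.flip()                           # face onset
        core.wait(STIM_DUR)
        win.flip()                           # blank screen
        core.wait(ISI)
    clip.stop()

# e.g. run_sequence(["face_001.jpg", "face_002.jpg"], "touching_clip_01.mp3")
</preformat>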
<p>The session consisted of the presentation of 15 sequences separated by short pauses. The order of stimulus presentation was random and varied randomly across subjects. The wrist devices that measured heart rate and blood pressure were activated at the beginning of each sequence and stopped at the end. This ensured good temporal alignment of the physiological responses with the cognitive stimuli.</p>
<p>During the memory test session, participants were presented with 300 faces (200 old and 100 new) in silent conditions and without wearing headphones or wrist devices for autonomic response measurements. The faces were presented for 800 ms each with an ISI of 1300 ms. The task requirements were the same as in the training memory task: old/new face discrimination with a finger-choice response, with response times and hits recorded.</p>
</sec>
<sec disp-level="2">
<title>Physiological recording</title>
<p>Data were acquired continuously rather than sampled at discrete intervals. Since each auditory fragment lasted 1 minute, both heart rate and blood pressure were averaged as per-minute values and processed after the end of the recording. Xanthine intake (e.g., caffeine) was controlled: all subjects were tested in the morning and had consumed no more than one coffee at breakfast. None of them had been taking medications affecting the CNS in the previous 2 weeks. It was ascertained that participants had not engaged in physical exercise before the experiment, and they were required to rest, seated, for about 10 minutes before the study in order to reach a basal state. To this end, the 3 training sequences were administered to all subjects before the beginning of the recording session.</p>
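<p>The per-minute averaging can be illustrated with a short sketch (hypothetical column names, <italic>pandas</italic> assumed): continuously acquired readings are grouped by subject and by the auditory-background condition of the 60-second clip during which they were recorded, then averaged.</p>
<preformat>
# Minimal sketch (hypothetical column names): average continuously acquired
# heart-rate readings within each 1-minute auditory-background condition.
import pandas as pd

# Assumed long-format table: one row per reading, tagged with the condition
# of the 60-s clip during which it was acquired.
readings = pd.DataFrame({
    "subject":    [1, 1, 1, 1],
    "condition":  ["touching", "touching", "rain", "rain"],
    "heart_rate": [74.0, 76.0, 70.0, 71.0],
})

# Per-minute value entering the analyses: mean reading per subject x condition.
per_minute = (readings
              .groupby(["subject", "condition"], as_index=False)["heart_rate"]
              .mean())
print(per_minute)
</preformat>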
</sec>
<sec disp-level="2">
<title>Data Analysis</title>
<p>The mean percentages of correct responses (arcsine transformed), response times (RTs, in ms), and the mean values of heart rate and diastolic and systolic blood pressure measured during the learning session underwent five independent analyses of variance, each with one between-subjects factor (sex, 2 levels: male, female) and one within-subjects factor (auditory background, 4 levels: joyful music, touching music, rain, silence).</p>
<p>A further analysis of variance was performed to compare the recognition rates of old and new faces measured during the test session, with one between-subjects factor (sex, 2 levels: male, female) and one within-subjects factor (face familiarity, 2 levels: old, new). Tukey’s test was used for post hoc comparisons of means.</p>
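<p>For illustration, the mixed design (one between-subjects factor, one within-subjects factor) and the arcsine transformation of accuracy could be run as in the following sketch, which assumes the <italic>pingouin</italic> Python library and a hypothetical long-format data file; it is not the software actually used for the analyses.</p>
<preformat>
# Minimal sketch of the analysis design (pingouin assumed, hypothetical file);
# not the authors' original analysis software.
import numpy as np
import pandas as pd
import pingouin as pg

# Assumed columns: "subject", "sex", "background", "accuracy" (proportion 0-1),
# one row per subject x auditory-background condition.
df = pd.read_csv("face_memory_accuracy.csv")             # hypothetical file

df["acc_arcsine"] = np.arcsin(np.sqrt(df["accuracy"]))   # arcsine transform

# Mixed ANOVA: sex (between subjects) x auditory background (within subjects)
aov = pg.mixed_anova(data=df, dv="acc_arcsine",
                     within="background", subject="subject", between="sex")
print(aov)

# Pairwise comparisons among background conditions (Tukey-style correction,
# here treating background as a simple grouping factor for illustration)
posthoc = pg.pairwise_tukey(data=df, dv="acc_arcsine", between="background")
print(posthoc)
</preformat>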
</sec>
</sec>
<sec disp-level="1" sec-type="results">
<title>Results</title>
<p>
<xref ref-type="fig" rid="f2">Figure 2</xref>
indicates the mean percentages of correct recognition of old faces (along with standard deviations) as a function of the sex of the viewers and the auditory background conditions. Although women tended to exhibit better performance on the task, sex was not a significant factor. ANOVA results indicated a significant effect of auditory background (F(3,156) = 5.9; p < 0.0008): higher percentages of correct facial recognition were obtained when the auditory background consisted of emotionally touching music (vs. joyful music, p < 0.01; vs. rain, p < 0.007) or of silence (vs. joyful music, p < 0.03; vs. rain, p < 0.02).</p>
<p>An ANOVA performed to compare hits for old versus new faces, independent of auditory background, showed a strong effect of stimulus familiarity (F(1,52) = 33.3; p < 0.00001), indicating that the test subjects recognized new faces (correctly rejected as unfamiliar) at a much higher rate than old faces (correctly recognized as old), as displayed in
<xref ref-type="fig" rid="f3">Fig. 3</xref>
. No sex differences in performance were observed.</p>
<p>An ANOVA performed on RTs indicated a significant effect of auditory background (F(3,156) = 5.1, p < 0.0022); test subjects exhibited faster RTs to faces that were studied in the presence of an auditory background of emotionally touching music (vs. joyful music, p < 0.04; vs. rain, p < 0.04) or silence (vs. joyful music, p < 0.001; vs. rain, p < 0.001).
<xref ref-type="fig" rid="f4">Figure 4</xref>
displays the mean RTs corresponding to correct recognition of old faces as a function of the presence of auditory background during learning.</p>
<p>An ANOVA performed on heart rate measurements indicated a significant effect of auditory background (F(3,156) = 3.5; p < 0.018). Post hoc comparisons showed that the test subjects exhibited significantly faster heart rates while listening to emotionally touching music compared to rain (p < 0.026) or silence (p < 0.006); a similar trend emerged for emotionally touching versus joyful music (p < 0.06). Listening to joyful music also tended to enhance heart rate compared to silence (p < 0.06), as displayed in
<xref ref-type="fig" rid="f5">Fig. 5</xref>
.</p>
<p>A statistical analysis of diastolic blood pressure (diaBP) measurements demonstrated an effect of sex (F(1,52) = 11.1; p < 0.0016), with lower diaBP values in women (74.5 mmHg; SE = 1.36) than men (8.1 mmHg; SE = 1.36). The effect of auditory background did not reach statistical significance (F(3,156) = 0.08), but diaBP tended to increase when subjects listened to emotionally touching (p < 0.07) and joyful (p < 0.08) music, as compared to the sound of rain or silence. The means and standard deviations corresponding to the above analyses are shown in
<xref ref-type="fig" rid="f6">Fig. 6</xref>
.</p>
<p>An ANOVA performed on systolic blood pressure (sysBP) measurements indicated a significant effect of sex (F(1,52) = 32.3; p < 0.000001), with much lower sysBP values in women (112.6 mmHg, SE = 1.96) than men (128.5 mmHg, SE = 1.96), and no significant effect of auditory background, as displayed in
<xref ref-type="fig" rid="f7">Fig. 7</xref>
.</p>
</sec>
<sec disp-level="1" sec-type="discussion">
<title>Discussion</title>
<p>The aim of this study was to investigate how exposing subjects to varying auditory backgrounds while they engaged in a memory task affected later recognition performance. Response times were significantly faster and the recognition rate was higher for faces that were studied either in complete silence or in the presence of emotionally touching background music. Behavioral data demonstrated a higher recognition rate for new faces (correctly rejected as unfamiliar in 77.3% of cases, with a 22.7% error rate) than old faces (correctly recognized as familiar in 55.5% of cases, with a 44.5% error rate). This pattern is in accordance with previous literature, and it indicates that despite the large number of faces that were presented (N = 448), participants were able to accurately reject approximately 4 out of 5 new faces on the basis of a lack of familiarity. In a previously conducted electrophysiological study
<xref ref-type="bibr" rid="b39">39</xref>
, the recognition rate for 200 studied faces was compared to that for 100 new faces, yielding 79.3% correct responses for the former and 66.3% for the latter. Similarly, Yovel and Paller
<xref ref-type="bibr" rid="b40">40</xref>
obtained hit rates of 87.8% for new faces and 65.3% for old faces. In the ERP study conducted by Curran and Hancock
<xref ref-type="bibr" rid="b41">41</xref>
, which included 360 faces, the memory recognition rates were approximately 90% for new faces and 81% for old faces. Considering that a greater number of stimuli were used in the present study, performance was satisfactory, especially with respect to new faces. Overall, the recognition rate for old faces was somewhat lower here than in other studies that did not feature an interfering auditory background. Learning conditions were purposely made difficult in order to overload cognitive and perceptual systems and to determine whether the effects of listening to music and rain on visual learning were disruptive or enhancing.</p>
<p>Overall, the results of this study demonstrated that subjects encoded faces more accurately while listening to emotionally touching music than while listening to rain or joyful music, similar to what occurred under conditions of silence. The most plausible explanation for this enhancement (or lack of interference) is that listening to emotionally touching music increased the arousal of the listeners, which was indicated by their increased heart rates. However, the arousal hypothesis does not hold true per se in this case, because heart rates were also increased while listening to joyful music (which was associated with an increased number of facial recognition errors). Furthermore, listening to music generally tended to increase blood pressure compared to listening to rain sounds. The significant cost of listening to joyful music, which had the same intensity (in dB) as the emotionally touching music and the rain sounds, must therefore not be interpreted as a lack of arousal activation but rather as the absence of the beneficial effect imparted by musically induced emotions on the ability to encode faces. It can therefore be hypothesized that listening to emotionally touching music leads to emotionally driven audiovisual encoding that strengthens memory engrams for faces visualized in this context, whereas listening to either rain or joyful music produces interfering effects by overloading perceptual channels during face encoding, as predicted by numerous studies that have described the persistent negative effects of listening to music on memory performance
<xref ref-type="bibr" rid="b1">1</xref>
<xref ref-type="bibr" rid="b4">4</xref>
<xref ref-type="bibr" rid="b13">13</xref>
<xref ref-type="bibr" rid="b14">14</xref>
<xref ref-type="bibr" rid="b15">15</xref>
<xref ref-type="bibr" rid="b17">17</xref>
<xref ref-type="bibr" rid="b18">18</xref>
<xref ref-type="bibr" rid="b19">19</xref>
<xref ref-type="bibr" rid="b20">20</xref>
<xref ref-type="bibr" rid="b21">21</xref>
<xref ref-type="bibr" rid="b22">22</xref>
<xref ref-type="bibr" rid="b23">23</xref>
<xref ref-type="bibr" rid="b24">24</xref>
<xref ref-type="bibr" rid="b25">25</xref>
. Indeed, according to Jäncke
<xref ref-type="bibr" rid="b42">42</xref>
(2008), “nostalgic music” has a strong influence on episodic memory. A recent study by Gold
<italic>et al.</italic>
<xref ref-type="bibr" rid="b43">43</xref>
investigated the effects of music on mnemonic capacity. In this study, music that was considered to be pleasant by subjects was contrasted with emotionally neutral music; both types of music were listened to by musician and non-musician subjects. During music listening, participants were engaged in encoding and later recalling Japanese ideograms. The results showed that subjects with musical expertise exhibited better performance on memory tasks while listening to neutral music, whereas subjects with no musical training (as in our study) more successfully memorized the studied ideograms while listening to emotionally touching music. These group differences might be interpreted by assuming that musicians dedicate more cognitive and attentional resources to the technical analysis of a preferred song and its musical properties. Conversely, the better performance at ideogram recall that was exhibited by non-musically trained participants as they listened to emotionally pleasant music might be due to increased attentional and arousal levels that were stimulated by the music. Indeed, numerous studies support the hypothesis that musical perception is able to modify how the brain processes visual information, which is the same principle that underlies the concept of the movie soundtrack
<xref ref-type="bibr" rid="b44">44</xref>
<xref ref-type="bibr" rid="b45">45</xref>
<xref ref-type="bibr" rid="b46">46</xref>
<xref ref-type="bibr" rid="b47">47</xref>
<xref ref-type="bibr" rid="b48">48</xref>
. In this case, music can strongly influence the interpretation of a film narrative by becoming integrated into memory along with the visual information; it thereby provides continuity, directs attention, induces mood, communicates meaning, cues memory, creates a sense of reality, and contributes to the aesthetic experience
<xref ref-type="bibr" rid="b44">44</xref>
. Furthermore, music can convey various types of emotional information via its harmony, rhythm, melody, timbre, and tonality, which can inspire multiple types of emotions in the listener, both simultaneously and in succession
<xref ref-type="bibr" rid="b49">49</xref>
.</p>
<p>The ability of emotional sounds to influence visual perception has been shown experimentally for stimuli such as complex IAPS (
<italic>International Affective Picture System</italic>
) scenes
<xref ref-type="bibr" rid="b50">50</xref>
<xref ref-type="bibr" rid="b51">51</xref>
, photographs of faces and landscapes
<xref ref-type="bibr" rid="b52">52</xref>
, emotional facial expressions
<xref ref-type="bibr" rid="b53">53</xref>
and schematics of faces embedded in noise
<xref ref-type="bibr" rid="b54">54</xref>
. In particular, with regard to faces, it has been shown that subjects were more accurate at detecting sub-threshold happy faces while listening to happy music and vice versa for sad faces and sad music. This suggests that music is able to modulate visual perception by altering early visual cortex activity and sensory processing through an audiovisual binding mechanism
<xref ref-type="bibr" rid="b54">54</xref>
. In a separate study
<xref ref-type="bibr" rid="b53">53</xref>
, participants rated photographs of crying, smiling, angry and yawning faces while concurrently being exposed to happy, angry, sad or calm music, or to no music; the results indicated that the participants made more favorable judgments about a crying face when listening to either sad or calm background music. Based on the current literature, it can be hypothesized that listening to music, especially emotionally touching music, might alter the visual perception of faces by making the perceived faces (which were balanced for intensity and valence as uni-sensory visual stimuli) more emotionally charged and arousing to the viewer via a mechanism of audiovisual encoding. The higher arousal value of faces perceived in the presence of emotionally touching music (and, to a lesser extent, joyful music) was indicated by the increased heart rates measured in the participants under this condition. The finding that higher hit rates were achieved for faces studied while participants were listening to emotionally touching music is compatible with the current neuroscientific literature on facial memory. For example, untrustworthy faces are better remembered than trustworthy faces
<xref ref-type="bibr" rid="b55">55</xref>
; disgusting faces are better remembered than non-disgusting faces
<xref ref-type="bibr" rid="b56">56</xref>
; and faces expressing negative emotions are better remembered than neutral faces
<xref ref-type="bibr" rid="b57">57</xref>
<xref ref-type="bibr" rid="b58">58</xref>
. It is thought that this type of enhanced facial memory is due to a more general advantage that is imparted by remembering faces that are representative of negative or threatening contexts, and it is associated with increased activity in the amygdala, hippocampus, extrastriate, and frontal and parietal cortices during facial encoding
<xref ref-type="bibr" rid="b58">58</xref>
. A similar phenomenon might occur when faces are perceived in an arousing or emotional context (e.g., in a thriller movie with a scary soundtrack or, as in our study, while listening to emotionally touching music). In other words, music might strengthen memory engrams by enhancing affective coding and enabling multimodal, redundant audiovisual memory encoding.</p>
<p>Although the auditory background strongly affected memory accuracy and heart rate, it appeared to have little effect on blood pressure, except for a slight tendency for pressure to increase during music listening. With regard to the effect of music listening on autonomic responses (blood pressure, heart rate, and respiratory rate), the literature is highly inconsistent. Although it has been shown that listening to music can reduce pain intensity and systolic blood pressure in patients during postoperative recovery
<xref ref-type="bibr" rid="b59">59</xref>
and can reduce stress levels and heart rate in patients with coronary heart disease and cancer
<xref ref-type="bibr" rid="b60">60</xref>
, a reduction in heart rate or blood pressure caused by listening to music has not been demonstrated in healthy controls. For example, in a study conducted by Radstaak and colleagues
<xref ref-type="bibr" rid="b31">31</xref>
, healthy participants had to perform a mental arithmetic task while being exposed to harassment to induce stress. Afterward, participants were assigned to one of several “recovery” conditions in which they listened to self-chosen relaxing or happy music, listened to an audio book, or sat in silence. Systolic blood pressure, diastolic blood pressure, and heart rate were continuously monitored. The results indicated that although listening to both relaxing and happy music improved subjects’ moods, it did not diminish the stress-enhanced systolic blood pressure. Therefore, mental relaxation was not associated with an improvement in autonomic parameters. In another interesting study
<xref ref-type="bibr" rid="b32">32</xref>
, systolic and diastolic blood pressure (BPsys, BPdia) were monitored as participants sat in silence and as they listened to 180-second-long recordings of two different “relaxing” and two different “aggressive” classical music excerpts. The results showed that listening to relaxing classical music and listening to aggressive classical music both increased BPsys, whereas autonomic modulation was lower under conditions of silence. Furthermore, in a study by Tan
<italic>et al.</italic>
<xref ref-type="bibr" rid="b33">33</xref>
, the effect of relaxing music on heart rate recovery after exercise was investigated. Twenty-three healthy young volunteers underwent treadmill exercise and were then assessed for heart rate recovery and subjected to saliva analysis. The participants were exposed either to sedating music or to silence during the recovery period immediately following the exercise. No differences were found between exposure to music and exposure to silence with respect to heart rate recovery, resting pulse rate, or salivary cortisol. Overall, it appeared that although listening to music reduced anxiety under certain experimental settings, it did not seem to strongly influence hemodynamic parameters, except for a tendency to increase systolic blood pressure, which is consistent with the results of the present study.</p>
<p>In this study, accuracy and RT data indicated that participants committed more errors and were much slower when learning occurred against a background of rain sounds (vs. emotionally touching music or silence). Although it is thought that listening to natural sounds (e.g., the sounds of a rippling ocean, a small stream, soft wind, or birds twittering) may produce relaxing and anxiety-reducing effects
<xref ref-type="bibr" rid="b61">61</xref>
, it has not been demonstrated that listening to such sounds during study and memory encoding benefits the learning process. For example, a study that compared listening to silence versus listening to music or rain sounds during a backward digit span task found that the auditory background had no effect whatsoever on performance
<xref ref-type="bibr" rid="b30">30</xref>
. A study on the perception of white noise
<xref ref-type="bibr" rid="b62">62</xref>
, which shares several auditory properties with rain sounds (except for its artificiality), showed that listening to natural sounds (a running horse) and musical tones decreased the ability of subjects to recall memories of scenes from their daily lives (compared to a condition of silence), whereas listening to white noise improved memory performance by improving connectivity between brain regions that are associated with visuospatial processing, memory and attention modulation. These results can be interpreted by assuming that the perception of recognizable and structured auditory objects (natural or musical sounds) interferes with memory processing, which is in agreement with the cognitive capacity model
<xref ref-type="bibr" rid="b4">4</xref>
. Conversely, listening to unstructured white noise does not produce such interference and alternatively increases cerebral arousal levels, in agreement with the arousal hypothesis
<xref ref-type="bibr" rid="b5">5</xref>
. In this context, the rain sounds and types of music that were used in the present investigation overloaded the perceptual systems of the participants, as shown by their reduced levels of performance on the assigned tasks compared to a condition of silence. However, listening to emotionally touching music benefitted concurrent emotional processing (associated with significantly increased heart rate), in agreement with a study conducted by Gold
<italic>et al.</italic>
<xref ref-type="bibr" rid="b43">43</xref>
and Quarto
<italic>et al.</italic>
<xref ref-type="bibr" rid="b63">63</xref>
. In support of this hypothesis, several studies have provided evidence that listening to pleasant or emotionally arousing music can increase the heart rate of the listener
<xref ref-type="bibr" rid="b64">64</xref>
<xref ref-type="bibr" rid="b65">65</xref>
<xref ref-type="bibr" rid="b66">66</xref>
. Overall, these data indicate that listening to emotionally touching music can modify visual perception by binding visual inputs with emotionally charged musical information, resulting in deeper memory encoding.</p>
<p>One possible limitation of this study is a culturally mediated difference in aesthetic musical preference between the judges and the naïve participants who listened to the selected pieces. Indeed, while the evaluation that produced the “touching” and “joyful” characterizations was performed by professional musicians who, as a result of their profession, have developed a decidedly positive aesthetic preference for classical music, the naïve subjects (selected on the basis of their limited interest in music of any style) might have found the pieces boring or uninteresting. Although aesthetics is grounded in liking or disliking an artwork, we assumed that the touching and joyful music excerpts carried, in their compositional structure, some universal properties able to affect the auditory processing of people not particularly skilled in music. The data strongly support this initial assumption, namely that musical aesthetic preference is not only culturally but also biologically based.</p>
</sec>
<sec disp-level="1">
<title>Additional Information</title>
<p>
<bold>How to cite this article</bold>
: Proverbio, A. M.
<italic>et al.</italic>
The effect of background music on episodic memory and autonomic responses: listening to emotionally touching music enhances facial memory capacity.
<italic>Sci. Rep.</italic>
<bold>5</bold>
, 15219; doi: 10.1038/srep15219 (2015).</p>
</sec>
</body>
<back>
<ack>
<p>The authors are very grateful to all of the subjects for their generous participation and to the judges for kindly completing the initial survey and responding to the questionnaire on aesthetic musical preferences. Additionally, we wish to express our gratitude to M° Aldo Ceccato, Domenico Morgante, Andrea Pestalozza, Renato Rivolta, Mario Guido Scappucci and Luigi Verdi for their valuable suggestions on the selection of musical pieces.</p>
</ack>
<ref-list>
<ref id="b1">
<mixed-citation publication-type="journal">
<name>
<surname>Kämpfe</surname>
<given-names>J.</given-names>
</name>
,
<name>
<surname>Sedlmeier</surname>
<given-names>P.</given-names>
</name>
&
<name>
<surname>Renkewitz</surname>
<given-names>F.</given-names>
</name>
<article-title>The impact of background music on adult listeners: a meta-analysis</article-title>
.
<source>Psychol. Music.</source>
<volume>39</volume>
,
<fpage>424</fpage>
<lpage>448</lpage>
(
<year>2010</year>
).</mixed-citation>
</ref>
<ref id="b2">
<mixed-citation publication-type="journal">
<name>
<surname>Dalton</surname>
<given-names>B. H.</given-names>
</name>
&
<name>
<surname>Behm</surname>
<given-names>D. G.</given-names>
</name>
<article-title>Effects of noise and music on human and task performance: A systematic review</article-title>
.
<source>Occup. Ergonomics.</source>
<volume>7</volume>
,
<fpage>143</fpage>
<lpage>152</lpage>
(
<year>2007</year>
).</mixed-citation>
</ref>
<ref id="b3">
<mixed-citation publication-type="other">
<name>
<surname>Kahneman</surname>
<given-names>D.</given-names>
</name>
Attention and Effort. Englewood Cliffs. NJ: (Prentice-Hall 1973).</mixed-citation>
</ref>
<ref id="b4">
<mixed-citation publication-type="journal">
<name>
<surname>Furnham</surname>
<given-names>A.</given-names>
</name>
&
<name>
<surname>Allass</surname>
<given-names>K.</given-names>
</name>
<article-title>The influence of musical distraction of varying complexity on the cognitive performance of extroverts and introverts</article-title>
.
<source>Eur. J. Pers.</source>
<volume>13</volume>
(1),
<fpage>27</fpage>
<lpage>38</lpage>
(
<year>1999</year>
).</mixed-citation>
</ref>
<ref id="b5">
<mixed-citation publication-type="journal">
<name>
<surname>Thompson</surname>
<given-names>W. F.</given-names>
</name>
,
<name>
<surname>Schellenberg</surname>
<given-names>E. G.</given-names>
</name>
&
<name>
<surname>Husain</surname>
<given-names>G.</given-names>
</name>
<article-title>Arousal, mood and the Mozart effect</article-title>
.
<source>Psychol. Sci.</source>
<volume>12</volume>
,
<fpage>248</fpage>
<lpage>251</lpage>
(
<year>2001</year>
).
<pub-id pub-id-type="pmid">11437309</pub-id>
</mixed-citation>
</ref>
<ref id="b6">
<mixed-citation publication-type="journal">
<name>
<surname>Cassidy</surname>
<given-names>G.</given-names>
</name>
&
<name>
<surname>MacDonald</surname>
<given-names>R. A. R.</given-names>
</name>
<article-title>The effect of background music and background noise on the task performance of introverts and extroverts</article-title>
.
<source>Psychol. Music.</source>
<volume>35</volume>
(3),
<fpage>515</fpage>
<lpage>537</lpage>
(
<year>2007</year>
).</mixed-citation>
</ref>
<ref id="b7">
<mixed-citation publication-type="journal">
<name>
<surname>Ferreri</surname>
<given-names>L.</given-names>
</name>
,
<name>
<surname>Aucouturier</surname>
<given-names>J. J.</given-names>
</name>
,
<name>
<surname>Muthalib</surname>
<given-names>M.</given-names>
</name>
,
<name>
<surname>Bigand</surname>
<given-names>E.</given-names>
</name>
&
<name>
<surname>Bugaiska</surname>
<given-names>A.</given-names>
</name>
<article-title>Music improves verbal memory encoding while decreasing prefrontal cortex activity: an fNIRS study</article-title>
.
<source>Front. Hum. Neurosci.</source>
<volume>7</volume>
,
<fpage>779</fpage>
(
<year>2013</year>
).
<pub-id pub-id-type="pmid">24339807</pub-id>
</mixed-citation>
</ref>
<ref id="b8">
<mixed-citation publication-type="journal">
<name>
<surname>El Haj</surname>
<given-names>M.</given-names>
</name>
,
<name>
<surname>Postal</surname>
<given-names>V.</given-names>
</name>
&
<name>
<surname>Allain</surname>
<given-names>P.</given-names>
</name>
<article-title>Music enhances autobiographical memory in mild Alzheimer’s disease</article-title>
.
<source>Educ. Gerontol.</source>
<volume>38</volume>
,
<fpage>30</fpage>
<lpage>41</lpage>
(
<year>2012</year>
).</mixed-citation>
</ref>
<ref id="b9">
<mixed-citation publication-type="journal">
<name>
<surname>Angel</surname>
<given-names>L. A.</given-names>
</name>
,
<name>
<surname>Polzella</surname>
<given-names>D. J.</given-names>
</name>
&
<name>
<surname>Elvers</surname>
<given-names>G. C.</given-names>
</name>
<article-title>Background music and cognitive performance</article-title>
.
<source>Percept. Mot. Skills.</source>
<volume>11</volume>
,
<fpage>1059</fpage>
<lpage>1064</lpage>
(
<year>2010</year>
).
<pub-id pub-id-type="pmid">20865993</pub-id>
</mixed-citation>
</ref>
<ref id="b10">
<mixed-citation publication-type="journal">
<name>
<surname>Hallam</surname>
<given-names>S.</given-names>
</name>
,
<name>
<surname>Price</surname>
<given-names>J.</given-names>
</name>
&
<name>
<surname>Katsarou</surname>
<given-names>G.</given-names>
</name>
<article-title>The effect of background music on primary school pupils’ task performance</article-title>
.
<source>Educ. Stud.</source>
<volume>28</volume>
,
<fpage>111</fpage>
<lpage>122</lpage>
(
<year>2002</year>
).</mixed-citation>
</ref>
<ref id="b11">
<mixed-citation publication-type="journal">
<name>
<surname>Liu</surname>
<given-names>B.</given-names>
</name>
,
<name>
<surname>Huang</surname>
<given-names>Y.</given-names>
</name>
,
<name>
<surname>Wang</surname>
<given-names>Z.</given-names>
</name>
&
<name>
<surname>Wu</surname>
<given-names>G.</given-names>
</name>
<article-title>The influence of background music on recognition processes of Chinese characters: an ERP study</article-title>
.
<source>Neurosci Lett.</source>
<volume>518</volume>
(2),
<fpage>80</fpage>
<lpage>5</lpage>
(
<year>2012</year>
).
<pub-id pub-id-type="pmid">22580199</pub-id>
</mixed-citation>
</ref>
<ref id="b12">
<mixed-citation publication-type="journal">
<name>
<surname>Kang</surname>
<given-names>H. J.</given-names>
</name>
&
<name>
<surname>Williamson</surname>
<given-names>J. W.</given-names>
</name>
<article-title>Background music can aid second language learning</article-title>
.
<source>Psychol. Music.</source>
<volume>42</volume>
,
<fpage>728</fpage>
<lpage>747</lpage>
(
<year>2013</year>
).</mixed-citation>
</ref>
<ref id="b13">
<mixed-citation publication-type="journal">
<name>
<surname>Oakes</surname>
<given-names>S.</given-names>
</name>
&
<name>
<surname>North</surname>
<given-names>A. C.</given-names>
</name>
<article-title>The impact of background musical tempo and timbre congruity upon ad content recall and affective response</article-title>
.
<source>Appl. Cognitive Psych.</source>
<volume>20</volume>
,
<fpage>505</fpage>
<lpage>520</lpage>
(
<year>2006</year>
).</mixed-citation>
</ref>
<ref id="b14">
<mixed-citation publication-type="journal">
<name>
<surname>Woo</surname>
<given-names>E. W.</given-names>
</name>
&
<name>
<surname>Kanachi</surname>
<given-names>M.</given-names>
</name>
<article-title>The effects of music type and volume on short-term memory</article-title>
.
<source>Tohoku Psychol. Folia.</source>
<volume>64</volume>
,
<fpage>68</fpage>
<lpage>76</lpage>
(
<year>2005</year>
).</mixed-citation>
</ref>
<ref id="b15">
<mixed-citation publication-type="journal">
<name>
<surname>Iwanaga</surname>
<given-names>M.</given-names>
</name>
&
<name>
<surname>Ito</surname>
<given-names>T.</given-names>
</name>
<article-title>Disturbance effect of music on processing of verbal and spatial memories</article-title>
.
<source>Percept. Mot. Skills.</source>
<volume>94</volume>
,
<fpage>1251</fpage>
<lpage>1258</lpage>
(
<year>2002</year>
).
<pub-id pub-id-type="pmid">12186247</pub-id>
</mixed-citation>
</ref>
<ref id="b16">
<mixed-citation publication-type="journal">
<name>
<surname>Furnham</surname>
<given-names>A.</given-names>
</name>
&
<name>
<surname>Allass</surname>
<given-names>K.</given-names>
</name>
<article-title>The influence of musical distraction of varying complexity on the cognitive performance of extroverts and introverts</article-title>
.
<source>Eur. J. Pers.</source>
<volume>13</volume>
,
<fpage>27</fpage>
<lpage>38</lpage>
(
<year>1999</year>
).</mixed-citation>
</ref>
<ref id="b17">
<mixed-citation publication-type="journal">
<name>
<surname>Bloor</surname>
<given-names>A.</given-names>
</name>
<article-title>The rhythm’s gonna get ya’—background music in primary classrooms and its effect on behaviour and attainment</article-title>
.
<source>J. Emot. Behav. Disord.</source>
<volume>14</volume>
,
<fpage>261</fpage>
<lpage>274</lpage>
(
<year>2009</year>
)</mixed-citation>
</ref>
<ref id="b18">
<mixed-citation publication-type="journal">
<name>
<surname>Avila</surname>
<given-names>C.</given-names>
</name>
,
<name>
<surname>Furnham</surname>
<given-names>A.</given-names>
</name>
&
<name>
<surname>McClelland</surname>
<given-names>A.</given-names>
</name>
<article-title>The influence of distracting familiar vocal music on cognitive performance of introverts and extraverts</article-title>
.
<source>Psychol. Music.</source>
<volume>40</volume>
,
<fpage>84</fpage>
<lpage>93</lpage>
(
<year>2011</year>
).</mixed-citation>
</ref>
<ref id="b19">
<mixed-citation publication-type="journal">
<name>
<surname>Miller</surname>
<given-names>L. K.</given-names>
</name>
&
<name>
<surname>Schyb</surname>
<given-names>M.</given-names>
</name>
<article-title>Facilitation and interference by background music</article-title>
.
<source>J. Music Ther.</source>
<volume>26</volume>
(1),
<fpage>42</fpage>
<lpage>54</lpage>
(
<year>1989</year>
).</mixed-citation>
</ref>
<ref id="b20">
<mixed-citation publication-type="journal">
<name>
<surname>Moreno</surname>
<given-names>R.</given-names>
</name>
&
<name>
<surname>Mayer</surname>
<given-names>R. E.</given-names>
</name>
<article-title>A coherence effect in multimedia learning: the case for minimizing irrelevant sounds in the design of multimedia instructional messages</article-title>
.
<source>J. Educ. Psychol.</source>
<volume>92</volume>
,
<fpage>117</fpage>
<lpage>125</lpage>
(
<year>2000</year>
).</mixed-citation>
</ref>
<ref id="b21">
<mixed-citation publication-type="journal">
<name>
<surname>Miskovic</surname>
<given-names>D.</given-names>
</name>
,
<name>
<surname>Rosenthal</surname>
<given-names>R.</given-names>
</name>
,
<name>
<surname>Zingg</surname>
<given-names>U.</given-names>
</name>
,
<name>
<surname>Oertli</surname>
<given-names>D.</given-names>
</name>
,
<name>
<surname>Metzger</surname>
<given-names>U.</given-names>
</name>
&
<name>
<surname>Jancke</surname>
<given-names>L.</given-names>
</name>
<article-title>Randomized controlled trial investigating the effect of music on the virtual reality laparoscopic learning performance of novice surgeons</article-title>
.
<source>Surg. Endosc.</source>
<volume>22</volume>
,
<fpage>2416</fpage>
<lpage>2420</lpage>
(
<year>2008</year>
).
<pub-id pub-id-type="pmid">18622551</pub-id>
</mixed-citation>
</ref>
<ref id="b22">
<mixed-citation publication-type="journal">
<name>
<surname>Kallinen</surname>
<given-names>K.</given-names>
</name>
<article-title>Reading news from a pocket computer in a distracting environment: Effects of the tempo of background music</article-title>
.
<source>Comput. Human Behav.</source>
<volume>18</volume>
,
<fpage>537</fpage>
<lpage>551</lpage>
(
<year>2002</year>
).</mixed-citation>
</ref>
<ref id="b23">
<mixed-citation publication-type="journal">
<name>
<surname>Thompson</surname>
<given-names>W. F.</given-names>
</name>
,
<name>
<surname>Schellenberg</surname>
<given-names>E. G.</given-names>
</name>
&
<name>
<surname>Letnic</surname>
<given-names>A. K.</given-names>
</name>
<article-title>Fast and loud background music disrupts reading comprehension</article-title>
.
<source>Psychol. Music.</source>
<volume>40</volume>
,
<fpage>1</fpage>
<lpage>9</lpage>
(
<year>2011</year>
).</mixed-citation>
</ref>
<ref id="b24">
<mixed-citation publication-type="journal">
<name>
<surname>Madsen</surname>
<given-names>C. K.</given-names>
</name>
<article-title>Background music: competition for focus of attention</article-title>
in
<source>Applications of Research in Music Behavior</source>
(eds.
<name>
<surname>Madsen</surname>
<given-names>C. K.</given-names>
</name>
&
<name>
<surname>Prickett</surname>
<given-names>C. A.</given-names>
</name>
)
<fpage>315</fpage>
<lpage>325</lpage>
(The University of Alabama Press,
<year>1987</year>
).</mixed-citation>
</ref>
<ref id="b25">
<mixed-citation publication-type="journal">
<name>
<surname>Parente</surname>
<given-names>J. A.</given-names>
</name>
<article-title>Music preference as a factor of music distraction</article-title>
.
<source>Percept. Mot. Skills.</source>
<volume>43</volume>
,
<fpage>337</fpage>
<lpage>338</lpage>
(
<year>1976</year>
).</mixed-citation>
</ref>
<ref id="b26">
<mixed-citation publication-type="journal">
<name>
<surname>Bottiroli</surname>
<given-names>S.</given-names>
</name>
,
<name>
<surname>Rosi</surname>
<given-names>A.</given-names>
</name>
,
<name>
<surname>Russo</surname>
<given-names>R.</given-names>
</name>
,
<name>
<surname>Vecchi</surname>
<given-names>T.</given-names>
</name>
&
<name>
<surname>Cavallini</surname>
<given-names>E.</given-names>
</name>
<article-title>The cognitive effects of listening to background music on older adults: processing speed improves with upbeat music, while memory seems to benefit from both upbeat and downbeat music</article-title>
.
<source>Front. Aging Neurosci.</source>
<pub-id pub-id-type="doi">10.3389/fnagi.2014.00284</pub-id>
(
<year>2014</year>
).</mixed-citation>
</ref>
<ref id="b27">
<mixed-citation publication-type="journal">
<name>
<surname>Reaves</surname>
<given-names>S.</given-names>
</name>
,
<name>
<surname>Graham</surname>
<given-names>B.</given-names>
</name>
,
<name>
<surname>Grahn</surname>
<given-names>J.</given-names>
</name>
,
<name>
<surname>Rabannifard</surname>
<given-names>P.</given-names>
</name>
&
<name>
<surname>Duarte</surname>
<given-names>A.</given-names>
</name>
<article-title>Turn Off the Music! Music Impairs Visual Associative Memory Performance in Older Adults</article-title>
.
<source>The Gerontologist</source>
,
<pub-id pub-id-type="doi">10.1093/geront/gnu113</pub-id>
(
<year>2015</year>
).</mixed-citation>
</ref>
<ref id="b28">
<mixed-citation publication-type="journal">
<name>
<surname>Mackrill</surname>
<given-names>J.</given-names>
</name>
,
<name>
<surname>Jennings</surname>
<given-names>P.</given-names>
</name>
&
<name>
<surname>Cain</surname>
<given-names>R.</given-names>
</name>
<article-title>Exploring positive hospital ward soundscape interventions</article-title>
.
<source>Appl. Ergon.</source>
<volume>45</volume>
(6),
<fpage>1454</fpage>
<lpage>60</lpage>
(
<year>2014</year>
).
<pub-id pub-id-type="pmid">24768090</pub-id>
</mixed-citation>
</ref>
<ref id="b29">
<mixed-citation publication-type="journal">
<name>
<surname>Garza Villarreal</surname>
<given-names>E. A.</given-names>
</name>
,
<name>
<surname>Brattico</surname>
<given-names>E.</given-names>
</name>
,
<name>
<surname>Vase</surname>
<given-names>L.</given-names>
</name>
,
<name>
<surname>Østergaard</surname>
<given-names>L.</given-names>
</name>
&
<name>
<surname>Vuust</surname>
<given-names>P.</given-names>
</name>
(2012).
<article-title>Superior Analgesic Effect of an Active Distraction versus Pleasant Unfamiliar Sounds and Music: The Influence of Emotion and Cognitive Style</article-title>
.
<source>PLoS One</source>
<volume>7</volume>
(1),
<fpage>e29397</fpage>
(
<year>2012</year>
).
<pub-id pub-id-type="pmid">22242169</pub-id>
</mixed-citation>
</ref>
<ref id="b30">
<mixed-citation publication-type="journal">
<name>
<surname>Steele</surname>
<given-names>K. M.</given-names>
</name>
,
<name>
<surname>Ball</surname>
<given-names>T. N.</given-names>
</name>
&
<name>
<surname>Runk</surname>
<given-names>R.</given-names>
</name>
<article-title>Listening to Mozart does not enhance backwards digit span performance</article-title>
.
<source>Percept. Mot. Skills.</source>
<volume>84</volume>
,
<fpage>1179</fpage>
<lpage>1184</lpage>
(
<year>1997</year>
).
<pub-id pub-id-type="pmid">9229433</pub-id>
</mixed-citation>
</ref>
<ref id="b31">
<mixed-citation publication-type="journal">
<name>
<surname>Radstaak</surname>
<given-names>M.</given-names>
</name>
,
<name>
<surname>Geurts</surname>
<given-names>S. A.</given-names>
</name>
,
<name>
<surname>Brosschot</surname>
<given-names>J. F.</given-names>
</name>
&
<name>
<surname>Kompier</surname>
<given-names>M. A.</given-names>
</name>
<article-title>Music and psychophysiological recovery from stress</article-title>
.
<source>Psychosom Med.</source>
<volume>76</volume>
(7),
<fpage>529</fpage>
<lpage>37</lpage>
(
<year>2014</year>
).
<pub-id pub-id-type="pmid">25153936</pub-id>
</mixed-citation>
</ref>
<ref id="b32">
<mixed-citation publication-type="journal">
<name>
<surname>Hilz</surname>
<given-names>M. J.</given-names>
</name>
,
<name>
<surname>Stadler</surname>
<given-names>P.</given-names>
</name>
,
<name>
<surname>Gryc</surname>
<given-names>T.</given-names>
</name>
,
<name>
<surname>Nath</surname>
<given-names>J.</given-names>
</name>
,
<name>
<surname>Habib-Romstoeck</surname>
<given-names>L.</given-names>
</name>
,
<name>
<surname>Stemper</surname>
<given-names>B.</given-names>
</name>
,
<name>
<surname>Buechner</surname>
<given-names>S.</given-names>
</name>
,
<name>
<surname>Wong</surname>
<given-names>S.</given-names>
</name>
&
<name>
<surname>Koehn</surname>
<given-names>J.</given-names>
</name>
<article-title>Music induces different cardiac autonomic arousal effects in young and older persons</article-title>
.
<source>Auton Neurosci.</source>
<volume>183</volume>
,
<fpage>83</fpage>
<lpage>93</lpage>
(
<year>2014</year>
).
<pub-id pub-id-type="pmid">24636674</pub-id>
</mixed-citation>
</ref>
<ref id="b33">
<mixed-citation publication-type="journal">
<name>
<surname>Tan</surname>
<given-names>F.</given-names>
</name>
,
<name>
<surname>Tengah</surname>
<given-names>A.</given-names>
</name>
,
<name>
<surname>Nee</surname>
<given-names>L. Y.</given-names>
</name>
&
<name>
<surname>Fredericks</surname>
<given-names>S.</given-names>
</name>
<article-title>A study of the effect of relaxing
<bold>music</bold>
on heart rate recovery after exercise among healthy students</article-title>
.
<source>Complement Ther Clin Pract.</source>
<volume>20</volume>
(2),
<fpage>114</fpage>
<lpage>7</lpage>
(
<year>2014</year>
).
<pub-id pub-id-type="pmid">24767956</pub-id>
</mixed-citation>
</ref>
<ref id="b34">
<mixed-citation publication-type="journal">
<name>
<surname>Kraus</surname>
<given-names>N.</given-names>
</name>
&
<name>
<surname>Chandrasekaran</surname>
<given-names>B.</given-names>
</name>
<article-title>Music training for the development of auditory skills</article-title>
.
<source>Nat. Rev. Neurosci.</source>
<volume>11</volume>
(8),
<fpage>599</fpage>
<lpage>605</lpage>
(
<year>2010</year>
).
<pub-id pub-id-type="pmid">20648064</pub-id>
</mixed-citation>
</ref>
<ref id="b35">
<mixed-citation publication-type="journal">
<name>
<surname>Spalek</surname>
<given-names>K.</given-names>
</name>
,
<name>
<surname>Fastenrath</surname>
<given-names>M.</given-names>
</name>
,
<name>
<surname>Ackermann</surname>
<given-names>S.</given-names>
</name>
,
<name>
<surname>Auschra</surname>
<given-names>B.</given-names>
</name>
,
<name>
<surname>Coynel</surname>
<given-names>D.</given-names>
</name>
,
<name>
<surname>Frey</surname>
<given-names>J.</given-names>
</name>
,
<name>
<surname>Gschwind</surname>
<given-names>L.</given-names>
</name>
,
<name>
<surname>Hartmann</surname>
<given-names>F.</given-names>
</name>
,
<name>
<surname>van der Maarel</surname>
<given-names>N.</given-names>
</name>
,
<name>
<surname>Papassotiropoulos</surname>
<given-names>A.</given-names>
</name>
,
<name>
<surname>de Quervain</surname>
<given-names>D.</given-names>
</name>
&
<name>
<surname>Milnik</surname>
<given-names>A.</given-names>
</name>
<article-title>Sex-dependent dissociation between emotional appraisal and memory: a large-scale behavioral and fMRI study</article-title>
.
<source>J Neurosci.</source>
<volume>35</volume>
(3),
<fpage>920</fpage>
<lpage>35</lpage>
(
<year>2015</year>
).
<pub-id pub-id-type="pmid">25609611</pub-id>
</mixed-citation>
</ref>
<ref id="b36">
<mixed-citation publication-type="journal">
<name>
<surname>Nater</surname>
<given-names>U. M.</given-names>
</name>
,
<name>
<surname>Abbruzzese</surname>
<given-names>E.</given-names>
</name>
,
<name>
<surname>Krebs</surname>
<given-names>M.</given-names>
</name>
&
<name>
<surname>Ehlert,</surname>
<given-names>U.</given-names>
</name>
<article-title>Sex differences in emotional and psychophysiological responses to musical stimuli</article-title>
.
<source>Int. J. Psychophysiol.</source>
<volume>62</volume>
(2),
<fpage>300</fpage>
<lpage>8</lpage>
(
<year>2006</year>
).
<pub-id pub-id-type="pmid">16828911</pub-id>
</mixed-citation>
</ref>
<ref id="b37">
<mixed-citation publication-type="other">
<name>
<surname>Proverbio</surname>
<given-names>A. M.</given-names>
</name>
&
<name>
<surname>Lozano Nasi</surname>
<given-names>V.</given-names>
</name>
(in revision) Sex differences in the evaluation of human faces along the arousal and valence dimensions. Cogn. Emot.</mixed-citation>
</ref>
<ref id="b38">
<mixed-citation publication-type="other">
<name>
<surname>Lang</surname>
<given-names>P. J.</given-names>
</name>
,
<name>
<surname>Bradley</surname>
<given-names>M. M.</given-names>
</name>
&
<name>
<surname>Cuthbert</surname>
<given-names>B. N.</given-names>
</name>
International Affective Picture System (IAPS): Technical Manual and Affective Ratings. (NIMH Center for the Study of Emotion and Attention,
<year>1997</year>
).</mixed-citation>
</ref>
<ref id="b39">
<mixed-citation publication-type="other">
<name>
<surname>Proverbio</surname>
<given-names>A. M.</given-names>
</name>
,
<name>
<surname>La Mastra</surname>
<given-names>F.</given-names>
</name>
,
<name>
<surname>Adorni</surname>
<given-names>R.</given-names>
</name>
&
<name>
<surname>Zani</surname>
<given-names>A.</given-names>
</name>
How social bias (prejudice) affects memory for faces: An electrical neuroimaging study (Society for Neuroscience Abstracts, 2014 Annual Meeting of SFN, Washington, D.C.
<year>2014</year>
).</mixed-citation>
</ref>
<ref id="b40">
<mixed-citation publication-type="journal">
<name>
<surname>Yovel</surname>
<given-names>G.</given-names>
</name>
&
<name>
<surname>Paller</surname>
<given-names>K. A.</given-names>
</name>
<article-title>The neural basis of the butcher-on-the-bus phenomenon: when a face seems familiar but is not remembered</article-title>
.
<source>NeuroImage.</source>
<volume>21</volume>
,
<fpage>789</fpage>
<lpage>800</lpage>
(
<year>2004</year>
).
<pub-id pub-id-type="pmid">14980582</pub-id>
</mixed-citation>
</ref>
<ref id="b41">
<mixed-citation publication-type="journal">
<name>
<surname>Curran</surname>
<given-names>T.</given-names>
</name>
&
<name>
<surname>Hancock</surname>
<given-names>J.</given-names>
</name>
<article-title>The FN400 indexes familiarity-based recognition of face</article-title>
.
<source>Neuroimage.</source>
<volume>36</volume>
(2),
<fpage>464</fpage>
<lpage>471</lpage>
(
<year>2007</year>
).
<pub-id pub-id-type="pmid">17258471</pub-id>
</mixed-citation>
</ref>
<ref id="b42">
<mixed-citation publication-type="journal">
<name>
<surname>Jäncke</surname>
<given-names>L.</given-names>
</name>
<article-title>Music, memory and emotion</article-title>
.
<source>J. Biol.</source>
<volume>7</volume>, <fpage>21</fpage>, 10.1186/jbiol82 (
<year>2008</year>
).</mixed-citation>
</ref>
<ref id="b43">
<mixed-citation publication-type="journal">
<name>
<surname>Gold</surname>
<given-names>B. P.</given-names>
</name>
,
<name>
<surname>Frank</surname>
<given-names>M. J.</given-names>
</name>
,
<name>
<surname>Bogert</surname>
<given-names>B.</given-names>
</name>
&
<name>
<surname>Brattico</surname>
<given-names>E.</given-names>
</name>
<article-title>Pleasurable music affects reinforcement learning according to the listener</article-title>
.
<source>Front Psychol.</source>
<volume>21</volume>
, 4,
<fpage>541</fpage>
(
<year>2013</year>
).
<pub-id pub-id-type="pmid">23970875</pub-id>
</mixed-citation>
</ref>
<ref id="b44">
<mixed-citation publication-type="other">
<name>
<surname>Cohen</surname>
<given-names>A. J.</given-names>
</name>
<article-title>Music as a source of emotion in film</article-title>
in
<source>Music and emotion: Theory and research</source>
(eds.
<name>
<surname>Juslin</surname>
<given-names>P. N.</given-names>
</name>
&
<name>
<surname>Sloboda</surname>
<given-names>J. A.</given-names>
</name>
)
<fpage>249</fpage>
<lpage>272</lpage>
(Oxford University Press,
<year>2001</year>
).</mixed-citation>
</ref>
<ref id="b45">
<mixed-citation publication-type="journal">
<name>
<surname>Iwamiya</surname>
<given-names>S.</given-names>
</name>
<article-title>Interaction between auditory and visual processing when listening to music in an audio visual context</article-title>
.
<source>Psychomusicology.</source>
<volume>13</volume>
,
<fpage>133</fpage>
<lpage>53</lpage>
(
<year>1994</year>
).</mixed-citation>
</ref>
<ref id="b46">
<mixed-citation publication-type="other">
<name>
<surname>Sloboda</surname>
<given-names>J. A.</given-names>
</name>
<article-title>Empirical studies of emotional response to music</article-title>
in
<source>Cognitive bases of musical communication</source>
(eds.
<name>
<surname>Iones</surname>
<given-names>M. R.</given-names>
</name>
&
<name>
<surname>Holleran</surname>
<given-names>S.</given-names>
</name>
),
<fpage>33</fpage>
<lpage>46</lpage>
(Washington, DC, American Psychological Association,
<year>1992</year>
)</mixed-citation>
</ref>
<ref id="b47">
<mixed-citation publication-type="journal">
<name>
<surname>Rigg</surname>
<given-names>M. G.</given-names>
</name>
<article-title>The mood effects of music: A comparison of data from four investigations</article-title>
.
<source>J. Psychol.</source>
<volume>58</volume>
,
<fpage>427</fpage>
<lpage>38</lpage>
(
<year>1964</year>
).</mixed-citation>
</ref>
<ref id="b48">
<mixed-citation publication-type="journal">
<name>
<surname>Thayer</surname>
<given-names>J. F.</given-names>
</name>
&
<name>
<surname>Levenson</surname>
<given-names>R.</given-names>
</name>
<article-title>Effects of music on psychophysiological responses to a stressful film</article-title>
.
<source>Psychomusicology.</source>
<volume>3</volume>
,
<fpage>44</fpage>
<lpage>54</lpage>
(
<year>1983</year>
).</mixed-citation>
</ref>
<ref id="b49">
<mixed-citation publication-type="journal">
<name>
<surname>Kalinak</surname>
<given-names>K.</given-names>
</name>
<article-title>Settling the score</article-title>
.
<source>Madison, WI</source>
(University of Wisconsin Press,
<year>1992</year>
).</mixed-citation>
</ref>
<ref id="b50">
<mixed-citation publication-type="journal">
<name>
<surname>Baumgartner</surname>
<given-names>T.</given-names>
</name>
,
<name>
<surname>Lutz</surname>
<given-names>K.</given-names>
</name>
,
<name>
<surname>Schmidt</surname>
<given-names>C. F.</given-names>
</name>
&
<name>
<surname>Jäncke</surname>
<given-names>L.</given-names>
</name>
<article-title>The emotional power of music: how music enhances the feeling of affective pictures,</article-title>
<source>Brain Res.</source>
<volume>1075</volume>
(1),
<fpage>151</fpage>
<lpage>64</lpage>
(
<year>2006</year>
).
<pub-id pub-id-type="pmid">16458860</pub-id>
</mixed-citation>
</ref>
<ref id="b51">
<mixed-citation publication-type="journal">
<name>
<surname>Gerdes</surname>
<given-names>A. B. M.</given-names>
</name>
,
<name>
<surname>Wieser</surname>
<given-names>M. J.</given-names>
</name>
,
<name>
<surname>Bublatzky</surname>
<given-names>F.</given-names>
</name>
,
<name>
<surname>Kusay</surname>
<given-names>A.</given-names>
</name>
,
<name>
<surname>Plichta</surname>
<given-names>M. M.</given-names>
</name>
&
<name>
<surname>Alpers</surname>
<given-names>G. W.</given-names>
</name>
<article-title>Emotional sounds modulate early neural processing of emotional pictures</article-title>
.
<source>Front. Psychol.</source>
<volume>4</volume>
,
<fpage>741</fpage>
(
<year>2013</year>
).
<pub-id pub-id-type="pmid">24151476</pub-id>
</mixed-citation>
</ref>
<ref id="b52">
<mixed-citation publication-type="journal">
<name>
<surname>Jomori</surname>
<given-names>I.</given-names>
</name>
,
<name>
<surname>Hoshiyama</surname>
<given-names>M.</given-names>
</name>
,
<name>
<surname>Uemura</surname>
<given-names>J.</given-names>
</name>
,
<name>
<surname>Nakagawa</surname>
<given-names>Y.</given-names>
</name>
,
<name>
<surname>Hoshino</surname>
<given-names>A.</given-names>
</name>
&
<name>
<surname>Iwamoto</surname>
<given-names>Y.</given-names>
</name>
<article-title>Effects of emotional music on visual processes in inferior temporal area</article-title>
.
<source>Cogn. Neurosci.</source>
<volume>4</volume>
(1),
<fpage>21</fpage>
<lpage>30</lpage>
(
<year>2013</year>
).
<pub-id pub-id-type="pmid">24073696</pub-id>
</mixed-citation>
</ref>
<ref id="b53">
<mixed-citation publication-type="journal">
<name>
<surname>Hanser</surname>
<given-names>W. E.</given-names>
</name>
,
<name>
<surname>Mark</surname>
<given-names>R. E.</given-names>
</name>
,
<name>
<surname>Zijlstra</surname>
<given-names>W. P.</given-names>
</name>
&
<name>
<surname>Vingerhoets</surname>
<given-names>Ad J. J. M.</given-names>
</name>
<article-title>The effects of background music on the evaluation of crying faces</article-title>
.
<source>Psychol. Music.</source>
<volume>43</volume>
,
<fpage>75</fpage>
<lpage>85</lpage>
(
<year>2015</year>
).</mixed-citation>
</ref>
<ref id="b54">
<mixed-citation publication-type="journal">
<name>
<surname>Jolij</surname>
<given-names>J.</given-names>
</name>
&
<name>
<surname>Meurs</surname>
<given-names>M.</given-names>
</name>
<article-title>Music alters visual perception</article-title>
.
<source>PLoS One.</source>
<volume>21</volume>
, 6(4),
<fpage>e18861</fpage>
(
<year>2011</year>
).
<pub-id pub-id-type="pmid">21533041</pub-id>
</mixed-citation>
</ref>
<ref id="b55">
<mixed-citation publication-type="journal">
<name>
<surname>Rule</surname>
<given-names>N. O.</given-names>
</name>
,
<name>
<surname>Slepian</surname>
<given-names>M. L.</given-names>
</name>
&
<name>
<surname>Ambady</surname>
<given-names>N.</given-names>
</name>
<article-title>A Memory advantage for untrustworthy faces</article-title>
.
<source>Cognition.</source>
<volume>125</volume>
,
<fpage>207</fpage>
<lpage>218</lpage>
(
<year>2012</year>
).
<pub-id pub-id-type="pmid">22874071</pub-id>
</mixed-citation>
</ref>
<ref id="b56">
<mixed-citation publication-type="journal">
<name>
<surname>Bell</surname>
<given-names>R.</given-names>
</name>
&
<name>
<surname>Buchner</surname>
<given-names>A.</given-names>
</name>
<article-title>Valence modulates source memory for faces</article-title>
.
<source>Mem. Cognit.</source>
<volume>38</volume>
,
<fpage>29</fpage>
<lpage>41</lpage>
(
<year>2010</year>
).</mixed-citation>
</ref>
<ref id="b57">
<mixed-citation publication-type="journal">
<name>
<surname>Johansson</surname>
<given-names>M.</given-names>
</name>
,
<name>
<surname>Mecklinger</surname>
<given-names>A.</given-names>
</name>
&
<name>
<surname>Treese</surname>
<given-names>A. C.</given-names>
</name>
<article-title>Recognition memory for emotional and neutral faces: an event-related potential study</article-title>
.
<source>J. Cogn. Neurosci.</source>
<volume>16</volume>
(10),
<fpage>1840</fpage>
<lpage>1853</lpage>
(
<year>2004</year>
).
<pub-id pub-id-type="pmid">15701233</pub-id>
</mixed-citation>
</ref>
<ref id="b58">
<mixed-citation publication-type="journal">
<name>
<surname>Keightley</surname>
<given-names>M. L.</given-names>
</name>
,
<name>
<surname>Chiew</surname>
<given-names>K. S.</given-names>
</name>
,
<name>
<surname>Anderson</surname>
<given-names>J. A. E.</given-names>
</name>
&
<name>
<surname>Grady</surname>
<given-names>C. L.</given-names>
</name>
<article-title>Neural correlates of recognition memory for emotional faces and scenes</article-title>
.
<source>Soc. Cogn. Affect. Neurosci.</source>
<volume>6</volume>
(1),
<fpage>24</fpage>
<lpage>37</lpage>
(
<year>2011</year>
).
<pub-id pub-id-type="pmid">20194514</pub-id>
</mixed-citation>
</ref>
<ref id="b59">
<mixed-citation publication-type="journal">
<name>
<surname>Chen</surname>
<given-names>H. J.</given-names>
</name>
,
<name>
<surname>Chen</surname>
<given-names>T. Y.</given-names>
</name>
,
<name>
<surname>Huang</surname>
<given-names>C. Y.</given-names>
</name>
,
<name>
<surname>Hsieh</surname>
<given-names>Y. M.</given-names>
</name>
&
<name>
<surname>Lai</surname>
<given-names>H. L.</given-names>
</name>
<article-title>Effects of music on psychophysiological responses and opioid dosage in patients undergoing total knee replacement surgery</article-title>
.
<source>Jpn. J. Nurs. Sci.</source>
, Mar 9.
<pub-id pub-id-type="doi">10.1111/jjns.12070</pub-id>
(
<year>2015</year>
).</mixed-citation>
</ref>
<ref id="b60">
<mixed-citation publication-type="journal">
<name>
<surname>Tan</surname>
<given-names>Y. Z.</given-names>
</name>
,
<name>
<surname>Ozdemir</surname>
<given-names>S.</given-names>
</name>
,
<name>
<surname>Temiz</surname>
<given-names>A.</given-names>
</name>
&
<name>
<surname>Celik</surname>
<given-names>F.</given-names>
</name>
<article-title>The effect of relaxing music on heart rate and heart rate variability during ECG GATED-myocardial perfusion scintigraphy</article-title>
.
<source>Complement. Ther. Clin. Pract.</source>
, Feb 14. pii: S1744-3881(15)00002-X (
<year>2015</year>
).</mixed-citation>
</ref>
<ref id="b61">
<mixed-citation publication-type="journal">
<name>
<surname>Tsuchiya</surname>
<given-names>M.</given-names>
</name>
,
<name>
<surname>Asada</surname>
<given-names>A.</given-names>
</name>
,
<name>
<surname>Ryo</surname>
<given-names>K.</given-names>
</name>
,
<name>
<surname>Noda</surname>
<given-names>K.</given-names>
</name>
,
<name>
<surname>Hashino</surname>
<given-names>T.</given-names>
</name>
,
<name>
<surname>Sato</surname>
<given-names>Y.</given-names>
</name>
,
<name>
<surname>Sato</surname>
<given-names>E. F.</given-names>
</name>
&
<name>
<surname>Inoue</surname>
<given-names>M.</given-names>
</name>
<article-title>Relaxing intraoperative natural sound blunts haemodynamic change at the emergence from propofol general anaesthesia and increases the acceptability of anaesthesia to the patient</article-title>
.
<source>Acta Anaesthesiol. Scand.</source>
<volume>47</volume>
(8),
<fpage>939</fpage>
<lpage>943</lpage>
(
<year>2003</year>
).
<pub-id pub-id-type="pmid">12904184</pub-id>
</mixed-citation>
</ref>
<ref id="b62">
<mixed-citation publication-type="journal">
<name>
<surname>Rauscher</surname>
<given-names>F. H.</given-names>
</name>
,
<name>
<surname>Shaw</surname>
<given-names>G. L.</given-names>
</name>
&
<name>
<surname>Ky</surname>
<given-names>K. N.</given-names>
</name>
<article-title>Music and spatial task performance</article-title>
.
<source>Nature</source>
<volume>365</volume>
,
<fpage>611</fpage>
(
<year>1993</year>
).
<pub-id pub-id-type="pmid">8413624</pub-id>
</mixed-citation>
</ref>
<ref id="b63">
<mixed-citation publication-type="journal">
<name>
<surname>Quarto</surname>
<given-names>T.</given-names>
</name>
,
<name>
<surname>Blasi</surname>
<given-names>G.</given-names>
</name>
,
<name>
<surname>Pallesen</surname>
<given-names>K. J.</given-names>
</name>
,
<name>
<surname>Bertolino</surname>
<given-names>A.</given-names>
</name>
&
<name>
<surname>Brattico</surname>
<given-names>E.</given-names>
</name>
<article-title>Implicit Processing of Visual Emotions Is Affected by Sound-Induced Affective States and Individual Affective Traits</article-title>
.
<source>PLoS ONE</source>
<volume>9</volume>
(7),
<fpage>e103278</fpage>
(
<year>2014</year>
).
<pub-id pub-id-type="pmid">25072162</pub-id>
</mixed-citation>
</ref>
<ref id="b64">
<mixed-citation publication-type="journal">
<name>
<surname>Etzel</surname>
<given-names>J. A.</given-names>
</name>
,
<name>
<surname>Johnsen</surname>
<given-names>E. L.</given-names>
</name>
,
<name>
<surname>Dickerson</surname>
<given-names>J.</given-names>
</name>
,
<name>
<surname>Tranel</surname>
<given-names>D.</given-names>
</name>
&
<name>
<surname>Adolphs</surname>
<given-names>R.</given-names>
</name>
<article-title>Cardiovascular and respiratory responses during musical mood induction</article-title>
.
<source>Int. J. Psychophysiol.</source>
<volume>61</volume>
(1),
<fpage>57</fpage>
<lpage>69</lpage>
(
<year>2006</year>
).
<pub-id pub-id-type="pmid">16460823</pub-id>
</mixed-citation>
</ref>
<ref id="b65">
<mixed-citation publication-type="journal">
<name>
<surname>Khalfa</surname>
<given-names>S.</given-names>
</name>
,
<name>
<surname>Roy</surname>
<given-names>M.</given-names>
</name>
,
<name>
<surname>Rainville</surname>
<given-names>P.</given-names>
</name>
,
<name>
<surname>Dalla Bella</surname>
<given-names>S.</given-names>
</name>
&
<name>
<surname>Peretz</surname>
<given-names>I.</given-names>
</name>
<article-title>Role of tempo entrainment in psychophysiological differentiation of happy and sad music?</article-title>
<source>Int. J. Psychophysiol.</source>
<volume>68</volume>
(1),
<fpage>17</fpage>
<lpage>26</lpage>
(
<year>2008</year>
).
<pub-id pub-id-type="pmid">18234381</pub-id>
</mixed-citation>
</ref>
<ref id="b66">
<mixed-citation publication-type="journal">
<name>
<surname>Witvliet</surname>
<given-names>C. V. O.</given-names>
</name>
&
<name>
<surname>Vrana</surname>
<given-names>S. R.</given-names>
</name>
<article-title>Play it again Sam: Repeated exposure to emotionally evocative music polarises liking and smiling responses, and influences other affective reports, facial EMG, and heart rate</article-title>
.
<source>Cogn. Emot.</source>
<volume>21</volume>
(1),
<fpage>1</fpage>
<lpage>23</lpage>
(
<year>2006</year>
).</mixed-citation>
</ref>
</ref-list>
<fn-group>
<fn>
<p>
<bold>Author Contributions</bold>
A.M.P.: Conception and design; analysis and interpretation of data; wrote the paper. V.L.N., L.A.A., F.D., M.G. and M.G.: Acquisition and analysis of data. A.Z.: Contributed useful comments on an earlier version of the manuscript; revised the article.</p>
</fn>
</fn-group>
</back>
<floats-group>
<fig id="f1">
<label>Figure 1</label>
<caption>
<title>Schematic of the experimental paradigm, which included two sessions of face encoding and memory tasks.</title>
<p>AMP and AZ contributed to the drawing of this figure.</p>
</caption>
<graphic xlink:href="srep15219-f1"></graphic>
</fig>
<fig id="f2">
<label>Figure 2</label>
<caption>
<title>Hit percentages in the memory test as a function of auditory background during the study session.</title>
<p>Nonverbal episodic memory recall was enhanced when study occurred either in silence or in the presence of emotionally touching music. Although women exhibited better performance on the test, especially while listening to music, the difference was not significant.</p>
</caption>
<graphic xlink:href="srep15219-f2"></graphic>
</fig>
<fig id="f3">
<label>Figure 3</label>
<caption>
<title>Hit percentages in the memory test.</title>
<p>The data indicate that it was much easier for the participants to correctly identify new faces than to recognize old faces. Additionally, the participants were very successful at the task, despite the large number of faces that had to be remembered.</p>
</caption>
<graphic xlink:href="srep15219-f3"></graphic>
</fig>
<fig id="f4">
<label>Figure 4</label>
<caption>
<title>Mean RTs for correctly recognized old faces as a function of the auditory background present during the study session.</title>
<p>Nonverbal episodic memory recall was significantly faster when study occurred in silence or while listening to emotionally touching music.</p>
</caption>
<graphic xlink:href="srep15219-f4"></graphic>
</fig>
<fig id="f5">
<label>Figure 5</label>
<caption>
<title>Mean heart rate (beats per minute) measurements recorded during different auditory background conditions.</title>
<p>Participants exhibited significantly faster heart rates while listening to music (especially emotionally touching music) than while listening to rain sounds or silence. The intensity of the auditory background (in dB) was matched across conditions, so the changes in heart rate may have reflected increased cognitive and emotional processing.</p>
</caption>
<graphic xlink:href="srep15219-f5"></graphic>
</fig>
<fig id="f6">
<label>Figure 6</label>
<caption>
<title>Diastolic (minimal) blood pressure (diaBDP) values recorded as a function of auditory background.</title>
<p>Music listening tended to increase diaBDP (p < 0.08).</p>
</caption>
<graphic xlink:href="srep15219-f6"></graphic>
</fig>
<fig id="f7">
<label>Figure 7</label>
<caption>
<title>Systolic (maximal) blood pressure (sysBDP) values recorded as a function of auditory background.</title>
<p>No difference was found between listening to music or rain sounds and silence.</p>
</caption>
<graphic xlink:href="srep15219-f7"></graphic>
</fig>
</floats-group>
</pmc>
<affiliations>
<list>
<country>
<li>Italie</li>
</country>
</list>
<tree>
<noCountry>
<name sortKey="Zani, Alberto" sort="Zani, Alberto" uniqKey="Zani A" first="Alberto" last="Zani">Alberto Zani</name>
</noCountry>
<country name="Italie">
<noRegion>
<name sortKey="Mado Proverbio, C A Alice" sort="Mado Proverbio, C A Alice" uniqKey="Mado Proverbio C" first="C. A. Alice" last="Mado Proverbio">C. A. Alice Mado Proverbio</name>
</noRegion>
<name sortKey="Alessandra Arcari, Laura" sort="Alessandra Arcari, Laura" uniqKey="Alessandra Arcari L" first="Laura" last="Alessandra Arcari">Laura Alessandra Arcari</name>
<name sortKey="De Benedetto, Francesco" sort="De Benedetto, Francesco" uniqKey="De Benedetto F" first="Francesco" last="De Benedetto">Francesco De Benedetto</name>
<name sortKey="Gazzola, Martina" sort="Gazzola, Martina" uniqKey="Gazzola M" first="Martina" last="Gazzola">Martina Gazzola</name>
<name sortKey="Guardamagna, Matteo" sort="Guardamagna, Matteo" uniqKey="Guardamagna M" first="Matteo" last="Guardamagna">Matteo Guardamagna</name>
<name sortKey="Lozano Nasi, Valentina" sort="Lozano Nasi, Valentina" uniqKey="Lozano Nasi V" first="Valentina" last="Lozano Nasi">Valentina Lozano Nasi</name>
</country>
</tree>
</affiliations>
</record>

To work with this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Wicri/Musique/explor/OperaV1/Data/Pmc/Checkpoint
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000026 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd -nk 000026 | SxmlIndent | more

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Wicri/Musique
   |area=    OperaV1
   |flux=    Pmc
   |étape=   Checkpoint
   |type=    RBID
   |clé=     PMC:4606564
   |texte=   The effect of background music on episodic memory and autonomic responses: listening to emotionally touching music enhances facial memory capacity
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/RBID.i   -Sk "pubmed:26469712" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd   \
       | NlmPubMed2Wicri -a OperaV1 
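
A small shell wrapper can make this pipeline reusable for other records in the same checkpoint. This is only a minimal sketch that reuses the tools and options already shown above (HfdIndexSelect, HfdSelect, NlmPubMed2Wicri); the script name pubmed2wiki.sh and the PMID argument handling are illustrative assumptions.

#!/bin/sh
# pubmed2wiki.sh (hypothetical name): generate a Wicri wiki page for one PubMed id.
# Assumes the Dilib tools are on PATH and EXPLOR_AREA is set, as in the commands above.
PMID=$1

# Look up the record by its "pubmed:" key in the RBID index, pull the matching
# bibliographic entry, then convert it to a wiki page for the OperaV1 area.
HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/RBID.i   -Sk "pubmed:$PMID" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd   \
       | NlmPubMed2Wicri -a OperaV1

Example call for the record on this page (assumption): sh pubmed2wiki.sh 26469712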

Wicri

This area was generated with Dilib version V0.6.21.
Data generation: Thu Apr 14 14:59:05 2016. Site generation: Thu Jan 4 23:09:23 2024